writes “Apple is pitching OS X 10.8 (Mountain Lion) as the cat’s meow, with over 200 new features ‘that add up to an amazing Mac experience’ — but that only applies if you’re rocking a compatible system. Some older Mac models, including ones that are 64-bit capable, aren’t invited to the Mountain Lion party, and the GPU is the likely culprit. It’s being reported (unofficially) that an updated graphics architecture, intended to smooth out performance in OS X’s graphics subsystem, is the underlying issue. It’s no coincidence, then, that the unsupported GPUs happen to be ones that were fairly common back before 64-bit support became mainstream.”
Source: OS X 10.8 (Mountain Lion) Won’t Support Some 64-bit Macs With Older GPUs
writes “Today at the GeForce LAN taking place in Shanghai, NVIDIA’s CEO Jen-Hsun Huang unveiled the company’s upcoming dual-GPU flagship graphics card, the GeForce GTX 690. The GeForce GTX 690 will feature a pair of fully functional GK104 ‘Kepler’ GPUs. If you recall, the GK104 is the chip powering the GeForce GTX 680, which debuted just last month. On the upcoming GeForce GTX 690, each GK104 GPU will be paired with its own 2GB of memory (4GB total) via a 256-bit interface, resulting in what is essentially GeForce GTX 680 SLI on a single card. The GPUs on the GTX 690 will be linked to each other via a PCI Express 3.0 switch from PLX, with a full 16 lanes of electrical connectivity between each GPU and the PEG slot. Previous dual-GPU cards from NVIDIA relied on the company’s own NF200 bridge chip, but that chip lacks support for PCI Express 3.0, so NVIDIA opted for a third-party solution this time around.”
Source: NVIDIA Unveils Dual-GPU Powered GeForce GTX 690
The next version of Chrome will help older computers catch up with rapidly accelerating Web-based graphics. The upcoming Chrome release will improve the performance of hardware-accelerated 2D animation using Canvas, which powers many Web-based games and other graphically intensive sites.
It will also let systems whose older GPUs can’t handle WebGL fall back to SwiftShader, a software renderer, for 3D graphics. It won’t look quite as good, but users with older systems will still get more 3D content than they currently can. The new Chrome beta with these features is available today.
Many of Google’s recent browser updates have pushed the envelope on hardware performance. For example, in October, Google released 3D views in Google Maps that use WebGL, so lower-end GPUs can’t display them. Even some relatively new laptops can’t handle WebGL. The new SwiftShader capability in Chrome will bring some of these 3D graphics to less capable systems.
Other recent Chrome releases added advanced audio APIs and the ability to run native code inside the browser; others focused on speeding up page loads by pre-caching pages. Chrome engineers are even building new image formats to push the Web forward. These cutting-edge updates have been moving quickly for a while, so the next version of Chrome will let older computers catch up.
If you feel like testing Google’s browser capabilities as soon as they come out of the shop, jump in the Chrome beta channel.
Source: New Chrome Beta Improves 2D & 3D Graphics for Older Systems
asgard4 writes “In recent years GPUs have become powerful computing devices whose power is used not only to generate pretty graphics on screen but also to perform heavy computation jobs that were exclusively reserved for high-performance supercomputers in the past. Considering the vast diversity and rapid development cycle of GPUs from different vendors, it is not surprising that the ecosystem of programming environments has flourished fairly quickly as well, with multiple vendors, such as NVIDIA, AMD, and Microsoft, all coming up with their own solutions for programming GPUs for more general-purpose computing (also abbreviated GPGPU) applications. With OpenCL (short for Open Computing Language), the Khronos Group provides an industry standard for programming heavily parallel, heterogeneous systems using so-called kernels written in a C-like language. The OpenCL Programming Guide gives you all the necessary knowledge to get started developing high-performing, parallel applications for such systems with OpenCL 1.1.”
Keep reading for the rest of asgard4’s review.
OpenCL Programming Guide
Authors: Aaftab Munshi, Benedict R. Gaster, Timothy G. Mattson, James Fung, Dan Ginsburg
Publisher: Addison-Wesley (Pearson Education)
Summary: A solid introduction to programming with OpenCL.
Source: Book Review: OpenCL Programming Guide
writes “The world’s largest genome sequencing center once needed four days to analyze data describing a human genome. Now it needs just six hours. The trick is servers built with graphics chips — the sort of processors that were originally designed to draw images on your personal computer. They’re called graphics processing units, or GPUs — a term coined by chip giant Nvidia. This fall, BGI — a mega lab headquartered in Shenzhen, China — switched to servers that use GPUs built by Nvidia, and this slashed its genome analysis time by more than an order of magnitude.”
Source: Chinese Lab Speeds Through Genome Processing With GPUs
wintertargeter writes “Yeah, it’s another article on security, but this time we finally get a complete picture. Tom’s Hardware looks at WPA/WPA2 brute-force cracking with CPUs, GPUs, and Amazon’s Nvidia Tesla-based EC2 cloud servers. Verdict? WPA/WPA2 is pretty damn secure. Now to wait for a side-channel attack. Sigh….”
Source: WPA/WPA2 Cracking With CPUs, GPUs, and the Cloud
Vigile writes “John Carmack sat down for an interview during QuakeCon 2011 to talk about the future of technology for gaming. He shared his thoughts on the GPU hardware race (hardware doesn’t matter but drivers are really important), integrated graphics solutions on Sandy Bridge and Llano (with a future of shared address spaces they may outperform discrete GPUs) and of course some thoughts on ‘infinite detail’ engines (uninspired content viewed at the molecular level is still uninspired content). Carmack does mention a new-found interest in ray tracing, and how it will ‘eventually win’ the battle for rendering in the long run.”
Source: Carmack On ‘Infinite Detail,’ Integrated GPUs, and Future Gaming Tech
Vigile writes “For users who have been following bitcoin mining, the obvious tool for the job has been the GPU. Miners have been buying up graphics cards during sales across the web, but which GPUs offer the most dollar-efficient, most power-efficient mining and the quickest payoff for the bitcoin currency? A series of tests over at PC Perspective runs through 16 different GPU configurations, from older high-end cards through modern low-cost options, and even a $1700+ collection with multiple dual-GPU cards installed. The article gives details on how the mining programs work, why GPUs are inherently faster than CPUs at this task, and why AMD seems to be so much faster than NVIDIA.”
Source: Bitcoin Mining Tests On 16 NVIDIA and AMD GPUs
writes “The Adler Planetarium has finished a major two-year upgrade project that replaced the facility’s forty-year-old Zeiss Mark VI projector with a ‘Digital Starball’ system designed by Global Immersion Ltd. The new digital system is powered by an array of NVIDIA Quadro GPUs, and the specs behind it are impressive. The 71-foot dome of the Grainger Sky Theater now contains a score of military-grade projectors producing a combined 8k×8k image. The final 64-megapixel image is generated by an array of 42 NVIDIA Quadro GPUs and offers an unprecedented degree of real-time modeling horsepower. The planetarium’s model of the universe was created in part from high-definition photos captured around the world and via the Hubble telescope.”
Source: GPU-Powered Planetarium Renders 64MP Projection
An anonymous reader writes “We all know that brute-force attacks with a CPU are slow, but GPUs are another story. Tom’s Hardware has an interesting article up on WinZip and WinRAR encryption strength, in which they attempt to crack passwords with Nvidia and AMD graphics cards. Some of their results are really fast — in the billions of passwords per second — and that’s only with two GTX 570s!”
Source: Brute-Force Password Cracking With GPUs