Archive

Posts Tagged ‘GPUs’

OS X 10.8 (Mountain Lion) Won’t Support Some 64-bit Macs With Older GPUs

July 11th, 2012 07:09 admin View Comments

Desktops (Apple)

MojoKid writes “Apple is pitching Mac OS X 10.8 (Mountain Lion) as the cat’s meow, with over 200 new features ‘that add up to an amazing Mac experience’ — but that only applies if you’re rocking a compatible system. Some older Mac models, including ones that are 64-bit capable, aren’t invited to the Mountain Lion party, and it’s likely because of the GPU. It’s being reported (unofficially) that an updated graphics architecture intended to smooth out performance in OS X’s graphics subsystem is the underlying issue. It’s no coincidence, then, that the unsupported GPUs happen to be ones that were fairly common back before 64-bit support became mainstream.”

Source: OS X 10.8 (Mountain Lion) Won’t Support Some 64-bit Macs With Older GPUs

NVIDIA Unveils Dual-GPU Powered GeForce GTX 690

April 29th, 2012 04:53 admin View Comments

Graphics

MojoKid writes “Today at the GeForce LAN taking place in Shanghai, NVIDIA’s CEO Jen-Hsun Huang unveiled the company’s upcoming dual-GPU-powered flagship graphics card, the GeForce GTX 690. The GeForce GTX 690 will feature a pair of fully functional GK104 “Kepler” GPUs. If you recall, the GK104 is the chip powering the GeForce GTX 680, which debuted just last month. On the upcoming GeForce GTX 690, each of the GK104 GPUs will be paired with its own 2GB of memory (4GB total) via a 256-bit interface, resulting in what is essentially GeForce GTX 680 SLI on a single card. The GPUs on the GTX 690 will be linked to each other via a PCI Express 3.0 switch from PLX, with a full 16 lanes of electrical connectivity between each GPU and the PEG slot. Previous dual-GPU powered cards from NVIDIA relied on the company’s own NF200, but that chip lacks support for PCI Express 3.0, so NVIDIA opted for a third-party solution this time around.”

Source: NVIDIA Unveils Dual-GPU Powered GeForce GTX 690

New Chrome Beta Improves 2D & 3D Graphics for Older Systems

February 9th, 2012 02:40 admin View Comments

The next version of Chrome will help older computers catch up with rapidly accelerating Web-based graphics. The upcoming Chrome release will improve the performance of hardware-accelerated 2D Canvas animations, which power many Web-based games and other graphically intensive sites.

It will also let systems with older GPUs render WebGL 3D content in software via SwiftShader, rather than relying on GPU hardware that can’t handle it. It won’t be as fast as true hardware acceleration, but users with older systems will still get more 3D content than they currently can. The new Chrome beta with these features is available today.


Many of Google’s recent browser updates have pushed the envelope on hardware performance. In October, for example, Google released 3D views in Google Maps that use WebGL, so lower-end GPUs can’t display them; even some relatively new laptops can’t handle WebGL. The new SwiftShader capability in Chrome will bring some of these 3D graphics to less capable systems.

Other recent Chrome releases added advanced audio APIs and the ability to run native code inside the browser, while others focused on speeding up page loads by pre-caching pages. Chrome engineers are even building new image formats to push the Web forward. These envelope-pushing updates have been arriving quickly, so the next version of Chrome gives older computers a chance to catch up.

If you feel like testing Google’s browser capabilities as soon as they come out of the shop, jump into the Chrome beta channel.

Source: New Chrome Beta Improves 2D & 3D Graphics for Older Systems

Book Review: OpenCL Programming Guide

January 20th, 2012 01:28 admin View Comments


asgard4 writes “In recent years GPUs have become powerful computing devices whose power is not only used to generate pretty graphics on screen but also to perform heavy computation jobs that were exclusively reserved for high-performance supercomputers in the past. Considering the vast diversity and rapid development cycle of GPUs from different vendors, it is not surprising that the ecosystem of programming environments has flourished fairly quickly as well, with multiple vendors, such as NVIDIA, AMD, and Microsoft, all coming up with their own solutions for programming GPUs for general-purpose computing (commonly abbreviated GPGPU). With OpenCL (short for Open Computing Language), the Khronos Group provides an industry standard for programming heavily parallel, heterogeneous systems, in which so-called kernels are written in a C-like language. The OpenCL Programming Guide gives you all the necessary knowledge to get started developing high-performance, parallel applications for such systems with OpenCL 1.1.” Keep reading for the rest of asgard4’s review.

OpenCL Programming Guide
author Aaftab Munshi, Benedict R. Gaster, Timothy G. Mattson, James Fung, Dan Ginsburg
pages 603
publisher Addison-Wesley Pearson Education
rating 9/10
reviewer asgard4
ISBN 0321749642
summary A solid introduction to programming with OpenCL.
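
As the review notes, OpenCL programs consist of kernels written in a C-like language plus host code that finds a device, compiles the kernel at run time, and launches it. The listing below is a minimal, illustrative sketch of that workflow, a vector addition on the first GPU found. It is not taken from the book; the kernel name vec_add is a made-up example and error checking is omitted to keep it short. Build against an OpenCL SDK, e.g. gcc vecadd.c -lOpenCL.

/* Minimal OpenCL 1.1 host program: add two vectors on the GPU.
   Illustrative sketch only; all error checking is omitted for brevity. */
#include <stdio.h>
#include <CL/cl.h>

/* The kernel is ordinary OpenCL C, compiled by the driver at run time. */
static const char *kernel_src =
    "__kernel void vec_add(__global const float *a,              \n"
    "                      __global const float *b,              \n"
    "                      __global float *c)                    \n"
    "{                                                           \n"
    "    int i = get_global_id(0); /* one work-item per element */\n"
    "    c[i] = a[i] + b[i];                                     \n"
    "}                                                           \n";

int main(void)
{
    enum { N = 1024 };
    float a[N], b[N], c[N];
    for (int i = 0; i < N; i++) { a[i] = (float)i; b[i] = 2.0f * i; }

    /* Pick the first platform and the first GPU device on it. */
    cl_platform_id platform;
    cl_device_id device;
    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
    cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, NULL);

    /* Compile the kernel source and create device-side buffers. */
    cl_program prog = clCreateProgramWithSource(ctx, 1, &kernel_src, NULL, NULL);
    clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
    cl_kernel kernel = clCreateKernel(prog, "vec_add", NULL);

    cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, sizeof(a), a, NULL);
    cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, sizeof(b), b, NULL);
    cl_mem dc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof(c), NULL, NULL);

    clSetKernelArg(kernel, 0, sizeof(cl_mem), &da);
    clSetKernelArg(kernel, 1, sizeof(cl_mem), &db);
    clSetKernelArg(kernel, 2, sizeof(cl_mem), &dc);

    /* Launch N work-items; the device runs them in parallel. */
    size_t global = N;
    clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(queue, dc, CL_TRUE, 0, sizeof(c), c, 0, NULL, NULL);

    printf("c[10] = %.1f (expected 30.0)\n", c[10]);

    clReleaseMemObject(da); clReleaseMemObject(db); clReleaseMemObject(dc);
    clReleaseKernel(kernel); clReleaseProgram(prog);
    clReleaseCommandQueue(queue); clReleaseContext(ctx);
    return 0;
}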

Source: Book Review: OpenCL Programming Guide

Chinese Lab Speeds Through Genome Processing With GPUs

January 8th, 2012 01:54 admin View Comments

China

Eric Smalley writes “The world’s largest genome sequencing center once needed four days to analyze data describing a human genome. Now it needs just six hours. The trick is servers built with graphics chips — the sort of processors that were originally designed to draw images on your personal computer. They’re called graphics processing units, or GPUs — a term coined by chip giant Nvidia. This fall, BGI — a mega lab headquartered in Shenzhen, China — switched to servers that use GPUs built by Nvidia, and this slashed its genome analysis time by more than an order of magnitude.”

Source: Chinese Lab Speeds Through Genome Processing With GPUs

WPA/WPA2 Cracking With CPUs, GPUs, and the Cloud

August 15th, 2011 08:33 admin View Comments

Security

wintertargeter writes “Yeah, it’s another article on security, but this time we finally get a complete picture. Tom’s Hardware looks at WPA/WPA2 brute-force cracking with CPUs, GPUs, and Amazon’s Nvidia Tesla-based EC2 cloud servers. Verdict? WPA/WPA2 is pretty damn secure. Now to wait for a side-channel attack. Sigh….”
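
What those cracking benchmarks really measure is the cost of WPA/WPA2’s key stretching: each candidate passphrase has to be run through PBKDF2-HMAC-SHA1, with the network’s SSID as salt and 4096 iterations, to produce the 256-bit Pairwise Master Key before it can even be tested against a captured handshake. The sketch below shows that per-guess work using OpenSSL; the SSID and passphrase are made-up placeholders, and a real cracker would go on to derive the transient keys and compare them against handshake data.

/* Per-guess cost of a WPA/WPA2-PSK brute-force attack: one PBKDF2 run
   (HMAC-SHA1, SSID as salt, 4096 iterations) per candidate passphrase.
   Requires OpenSSL; build with: gcc pmk.c -lcrypto */
#include <stdio.h>
#include <string.h>
#include <openssl/evp.h>

int main(void)
{
    const char *ssid  = "ExampleSSID";   /* hypothetical network name   */
    const char *guess = "password123";   /* one candidate passphrase    */
    unsigned char pmk[32];               /* 256-bit Pairwise Master Key */

    PKCS5_PBKDF2_HMAC_SHA1(guess, (int)strlen(guess),
                           (const unsigned char *)ssid, (int)strlen(ssid),
                           4096, (int)sizeof(pmk), pmk);

    /* A GPU cracker simply runs many of these derivations in parallel,
       then tests each resulting PMK against the captured 4-way handshake. */
    for (size_t i = 0; i < sizeof(pmk); i++)
        printf("%02x", pmk[i]);
    printf("\n");
    return 0;
}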

Source: WPA/WPA2 Cracking With CPUs, GPUs, and the Cloud

Carmack On ‘Infinite Detail,’ Integrated GPUs, and Future Gaming Tech

August 12th, 2011 08:20 admin View Comments

Graphics

Vigile writes “John Carmack sat down for an interview during Quakecon 2011 to talk about the future of technology for gaming. He shared his thoughts on the GPU hardware race (hardware doesn’t matter but drivers are really important), integrated graphics solutions on Sandy Bridge and Llano (with a future of shared address spaces they may outperform discrete GPUs) and of course some thoughts on ‘infinite detail’ engines (uninspired content viewed at the molecular level is still uninspired content). Carmack does mention a new-found interest in ray tracing, and how it will ‘eventually win’ the battle for rendering in the long run.”

Source: Carmack On ‘Infinite Detail,’ Integrated GPUs, and Future Gaming Tech

Bitcoin Mining Tests On 16 NVIDIA and AMD GPUs

July 13th, 2011 07:00 admin View Comments

Bitcoin

Vigile writes “For users who have followed the process of bitcoin mining, the obvious tool for the job has been the GPU. Miners have been buying up graphics cards during sales across the web, but which GPUs offer the best dollar efficiency, power efficiency, and quickest payoff for the bitcoin currency? A series of tests over at PC Perspective goes through 16 different GPU configurations, ranging from older high-end cards to modern low-cost options, and even a $1700+ collection with multiple dual-GPU cards installed. The article gives details on how the mining programs work, why GPUs are inherently faster than CPUs for this workload, and why AMD seems to be so much faster than NVIDIA.”
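
For context on why the GPU is such a natural fit: bitcoin mining boils down to double SHA-256 over an 80-byte block header, repeated with a different nonce each time until the hash falls below the network’s difficulty target, and every nonce can be tried independently. The CPU-only sketch below (using OpenSSL, with a placeholder header and a deliberately simplified difficulty check) shows the inner loop that GPU miners replicate across thousands of shader cores. Each round of SHA-256 leans heavily on 32-bit integer additions and rotates, which is part of why cards with more simple integer ALUs tend to dominate this workload.

/* Sketch of a bitcoin miner's inner loop: double SHA-256 over an 80-byte
   block header while sweeping the 32-bit nonce. The header contents and
   the difficulty test are simplified placeholders, not real network data.
   Requires OpenSSL; build with: gcc miner.c -lcrypto */
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <openssl/sha.h>

int main(void)
{
    unsigned char header[80] = {0};            /* placeholder block header */
    unsigned char hash[SHA256_DIGEST_LENGTH];

    for (uint32_t nonce = 0; nonce < 10000000u; nonce++) {
        /* The nonce occupies the last four bytes of the header. */
        memcpy(header + 76, &nonce, sizeof(nonce));

        /* Bitcoin applies SHA-256 twice. */
        SHA256(header, sizeof(header), hash);
        SHA256(hash, SHA256_DIGEST_LENGTH, hash);

        /* Simplified check: real miners compare the full 256-bit hash
           against the network target; here we just require a few zero
           bytes at the high end. */
        if (hash[31] == 0 && hash[30] == 0 && hash[29] == 0) {
            printf("candidate nonce: %u\n", nonce);
            return 0;
        }
    }
    printf("no candidate found in this nonce range\n");
    return 0;
}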

Source: Bitcoin Mining Tests On 16 NVIDIA and AMD GPUs

GPU-Powered Planetarium Renders 64MP Projection

July 12th, 2011 07:01 admin View Comments

Graphics

MojoKid writes “The Adler Planetarium has finished a major two-year upgrade project that replaced the facility’s forty-year-old Zeiss Mark VI projector with a ‘Digital Starball’ system designed by Global Immersion Ltd. The new digital system is powered by an array of NVIDIA Quadro GPUs. The specs behind the system are impressive. The 71-foot dome of the Grainger Sky Theater now contains a score of military-grade projectors with a combined 8k x 8k resolution. The final 64 megapixel image is generated by an array of 42 NVIDIA Quadro GPUs and offers an unprecedented degree of real-time modeling horsepower. The planetarium’s model of the universe was created in part from high-definition photos captured around the world and via the Hubble telescope.”

Source: GPU-Powered Planetarium Renders 64MP Projection

Brute-Force Password Cracking With GPUs

June 20th, 2011 06:25 admin View Comments

Encryption

An anonymous reader writes “We all know that brute-force attacks with a CPU are slow, but GPUs are another story. Tom’s Hardware has an interesting article up on WinZip and WinRAR encryption strength, where they attempt to crack passwords with Nvidia and AMD graphics cards. Some of their results are really fast — in the billions of passwords per second — and that’s only with two GTX 570s!”
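
To put “billions of passwords per second” in perspective, the short sketch below runs the arithmetic for an exhaustive search. The alphabet size, password length, and guess rate are assumptions chosen for illustration, not figures from the article; with those assumptions, an eight-character alphanumeric password falls in roughly a day of worst-case searching. Lengthening the password by just two characters multiplies the keyspace by 62², pushing the same search into years.

/* Back-of-the-envelope: how long an exhaustive search takes at a given
   guess rate. All three inputs are illustrative assumptions.
   Build with: gcc estimate.c -lm */
#include <stdio.h>
#include <math.h>

int main(void)
{
    double alphabet = 62.0;   /* a-z, A-Z, 0-9              */
    double length   = 8.0;    /* eight-character password   */
    double rate     = 2e9;    /* assumed guesses per second */

    double keyspace = pow(alphabet, length);   /* ~2.2e14 candidates */
    double seconds  = keyspace / rate;

    printf("keyspace: %.3e candidates\n", keyspace);
    printf("worst case: %.1f hours (%.1f days)\n",
           seconds / 3600.0, seconds / 86400.0);
    return 0;
}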

Source: Brute-Force Password Cracking With GPUs
