New Nvidia GPU architecture achieves 'Holy Grail' of computer graphics

August 14, 2018 // By Rich Pell
Chipmaker Nvidia (Santa Clara, CA) has announced a new computer graphics chip architecture that it says makes real-time ray tracing possible and is "the greatest leap since the invention of the CUDA GPU in 2006."

The new Turing GPU architecture features new RT Cores to accelerate ray tracing and new Tensor Cores for AI inferencing which, together for the first time, make real-time ray tracing possible, the company says. The two engines — along with more powerful compute for simulation and enhanced rasterization — are claimed to usher in a new generation of hybrid rendering to address the $250 billion visual effects industry, and enable cinematic-quality interactive experiences, new effects powered by neural networks, and fluid interactivity on highly complex models.

"Turing is NVIDIA's most important innovation in computer graphics in more than a decade," says Jensen Huang, founder and CEO of NVIDIA. "Hybrid rendering will change the industry, opening up amazing possibilities that enhance our lives with more beautiful designs, richer entertainment, and more interactive experiences. The arrival of real-time ray tracing is the Holy Grail of our industry."

The result of more than 10,000 engineering-years of effort, the new architecture's hybrid rendering capabilities enable applications to simulate the physical world at six times the speed of the previous Pascal generation, says the company. To help developers take full advantage of these capabilities, the company has enhanced its RTX development platform with new AI, ray-tracing, and simulation SDKs.

The Turing architecture's dedicated ray-tracing processors (RT Cores) accelerate the computation of how light and sound travel in 3D environments, at up to 10 GigaRays a second. Turing speeds up real-time ray tracing operations to as much as 25 times the rate of the previous Pascal generation, and GPU nodes can be used for final-frame rendering for film effects at more than 30x the speed of CPU nodes.
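For context, the fundamental operation that RT Cores accelerate in hardware is the ray-primitive intersection test, performed billions of times per frame. The following is a minimal illustrative sketch in Python (not NVIDIA code, purely for intuition) of the simplest such test, a ray-sphere intersection:

```python
import math

def ray_sphere_intersect(origin, direction, center, radius):
    """Return the distance along a normalized ray to the nearest
    intersection with a sphere, or None if the ray misses.
    Solves |origin + t*direction - center|^2 = radius^2 for t."""
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c  # direction is unit length, so a == 1
    if disc < 0:
        return None  # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0  # nearer of the two roots
    return t if t >= 0 else None  # intersections behind the ray don't count

# A ray fired down -z from the origin toward a unit sphere at z = -5
# hits the near surface at distance 4.0:
print(ray_sphere_intersect((0, 0, 0), (0, 0, -1), (0, 0, -5), 1.0))
```

Dedicated hardware like the RT Cores performs tests of this kind (typically ray-box and ray-triangle, traversing a bounding-volume hierarchy) in fixed-function units rather than in shader code, which is where the claimed 10 GigaRays-per-second throughput comes from.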

Turing's Tensor Cores (processors that accelerate deep learning training and inferencing) provide up to 500 trillion tensor operations a second. This can power AI-enhanced features for creating applications with new capabilities, including DLAA (deep learning anti-aliasing, a breakthrough in high-quality motion image generation), denoising, resolution scaling, and video re-timing.

