Nvidia, RTX 2080, Turing and ray tracing, oh my!

In the world of advanced computing and next-generation software development, any time Nvidia makes a move or releases a new product, it’s noteworthy. Much of the newer, more computationally intensive work in artificial intelligence, neural networks, and deep learning increasingly falls on GPUs in addition to (or in lieu of) CPUs, so when Nvidia changes its game, it essentially changes the entire game.

Welp. Nvidia’s newest release has been ten years in the making. So, yeah… I’d say this is a pretty big one.

RTX 2080

Just last month, Nvidia released its newest graphics architecture, and it’s a complete departure from anything the company has shipped before.

The concept is simple enough: it’s no longer just a game of packing the most computing power into a processor, or the most processing cores into a machine, to achieve maximal performance. To see truly next-level performance gains (and enable new functionality), the top companies have to completely rethink how CPUs and GPUs are architected, how they work in concert, how caching can be optimized, and so on. “Nvidia, Intel, AMD, Samsung, Apple, and many others will increasingly need to do more with the existing transistors on chips instead of continuing to shrink their size. Nvidia has clearly realized this inevitability, and it’s time for a change of pace,” Tom Warren writes for The Verge.

So that’s just what Nvidia did.

Nvidia Turing architecture = ray tracing and real-time AI processing

Nvidia is calling its new GPU architecture ‘Turing,’ after the father of modern computing. The GPU sports both Tensor cores and RT cores: the RT cores are dedicated to ray tracing, and the Tensor cores to AI processing. Turing also brings major upgrades to the GPU’s caches. “Nvidia has moved to a unified memory structure with larger unified L1 caches and double the amount of L2 cache,” Warren continued. “It’s essentially a rewrite of the memory architecture, and the company claims the result is 50 percent performance improvement per core.”
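To picture what a Tensor core actually does: at heart, it executes a small fused matrix multiply-accumulate on mixed-precision data, multiplying half-precision (FP16) inputs and accumulating the products in single precision (FP32). Here’s a minimal, purely illustrative NumPy sketch of that operation; the 4×4 tile size mirrors the hardware, but the values are just for demonstration:

```python
import numpy as np

# Illustrative sketch of a Tensor-core-style operation: D = A @ B + C,
# with FP16 input tiles and FP32 accumulation. Casting the FP16 inputs
# up before multiplying mimics the hardware's full-precision products
# feeding a single-precision accumulator.
A = np.random.rand(4, 4).astype(np.float16)  # FP16 input tile
B = np.random.rand(4, 4).astype(np.float16)  # FP16 input tile
C = np.zeros((4, 4), dtype=np.float32)       # FP32 accumulator

D = A.astype(np.float32) @ B.astype(np.float32) + C
print(D.dtype)  # float32: precision is preserved where it matters
```

Stacking thousands of these tiny matrix units is what lets the GPU chew through the dense linear algebra at the heart of neural networks.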

So why does this all matter beyond some basic performance enhancements?

Because ray tracing is the holy grail of hyperrealistic gameplay, and the RTX 2080 could be a breakthrough in that direction.

Ray tracing, for the uninitiated (which, to be honest, is probably 99+% of people), is a rendering technique long used by movie studios to generate realistic lighting, reflections, and cinematic effects. Essentially, if there’s an explosion in Avengers: Infinity War, light doesn’t just come directly from the flames. It bounces off the shiny spaceships, reflects off water in the scene, and so on.
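To make that “bouncing” concrete, here’s a minimal, illustrative sketch of the core operation a ray tracer repeats millions of times per frame: cast a ray, find where it hits a surface, and reflect it to follow the bounced light. The scene values here are invented for the example; a real renderer does this against full scene geometry:

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def hit_sphere(origin, direction, center, radius):
    """Return distance along the ray to the nearest sphere hit, or None."""
    oc = origin - center
    b = 2.0 * np.dot(oc, direction)
    c = np.dot(oc, oc) - radius ** 2
    disc = b * b - 4.0 * c  # direction is unit length, so a == 1
    if disc < 0:
        return None
    t = (-b - np.sqrt(disc)) / 2.0
    return t if t > 0 else None

# Cast one ray from the camera toward a shiny sphere.
ray_origin = np.array([0.0, 0.0, 0.0])
ray_dir = normalize(np.array([0.0, 0.0, -1.0]))
center, radius = np.array([0.0, 0.0, -3.0]), 1.0

t = hit_sphere(ray_origin, ray_dir, center, radius)
if t is not None:
    hit = ray_origin + t * ray_dir
    normal = normalize(hit - center)
    # The "bounce": reflect the incoming ray about the surface normal,
    # then trace the reflected ray to pick up light from the scene.
    reflected = ray_dir - 2.0 * np.dot(ray_dir, normal) * normal
    print("hit at", hit, "reflected dir", reflected)
```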

For movies, you can do this because the rendering doesn’t have to happen in real time. You program the effect into the graphical edit, let your beast of a machine chomp on the render overnight, and then export a beautiful, cinema-quality final cut with ray tracing included.

Doing it in a video game is a completely different animal. The GPU has to track your movement, point of view, gameplay, variable action, and on and on, all while it renders ray-traced, hyperrealistic imagery… IN REAL TIME. That’s no joke.
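Some back-of-the-envelope arithmetic shows why. At 60 frames per second, the GPU gets roughly 16.7 milliseconds per frame, and even a single ray per pixel at 1080p adds up fast (the resolution and sample count below are illustrative assumptions):

```python
# Back-of-the-envelope frame budget for real-time ray tracing.
# The resolution and sample count are illustrative assumptions.
fps = 60
frame_budget_ms = 1000 / fps            # ~16.7 ms per frame
width, height = 1920, 1080
samples_per_pixel = 1                   # even one ray per pixel adds up
primary_rays = width * height * samples_per_pixel

rays_per_second = primary_rays * fps
print(f"{frame_budget_ms:.1f} ms per frame")
print(f"{rays_per_second / 1e6:.0f} million primary rays per second")
```

And that’s over a hundred million rays per second before counting the secondary rays for reflections and shadows, which multiply the total several times over; hence the dedicated RT cores.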

But if Nvidia’s RTX 2080 delivers on what the company promises, we might be there.

But that’s not all! Nvidia is also taking its deep reservoir of AI expertise and applying it to the new chip architecture and user experience. From The Verge again:

Nvidia Deep Learning Super-Sampling (DLSS) could be the most important part of the company’s performance improvements. DLSS is a method that uses Nvidia’s supercomputers and a game-scanning neural network to work out the most efficient way to perform AI-powered antialiasing. The supercomputers will work this out using early access copies of the game, with these instructions then used by Nvidia’s GPUs. Think of it like the supercomputer working out the best way to render graphics, then passing that hard-won knowledge onto users’ PCs. It’s a complex process, but the end result should be improved image quality and performance whether you’re playing online or offline.
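DLSS itself is proprietary, but the result it’s trained to approximate is easy to illustrate. Classic supersampling antialiasing renders the frame at a higher resolution and averages it down, which gives smooth edges at a brutal performance cost; DLSS’s neural network aims to reproduce that quality without paying for the high-resolution render. A minimal sketch of the traditional, expensive version (the renderer here is a stand-in):

```python
import numpy as np

def supersample_aa(render_fn, width, height, factor=2):
    """Classic supersampling: render at factor x resolution, then average
    each factor x factor block down to one display pixel. This is the
    costly 'reference' result DLSS aims to approximate with a network."""
    hi = render_fn(width * factor, height * factor)   # expensive hi-res render
    hi = hi.reshape(height, factor, width, factor, 3)
    return hi.mean(axis=(1, 3))                       # box-filter downsample

# Stand-in renderer for the example: random noise in place of a real frame.
def fake_render(w, h):
    return np.random.rand(h, w, 3)

frame = supersample_aa(fake_render, 1920, 1080)
print(frame.shape)  # (1080, 1920, 3)
```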

What all that amounts to is this: Nvidia is making a play to change the way we think about GPUs, games, and gaming, as well as what AI computing could look like well into the future. If the RTX 2080 and its brethren (and the new and improved Turing-architected GPUs to come) can deliver the goods, they could very well change the entire face of modern computing once more.


