Leaked Benchmark, New Turing Architecture – 600% Faster than Pascal?
NVIDIA just unveiled their new Turing Architecture for GPUs at the recent SIGGRAPH 2018 Keynote. The new architecture brings many new technologies, including GDDR6 Memory, Tensor Cores, Real-Time Ray Tracing, AI-enhanced Deep Learning Anti-Aliasing (DLAA) and much more. And for the next-gen GeForce GPUs, we finally seem to have a name – RTX 2080.
The SIGGRAPH 2018 Keynote was focused on content creators, but the introduction of the Turing Architecture gives us plenty of insight into the features and technologies that could be implemented in the Turing-based RTX 2080 GPU. Let's have a look.
New Turing-based GeForce RTX 2080 GPU to be unveiled at NVIDIA’s next event in Cologne on 20th August 2018
NVIDIA recently released a new teaser video titled 'Be For The Game' – it starts with hardware enthusiasts unboxing an NVIDIA GPU, moves on to showcases of high-end gaming setups, and ends with people firing up their gaming rigs and inviting others to play. The video closes with the hashtag #BeForTheGame and details about NVIDIA's next event, which will be held in Cologne on August 20th, 2018.
How we finally have a name – RTX 2080
Several hints in the video point to this name. The in-game aliases in different sequences were AlanaT Mac-20, Eight-Tee, Not11 and RoyTeX, and in the ending sequence the dates pop up so that the numbers 2, 0, 8 and 0 appear one by one. All of this suggests that the new high-end GeForce GPU will be called RTX 2080 and will feature the Turing Architecture. Nice hints there, NVIDIA. We finally seem to have a name!
Ashes Of The Singularity Leaked Benchmark – RTX 2080 better than Titan V?
This leak comes from Wccftech. The leaked benchmark was run on a Core i7-6850K and an unidentified 'NVIDIA Graphics Device' under DirectX 12. The settings were maxed out, and the scores we're seeing here beat the Titan V! These scores shouldn't be possible on any currently released GPU, so it seems the graphics card used was based on the Turing Architecture.
This benchmark, if run on the RTX 2080 or any other Turing GPU, used early drivers, and the GPU was most likely an early engineering sample as well. Even so, the performance is mind-boggling, and I expect the final version with optimized drivers and updates to run even better.
The Quadro RTX Professional GPUs – All the new technologies that come with it and how they could be relevant to the Turing-based GeForce RTX GPUs intended for Gaming
Quadro RTX GPUs are built for intensive visual computing, and they're mostly used by content creators in film, designers in architecture and automotive, and teams visualizing scientific data. We can't be sure which of these technologies will make it into the GeForce variants of the Turing Architecture GPUs, but the new Quadro RTX GPUs will be using all of these new features and technologies –
Up to 48GB of GDDR6 VRAM from Samsung
Real-Time Ray Tracing using RT Cores
Tensor Cores for AI-based rendering
Turing Streaming Multiprocessor Architecture (with up to 4608 CUDA Cores)
USB Type-C and VirtualLink Connectors
NVIDIA NVLink – New high-speed interconnect to combine two GPUs, allowing VRAM to scale up to 96GB with data transfer speeds of up to 100GB/s
NVIDIA announced three new Quadro RTX GPUs at the event. They will be released in Q4 2018, with pricing starting at $2,300 and going all the way up to $10,000. We don't have the full specifications yet, but here is what we have right now, including a comparison to the older Volta-based Quadro GV100 GPU –
Bob Pette, Vice President of Professional Visualization at NVIDIA, said –
Quadro RTX marks the launch of a new era for the global computer graphics industry. Users can now enjoy powerful capabilities that weren’t expected to be available for at least five more years. Designers and artists can interact in real time with their complex designs and visual effects in ray-traced photo-realistic detail. And film studios and production houses can now realize increased throughput with their rendering workloads, leading to significant time and cost savings.
That's quite a big advancement. Now, let's talk about some of the advancements, features and technologies that are relevant to gaming –
NVIDIA also said that the Turing Architecture will be 6 times faster than the Pascal Architecture – that is, 600% of Pascal's throughput, or a 500% improvement –
But this figure is for content creation workloads. We can't really expect a 6x performance improvement in gaming with the RTX 2080, but judging by this tremendous leap we can expect NVIDIA to pull off something impressive with their next lineup of Turing-based GeForce GPUs.
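Marketing speedup claims like this are easy to misread, so here is a quick sanity check on the arithmetic: "N times faster" corresponds to an (N − 1) × 100% improvement over the baseline.

```python
def speedup_to_percent_improvement(times_faster):
    """Convert an 'N times faster' claim into percent improvement over baseline."""
    return (times_faster - 1) * 100

# NVIDIA's "6x faster than Pascal" claim:
print(speedup_to_percent_improvement(6))  # 500 -> a 500% improvement, i.e. 600% of Pascal's speed
```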
New AI-Based Anti-Aliasing – NVIDIA DLAA
Nowadays, games use popular anti-aliasing technologies like TXAA, SSAA, MSAA, FXAA and others. There hasn't been a major breakthrough in anti-aliasing technology in quite a while, but the Turing Architecture introduces NVIDIA's new Deep Learning Anti-Aliasing, called DLAA. This new anti-aliasing technology is powered by artificial intelligence, and NVIDIA describes it as 'a breakthrough in high-quality motion image generation'.
We'll have to wait and see how game developers implement this technology in their games, but since NVIDIA has a track record of keeping such technologies closed-source (like TXAA), I doubt it will ever become mainstream. DLAA will probably be implemented in some NVIDIA-partnered GameWorks titles. Nevertheless, it seems very promising and I'm excited to see what game developers will do with this new technology.
The Quadro RTX cards are the first NVIDIA GPUs ever to use GDDR6 Memory. The memory is produced by Samsung, and the highest-end Quadro RTX 8000 features a whopping 48GB of GDDR6 VRAM. With NVIDIA's new NVLink technology, this can be scaled up to 96GB of GDDR6 VRAM. Awesome.
This is the first generation of GDDR6 Memory, and we're yet to see SK Hynix and Micron show off their GDDR6 chips, but this first-gen Samsung GDDR6 Memory is a meaningful improvement over the GDDR5X Memory in the GTX 1080 Ti and GTX 1080. Here is a memory specification comparison sheet pitting the Titan V with HBM2 and the GeForce GTX 1080 & 1080 Ti with GDDR5X against the new Quadro RTX 8000 with GDDR6 Memory –
Keep in mind that this comparison is with the Quadro RTX GPUs intended for content creation, not gaming. We don't have real-world performance metrics for GDDR6 Memory yet, but going by per-pin data rates, first-gen GDDR6 at 14Gbps is roughly 75% faster than the 8Gbps GDDR5 used in most GPUs right now, and 40% faster than the 10Gbps GDDR5X on the GTX 1080. The RTX 2080 will likely feature an implementation of GDDR6 Memory tuned for gaming rather than content creation.
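To see where headline bandwidth figures come from, you can compute peak memory bandwidth from the per-pin data rate and the bus width (both are published specs for these cards):

```python
def peak_bandwidth_gbs(data_rate_gbps_per_pin, bus_width_bits):
    """Peak memory bandwidth in GB/s: per-pin data rate (Gbps) x bus width (bits) / 8 bits per byte."""
    return data_rate_gbps_per_pin * bus_width_bits / 8

# Quadro RTX 8000: 14 Gbps GDDR6 on a 384-bit bus
print(peak_bandwidth_gbs(14, 384))  # 672.0 GB/s
# GTX 1080 Ti: 11 Gbps GDDR5X on a 352-bit bus
print(peak_bandwidth_gbs(11, 352))  # 484.0 GB/s
```

Note that the bus width matters as much as the memory generation – a wider bus on an older memory type can still out-deliver a narrow bus on a newer one.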
The new technologies and advancements introduced in the Turing Architecture are absolutely amazing, no doubt there. NVIDIA's recent teaser video on YouTube and their subtle hints clearly point to something big coming, and I'm very excited to see what kind of performance improvements Turing could bring to gaming. NVIDIA certainly surprised everyone at the SIGGRAPH 2018 Keynote.
August 20th, 2018 – the countdown begins. What do you think about the RTX 2080 and the Turing Architecture? Will NVIDIA DLAA become mainstream? Let's discuss in the comments.
Thanks for reading. To read more about Gaming Hardware, click here.
Hey! I'm Satyam. I mainly write on Technology and Gaming Hardware for Esportsportal. I love gaming, travelling, books, and music. My gaming alias is Midnight, and I usually play PUBG, CSGO, GTA V and some coop games.