Sunday, 14 June 2015

NVIDIA GeForce GTX Titan X

Go big, or go home - and NVIDIA’s going very big!

Just when we thought NVIDIA couldn’t release a more expensive product, along comes its latest premium offering, the Titan X. Unlike previous Titans, this isn’t just a slightly enhanced version of its flagship GPU; NVIDIA has gone the whole hog and developed a product that is a huge step up from the GTX 980. Whether or not it’s worth the audacious price tag is another question entirely.

MAXWELL WRIT LARGE


Like the rest of NVIDIA’s GTX 900 series, the Titan X is based on the Maxwell 2 architecture. NVIDIA has essentially taken the GM204 chip found at the heart of the GTX 980 and increased every internal section by 50%, creating the new GM200 chip that powers the Titan X. CUDA Cores are the units that handle the heavy lifting inside Maxwell, and the GM200 ships with 3072 of them, a 50% increase on the 2048 found within the GM204. The texture units have had a similar increase, rising from 128 to 192, while the ROPs (Render Output Units) are also up 50%, from the GM204’s 64 to 96. As a result, the overall size of the GPU has grown by roughly 50%, making this a brute of a chip at 601 square millimetres. It’s no wonder that it ships at a slightly slower clockspeed than the GTX 980, with a base speed of 1000MHz compared to the GTX 980’s 1126MHz. The Boost Clock speed – the frequency the GPU increases to when under load – has also dropped, down to 1075MHz from the GTX 980’s 1216MHz.
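To put those numbers in perspective, here’s a quick back-of-envelope calculation (our own arithmetic, not an NVIDIA figure) of peak single-precision throughput, assuming the usual two floating-point operations per CUDA Core per clock:

```python
# Rough peak FP32 throughput: CUDA Cores x 2 ops/clock (one FMA) x clock.
# Base clocks used here; boost clocks would lift both figures slightly.
def peak_fp32_tflops(cuda_cores: int, clock_mhz: int) -> float:
    return cuda_cores * 2 * clock_mhz * 1e6 / 1e12

titan_x = peak_fp32_tflops(3072, 1000)  # ~6.1 TFLOPS
gtx_980 = peak_fp32_tflops(2048, 1126)  # ~4.6 TFLOPS
print(f"Theoretical uplift: {(titan_x / gtx_980 - 1) * 100:.0f}%")  # ~33%
```

In other words, the 50% increase in hardware is partly offset by the lower clockspeeds, leaving a theoretical compute advantage of around a third.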

Feeding such a powerful GPU requires some serious memory bandwidth, and NVIDIA has widened the memory bus to 384 bits, up from the GTX 980’s 256 bits. The GDDR5 memory is still clocked at an effective 7GHz, the same as the GTX 980, but NVIDIA has tripled the amount of onboard memory, from 4GB on the GTX 980 to an incredible 12GB on the Titan X. With one of these cards in place, even GTA V won’t be able to fill the memory buffer, no matter how high you crank the resolution and detail. With DirectX 12 due in the near future, NVIDIA has ensured the Titan X is fully compliant with the exciting new API. In fact, Titan X supports DirectX 12 feature level 12_1, as do the latest GTX 900 series cards.
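The bandwidth uplift from that wider bus is easy to quantify with the standard calculation (again, our own arithmetic): bus width times effective data rate.

```python
# GDDR5 bandwidth: bus width (bits) / 8 x effective data rate (GT/s) = GB/s.
# The "7GHz" memory clock is the effective (quad-pumped) transfer rate.
def bandwidth_gb_s(bus_width_bits: int, effective_rate_gt_s: float) -> float:
    return bus_width_bits / 8 * effective_rate_gt_s

print(bandwidth_gb_s(384, 7.0))  # Titan X: 336.0 GB/s
print(bandwidth_gb_s(256, 7.0))  # GTX 980: 224.0 GB/s
```

That’s a 50% jump in raw bandwidth, matching the 50% increase in shader hardware.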

The huge increase in complexity sees the GM200 packing an incredible eight billion transistors, a big step up from the 5.2 billion used in the GM204. It’s still built using the trusted 28 nanometre process that NVIDIA has been using for several years now, yet thanks to Maxwell’s excellent energy efficiency, the GPU has a TDP of just 250W. Compare this to AMD’s R9 290X, which has a TDP of 290W, and it’s clear that NVIDIA’s attention to energy consumption has really paid off. It simply wouldn’t have been possible to build such a large chip if Maxwell’s design wasn’t inherently energy efficient.
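A crude performance-per-watt comparison makes the point; note that the R9 290X figures here (2816 stream processors at roughly 1000MHz) are our own assumptions for illustration, not figures from the article’s testing.

```python
# Peak FP32 GFLOPS per watt of TDP: cores x 2 ops/clock x clock (GHz) / TDP.
def gflops_per_watt(cores: int, clock_ghz: float, tdp_w: int) -> float:
    return cores * 2 * clock_ghz / tdp_w

print(f"Titan X: {gflops_per_watt(3072, 1.0, 250):.1f} GFLOPS/W")  # ~24.6
print(f"R9 290X: {gflops_per_watt(2816, 1.0, 290):.1f} GFLOPS/W")  # ~19.4
```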

Because the GPU doesn’t double as a room heater, NVIDIA hasn’t had to develop a new cooler for the Titan X. It has dusted off its trusty reference design, adopting the same heatsink and fan combination as the GTX 980. At its heart is a blower-style fan that draws air from inside your case and exhausts it out the rear of the PCI slots. This cools the copper vapour chamber that extracts heat from the GPU, and as expected it’s relatively quiet, though an audible hum is to be expected during heavy usage. One change to the cooler is the sexy all-black finish, with the Titan brand name emblazoned on one end.

The card itself retains the same 10.5 inch length seen on other high-end graphics cards and swallows two slots in your case, so it should fit inside even smaller Mini-ITX systems without too much of an issue. Compared to the 12-inch Radeon R9 295X2, the Titan X presents far fewer space obstacles for system builders. Power is delivered courtesy of two connectors, one six-pin and one eight-pin. A 6+2 power phase design is standard, and NVIDIA allows the user to raise the power limit by 10%, upping the TDP to 275W. Overclockers will find NVIDIA’s strict voltage regulations are still enforced, with the 1.162V base voltage adjustable only up to a mere 1.23V.

As far as outputs go, the Titan X is equipped with the same range of ports as the GTX 980. A dual-link DVI port sits alongside an HDMI 2.0 connection, making this a potent partner for 4K TVs that are also equipped with HDMI 2.0, as it allows full 4K resolution at 60Hz. Three full-sized DisplayPort outputs round out the connectivity options. It’s possible to drive up to four displays simultaneously with the Titan X, and the huge amount of onboard memory means it’s got the capacity to handle stupidly high resolutions with ease.
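Why does HDMI 2.0 matter for 4K TVs? A simplified sketch of the sums (our own, ignoring blanking overheads, which push the real requirement somewhat higher):

```python
# Uncompressed 4K60 video payload with 8-bit RGB (24 bits per pixel).
# HDMI 1.4 carries ~8.16 Gbps of video data; HDMI 2.0 carries ~14.4 Gbps.
width, height, refresh_hz, bits_per_pixel = 3840, 2160, 60, 24

gbps = width * height * refresh_hz * bits_per_pixel / 1e9
print(f"4K60 RGB payload: {gbps:.1f} Gbps")  # ~11.9 Gbps - beyond HDMI 1.4
```

HDMI 1.4 simply doesn’t have the bandwidth for 4K at 60Hz, which is why older cards fall back to 30Hz on 4K TVs.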

The same features found on other Maxwell-based products are found here on the Titan X. Multi-Frame Samples Antialiasing (or MFAA for short) delivers excellent antialiasing with a low performance overhead, and is now supported by most games thanks to NVIDIA’s excellent driver support. We love NVIDIA’s Dynamic Super Resolution, which makes downsampling from higher resolutions an absolute breeze, while support for Voxel Global Illumination should result in even more realistic lighting within games, should developers choose to support this technology. NVIDIA’s moon landing demo uses this technology to show off just how realistic this real-time lighting system is, and it’s almost photorealistic. G-Sync support is also included, and a range of 4k displays with G-sync compatibility are set to arrive on the market soon; the Titan X will be the perfect partner for such displays.
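For those wondering how DSR’s factors map to actual render resolutions: the factor applies to the pixel count, so each axis scales by its square root. A quick sketch (our own illustration):

```python
# DSR factors are area multipliers, so each axis scales by sqrt(factor).
from math import sqrt

def dsr_resolution(width: int, height: int, factor: float) -> tuple:
    scale = sqrt(factor)
    return round(width * scale), round(height * scale)

print(dsr_resolution(1920, 1080, 4.0))   # (3840, 2160) - 4K on a 1080p panel
print(dsr_resolution(1920, 1080, 1.78))  # ~(2560, 1440)
```

The GPU renders internally at the higher resolution, then filters the result down to your panel’s native resolution for a cleaner image.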

With so much hype around Virtual Reality at the moment, and the Rift’s consumer release finally set for the first quarter of 2016, NVIDIA has wisely included VR Direct support. This cuts the latency encountered when rendering a frame, and also adds support for Asynchronous Warp. According to NVIDIA, this lets the GPU update the last rendered scene based on the player’s head position. By warping the image late in the rendering pipeline, Maxwell cuts the discontinuity between head movement and the action on screen.
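To illustrate the idea (a deliberately simplified toy of our own, not NVIDIA’s implementation, which reprojects in 3D): just before display, the last finished frame is shifted to match the newest head pose, rather than waiting for a whole new render.

```python
# Toy asynchronous warp: shift the last completed frame to match the
# latest head yaw just before scan-out. Edge pixels are clamped; the
# px_per_degree value is an arbitrary stand-in for real lens geometry.
def async_warp(last_frame, rendered_yaw_deg, current_yaw_deg, px_per_degree=20):
    shift = round((current_yaw_deg - rendered_yaw_deg) * px_per_degree)
    warped = []
    for row in last_frame:
        if shift >= 0:
            warped.append(row[shift:] + [row[-1]] * shift)
        else:
            warped.append([row[0]] * -shift + row[:shift])
    return warped

frame = [list(range(8)) for _ in range(2)]  # a tiny stand-in "image"
print(async_warp(frame, rendered_yaw_deg=0.0, current_yaw_deg=0.1))
```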

So far, so very good, but one area where Titan X is set to disappoint is double-precision compute. Prior Titans were relatively cheap compared to NVIDIA’s Tesla range of compute cards, yet were adept at compute-intensive tasks thanks to their strong FP64 performance. Titan X doesn’t follow this trend; instead, it has a native FP64 rate of just 1/32 of its FP32 rate, making it unsuitable for double-precision workloads. There’s a reason for this trade-off though: with the GM200 needing so many transistors focused on graphics, NVIDIA simply didn’t have room to include the additional FP64 ALUs found on prior Titans.
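The difference that rate makes is stark. A quick comparison with the original GTX Titan (our own arithmetic, using base clocks; the original Titan ran FP64 at 1/3 of its FP32 rate):

```python
# FP64 throughput as a fixed fraction of peak FP32 throughput.
def fp64_tflops(cores: int, clock_ghz: float, fp64_rate: float) -> float:
    return cores * 2 * clock_ghz * fp64_rate / 1000

print(f"Titan X:   {fp64_tflops(3072, 1.000, 1/32):.2f} TFLOPS")  # ~0.19
print(f"GTX Titan: {fp64_tflops(2688, 0.837, 1/3):.2f} TFLOPS")   # ~1.50
```

A two-year-old Titan is nearly eight times faster at double precision than the brand new Titan X.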

HOW DOES TITAN X PERFORM?


Given the huge 50% increase in internal components, it’s easy to assume the Titan X will deliver 50% higher framerates than the GTX 980, but its slower clockspeeds mean that’s not the case. Instead we expected to see around a 40% performance increase, and threw a bunch of games at the Titan X to see how accurate our prediction was. Our testbench consisted of an i7-4790K Devil’s Canyon processor mounted on an ASUS Maximus VII Hero motherboard with 16GB of DDR3-1800 memory. A Corsair Neutron GTX SSD handled our Windows 8.1 64-bit install, and we used the latest publicly available drivers, in the form of the 350.12 driver set. An ASUS PB287Q display was used to test the card’s 4K performance.
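For the curious, here’s the naive scaling estimate behind that expectation (our own back-of-envelope): compute scaling is the unit-count increase multiplied by the clock deficit, while memory bandwidth sets a higher ceiling for bandwidth-hungry 4K workloads.

```python
# Naive performance scaling estimates, Titan X vs GTX 980.
core_scaling  = 3072 / 2048   # 1.50x more shader hardware
clock_scaling = 1075 / 1216   # ~0.88x lower boost clock
bw_scaling    = 336 / 224     # 1.50x more memory bandwidth

compute_bound = core_scaling * clock_scaling
print(f"Compute-bound estimate:  +{(compute_bound - 1) * 100:.0f}%")  # ~+33%
print(f"Bandwidth-bound ceiling: +{(bw_scaling - 1) * 100:.0f}%")     # +50%
```

Real games sit somewhere between the two, which is why something near 40% seemed a reasonable bet at 4K.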

The first test was Grid Autosport, running at Ultra detail at a resolution of 3840 x 2160, where the Titan X happily demolished the GTX 980 by 37%. Next up was the demanding Shadow of Mordor benchmark, tested at the same detail and resolution, and this time the Titan X took the lead by a healthy 41%. Finally, 3DMark’s Fire Strike test was used to push both cards to their limit, and once again the Titan X won handily, this time by 32%.
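Averaging the three results (geometric mean, our own summary of the figures above) puts the overall uplift a touch under our 40% prediction:

```python
# Geometric mean of the measured gains over the GTX 980.
from math import prod

gains = {"Grid Autosport": 1.37, "Shadow of Mordor": 1.41, "Fire Strike": 1.32}
geomean = prod(gains.values()) ** (1 / len(gains))
print(f"Average uplift: +{(geomean - 1) * 100:.1f}%")  # ~+36.6%
```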

IS IT WORTH IT?


Considering the Titan X costs 110% more than a GTX 980 yet yields a performance benefit of 40% at most, many will wonder how NVIDIA can justify the hefty price tag. But that misses the point of this product; it’s aimed at gamers who don’t flinch at spending $1600 for the best GPU on the market. Even so, the Titan X faces its strongest competition from dual-GPU configurations. It’s possible to buy two GTX 980s for the same price as one Titan X, and they’ll easily wipe the floor with it, provided the game has decent SLI scaling. Ditto AMD’s R9 295X2, which can now be had for just $1099 and will happily run rings around a single Titan X. We’d definitely opt for two GPUs rather than a single Titan X, but for those who don’t want to deal with the sometimes buggy performance of dual-GPU systems, the Titan X offers by far the fastest single-GPU performance around. BENNETT RING

VERDICT
While dual-GPU setups at the same price will run rings around the Titan X, there’s no denying it’s the fastest single-GPU gaming graphics card ever made.