Tuesday 23 February 2016

Gaming CPUs and the rise of DX12

We often refer to the holy trinity of gaming performance here at PC PowerPlay, meaning the three major components in your system that impact frame rates. Your graphics card is obviously of paramount importance, as it’s tasked with the final steps of the rendering process, and the better your GPU, the higher the resolution, anti-aliasing and other graphical effects you can run. System memory is also key: if the game runs out of memory it has to fetch data from the sluggish hard drive, causing stuttering and slowdowns. The final piece of the performance pie is one that is often overlooked – the Central Processing Unit, or CPU. The following article explains how certain CPU specifications impact game performance, and how the introduction of DX12 is going to unleash its power.

KEY CPU TERMS


Your CPU is really the beating heart of your system, and as such it’s responsible for running the dozens of processes that Windows requires. It’s also incredibly important when gaming, so here are the key specifications that can impact game performance.

Frequency
About a decade ago, frequency was king. This is the speed at which the CPU executes operations, and the higher the frequency, the better your games will run. It’s measured in MHz or GHz (1GHz being 1000MHz). Back in the early 2000s, the competition between AMD and Intel was all about which chips could reach the highest frequency. Unfortunately, CPUs hit a wall when it comes to frequency, as the faster they run, the hotter they get. This is why Intel’s CPUs have plateaued around the 4GHz mark for roughly five years. Instead, the focus is now on better IPC and more cores.

Instructions per Cycle (IPC)
This refers to how many instructions are executed per clock cycle, and the more a CPU can handle, the better it performs. Back in the early days of PC gaming, AMD and Intel had very similar IPC rates, which is why frequency became so important. However, the Pentium 4 launched with a significantly lower IPC than AMD’s chips, so at the same frequency an AMD chip ran rings around it. It was only when Intel moved to the Core design that it regained the lead in IPC. Today AMD has a slight lead in frequency but a rather large deficit in IPC, which is why Intel chips are significantly better performers in games.
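
To see why IPC matters as much as clock speed, consider a rough back-of-the-envelope model: a CPU’s raw throughput is roughly frequency multiplied by IPC. The figures in the snippet below are invented purely for illustration, not real chip specs, but they show how a lower-clocked, higher-IPC chip can come out ahead.

#include <cstdio>

int main() {
    // Invented figures for illustration only, not real chip specs.
    double chip_a_ghz = 5.0, chip_a_ipc = 1.4; // frequency-focused design
    double chip_b_ghz = 4.0, chip_b_ipc = 2.0; // IPC-focused design

    // Rough throughput model: frequency (GHz) x IPC
    // = billions of instructions per second.
    std::printf("Chip A: %.1f billion instructions/sec\n", chip_a_ghz * chip_a_ipc);
    std::printf("Chip B: %.1f billion instructions/sec\n", chip_b_ghz * chip_b_ipc);
}

Despite a full gigahertz deficit, Chip B gets more done every second, which is the position Intel’s chips are in today.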

CPU Core
Today’s CPUs are made up of between two and eight cores. Each core is basically a processor unto itself, though the cores share some overall resources, such as the cache. The reason we have multiple CPU cores today is that the quest to increase CPU frequency hit a brick – or should we say thermal – wall back in the mid-2000s. Intel and AMD realised they couldn’t keep raising frequency without major heat issues, so their solution was to double up on the number of cores. This sounds all well and good, but software development for the previous thirty years had focused on single-core code. The move to programming for multiple cores, also known as multithreaded programming, was much trickier than anybody envisaged, and it has taken ten years for game developers to finally get their heads around it. Having said that, it’s rare for a game to make use of more than four cores today, which is why AMD’s octa-core chips don’t have a performance benefit over Intel’s quad-core chips in most games.
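
To give a feel for what multithreaded programming involves, here’s a minimal C++ sketch of the basic idea: splitting one workload across however many hardware threads the system reports. It’s a toy example, not how a game engine would actually structure its work.

#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    // Ask how many hardware threads the system offers.
    unsigned cores = std::thread::hardware_concurrency();
    if (cores == 0) cores = 4; // the call may return 0 if it can't tell

    const long long N = 100000000LL;
    std::vector<long long> partial(cores, 0); // one result slot per thread
    std::vector<std::thread> workers;

    for (unsigned i = 0; i < cores; ++i) {
        workers.emplace_back([&partial, i, cores, N] {
            // Each thread sums its own interleaved slice of 0..N-1.
            for (long long n = i; n < N; n += cores)
                partial[i] += n;
        });
    }
    for (auto& t : workers) t.join(); // wait for every core to finish

    long long total = std::accumulate(partial.begin(), partial.end(), 0LL);
    std::cout << "Summed 0.." << N - 1 << " across " << cores
              << " threads: " << total << "\n";
}

Even in this trivial case you can see the extra bookkeeping: dividing the work, keeping threads from trampling each other’s results, and waiting for stragglers. Now imagine doing that for AI, physics and rendering all at once, and the decade-long learning curve makes sense.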

Cache
The CPU’s cache is extremely fast memory embedded in the chip itself. The more cache present, the less often the processor has to wait on slower system memory to fetch instructions and data, which also feeds into the chip’s effective IPC. That said, compared to IPC and frequency, cache is one of the lesser determinants of speed. Note that cache comes in several levels (L1, L2, L3 and sometimes L4), with L1 the smallest and fastest and each level below it larger but slower.
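
The cache’s effect is easy to demonstrate yourself. The sketch below walks the same block of memory twice: once sequentially, which keeps the cache fed, and once in large strides, which constantly misses it. On a typical system the strided pass is several times slower despite doing identical work (exact timings will vary by machine).

#include <chrono>
#include <iostream>
#include <vector>

int main() {
    const int N = 4096;
    std::vector<int> grid(static_cast<size_t>(N) * N, 1); // 64MB, far bigger than any cache
    long long sum = 0;

    auto t0 = std::chrono::steady_clock::now();
    for (int row = 0; row < N; ++row)
        for (int col = 0; col < N; ++col)
            sum += grid[static_cast<size_t>(row) * N + col]; // sequential: cache-friendly
    auto t1 = std::chrono::steady_clock::now();

    for (int col = 0; col < N; ++col)
        for (int row = 0; row < N; ++row)
            sum += grid[static_cast<size_t>(row) * N + col]; // strided: constant cache misses
    auto t2 = std::chrono::steady_clock::now();

    std::cout << "sequential pass: " << std::chrono::duration<double>(t1 - t0).count() << "s\n";
    std::cout << "strided pass:    " << std::chrono::duration<double>(t2 - t1).count() << "s\n";
    std::cout << "(sum " << sum << " printed so the loops aren't optimised away)\n";
}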

Hyper-Threading
This Intel-proprietary technology basically allows a single CPU core to act like two, running two simultaneous streams of instructions, which in the best possible circumstances delivers a healthy boost to throughput. This is most obvious when running non-gaming applications, but there’s a huge amount of debate about whether Hyper-Threading is good or bad for game performance. Unfortunately the answer isn’t simple, as it depends entirely on the game. Numerous tests have shown that some games, such as Skyrim, prefer Hyper-Threading, while others, such as ARMA 3, can actually see a performance decrease. Generally speaking though, as games become more multithreaded, Hyper-Threading should have a slightly positive impact on performance, though nowhere near a doubling. And when there is a performance decrease, it’s so small that the benefits of Hyper-Threading in all other usage scenarios mean it’s probably worth keeping on.
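
One quick way to see Hyper-Threading at work: each physical core is presented to the operating system as two logical processors. This one-liner shows what your OS sees.

#include <iostream>
#include <thread>

int main() {
    // Hyper-Threading presents each physical core to the OS as two
    // logical processors, so a quad-core chip with it enabled will
    // typically report 8 here; with it disabled in the BIOS, 4.
    std::cout << std::thread::hardware_concurrency()
              << " logical processors visible to the OS\n";
}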

DIRECTX 12 IS A GAME CHANGER FOR CPU PERFORMANCE


While our PCs generally run rings around the consoles when it comes to performance, there’s one area where consoles have a huge advantage – draw calls. Without spending 30 pages explaining exactly how graphics are made: whenever a scene is to be rendered, the CPU first has to simulate the scene, then send off instructions telling the GPU to draw it. The more detailed the scene, the more draw calls the CPU needs to make. Consoles have a huge advantage here, as game developers can “code to the metal”. That is, they know exactly what hardware is inside each platform, so they can extract the maximum possible draw calls out of the CPU. This is why Assassin’s Creed Unity on the consoles could pump out around 50,000 draw calls per frame.
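
To make the idea concrete, here’s a heavily simplified sketch of a frame’s render loop. The function names are our own stand-ins rather than any real graphics API, but the shape is right: every object costs the CPU setup work plus a draw call before the GPU does anything.

#include <vector>

// All names below are illustrative stand-ins, not a real graphics API.
struct Object { int mesh_id, material_id; };

void set_pipeline_state(const Object&) { /* CPU: validate and bind shaders */ }
void bind_resources(const Object&)     { /* CPU: textures, buffers, constants */ }
void draw(const Object&)               { /* CPU: the draw call, handed to the driver */ }

// One frame of rendering: every visible object costs the CPU this
// much work before the GPU touches a single pixel.
void render_frame(const std::vector<Object>& scene) {
    for (const Object& obj : scene) {
        set_pipeline_state(obj);
        bind_resources(obj);
        draw(obj); // one draw call per object (simplified)
    }
}

int main() {
    std::vector<Object> scene(50000, Object{0, 0}); // a Unity-sized scene
    render_frame(scene); // 50,000 trips through this loop, every frame
}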

But the PC is a very different beast to the consoles. Developers have to take into account dozens of different CPU types and speeds, so they write to an API, or Application Programming Interface, which in the case of Windows is DirectX. This makes coding for the PC much simpler, but DX11 and its predecessors had one major issue: they required a lot of CPU time to translate the game’s instructions into a format each hardware configuration could understand. As a result, under DX11 the PC could only churn out around 10,000 draw calls per frame, and even then only in the hands of talented coders. Sure, there are tricks and techniques to help work around this, but the PC was still at a severe disadvantage compared to consoles. It’s no wonder that Assassin’s Creed Unity was, and still is, plagued by performance issues on the PC.

DirectX 12 promises to solve this issue once and for all. It’s known as a low-level API, which means it talks to the hardware far more directly, with much less driver overhead than earlier APIs. This in turn results in a huge increase in the number of draw calls possible. How much of an increase? Well, luckily we have 3DMark’s API Overhead Feature Test.

THE DX12 BENCHMARK – 3DMARK


According to Futuremark, creators of the API test in 3DMark, the test “measures API performance by making a steadily increasing number of draw calls. The result of the test is the maximum number of draw calls per second achieved by each API before the frame rate drops below 30 fps.” It ensures the API is the bottleneck, rather than the GPU, by drawing a scene with a huge number of individual buildings that have no lighting or detailed shader effects. Note that to run it you’ll need a PC with Windows 10, along with a DX12-compatible graphics card. We ran it on a machine with an Nvidia GeForce GTX 970 and an Intel Core i7-2700K processor, and as you can see from the following graph, the results were astonishing.

Yep, you’re looking at roughly a ten-fold increase in draw call performance. It’s no wonder Microsoft and game developers are so excited about the introduction of DirectX 12. However, we have to point out that these numbers are theoretical – in practice, graphics experts believe that in-game graphical fidelity will probably be able to double when running DX12 on identical hardware. Still, that’s a huge leap in detail: imagine Star Wars Battlefront with visuals twice as detailed.
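
For the curious, the test’s approach is simple to sketch. The following is our own reconstruction of the logic Futuremark describes, not their actual code, with a made-up fixed cost per draw call standing in for a real renderer: keep ramping up draw calls until the frame time blows the 30fps budget.

#include <iostream>

// Our reconstruction of the test's logic, not Futuremark's actual code.
// A made-up fixed cost per draw call stands in for a real renderer.
double simulated_frame_time(int draw_calls) {
    const double cost_per_call = 2e-6; // 2 microseconds per call (invented)
    return draw_calls * cost_per_call; // frame time in seconds
}

int main() {
    const double frame_budget = 1.0 / 30.0; // the 30fps floor = 33.3ms
    int draw_calls = 1000;

    // Ramp up the load until a frame no longer fits in the budget.
    while (simulated_frame_time(draw_calls) < frame_budget)
        draw_calls += 1000;

    std::cout << "Sustained ~" << draw_calls << " draw calls per frame, or ~"
              << draw_calls * 30 << " per second at the 30fps floor\n";
}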

MORE CORES FOR YOU


One reason AMD is super excited about DirectX 12 is its improved ability to leverage multiple CPU cores. In the past, developers had to code their games very carefully to take advantage of many cores, as DX11 funnelled most rendering work through a single thread. DirectX 12 removes that bottleneck, allowing the work of building and submitting draw calls to be spread across as many cores as the CPU has. Early benchmarks have shown AMD’s octa-core processors seeing massive improvements in performance under DX12, to the point where they might even have a healthy lead over Intel. So here’s to DX12, and a reboot of the CPU wars.
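
To illustrate the difference, here’s a sketch of the DX12-style approach, using stand-in types rather than the real Direct3D 12 API: each core records its own slice of the frame’s commands in parallel, and the finished lists are handed to the GPU together.

#include <iostream>
#include <thread>
#include <vector>

// Stand-in types: this sketches the shape of DX12-style parallel
// command recording, not the real Direct3D 12 API.
struct CommandList { std::vector<int> commands; };

CommandList record_chunk(int first, int count) {
    CommandList list;
    for (int i = 0; i < count; ++i)
        list.commands.push_back(first + i); // "draw object (first + i)"
    return list;
}

int main() {
    const int objects = 48000; // an arbitrary, evenly divisible scene size
    unsigned cores = std::thread::hardware_concurrency();
    if (cores == 0) cores = 4;

    // Under DX11 one thread fed the driver; here every core records
    // its own slice of the frame's draw calls at the same time.
    std::vector<CommandList> lists(cores);
    std::vector<std::thread> workers;
    const int per_core = objects / static_cast<int>(cores);

    for (unsigned i = 0; i < cores; ++i)
        workers.emplace_back([&lists, i, per_core] {
            lists[i] = record_chunk(static_cast<int>(i) * per_core, per_core);
        });
    for (auto& t : workers) t.join();

    // The finished lists would then be submitted to the GPU queue
    // together (in real D3D12, via ID3D12CommandQueue::ExecuteCommandLists).
    std::size_t total = 0;
    for (const auto& l : lists) total += l.commands.size();
    std::cout << "Recorded " << total << " draw commands on "
              << cores << " threads\n";
}

The more cores available, the more of the frame’s command recording happens simultaneously, which is exactly why AMD’s eight-core chips stand to gain the most.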