Tuesday 28 July 2015

The High-Bandwidth Bomb

The PC platform, and motherboards in particular, are about to drop a bomb on your desktop

In the future, motherboards will be all about bandwidth. Actually, they already are. That’s because the big news for the PC is a whole bunch of new interconnects. And they’re all about boosting bandwidth. They’re also borderline baffling. Firstly, there’s the crazy nomenclature. Whoever thought that "SFF-8639" was a keeper needs to be strung up with SATA cables. More on that later. The confusing naming, that is, not death by middling-bandwidth cable connectors.


Then there’s the fusing of multiple standards into one, plus the replacement of others with multiple options. Nightmare.

Maybe it was always this way. Perhaps our spectacles are growing ever more rose-tinted. Until recently, it was just PCI Express for graphics, SATA for hard drives, and USB for the rest. Wasn’t it?

OK, we had to keep up with little upgrades, but now? Now there’s M.2, which is a sort of PCI Express for SSDs, not forgetting that it’ll need to be of the NVMe variety to really deliver next-gen performance. AHCI won’t cut it. Unless it’s SATA Express, which is a bit like a fusion of PCI Express and SATA. Except nobody is using it, and it looked like SFF-8639 would be the thing, until they renamed it U.2.

Meanwhile, USB 3.0 has been tweaked into USB 3.1 and joined by the USB Type C connector, at the same time as assimilating Thunderbolt, which may not live on as a standalone, er, standard. Yup, it’s a cluttered-up mess.

This month, then, we have an all-encompassing guide to these new standards—what they all mean, how they work, whether you’ll want ’em, the works. Then we’ll round up the latest mobos and sniff around the actual implementation of these new standards, even though not all are fully available, and, in some cases, compatible devices barely exist. Confused enough? Let’s get cracking.

Let’s start with USB. It’s the mother of all modern interconnects. To say that USB has been a huge success is something of an understatement. USB has become the very definition, the Platonic form found hanging in the intergalactic ether, of utter, crushing ubiquity.

What began in 1997 as something to make it easier to plug stuff into PCs has ballooned into the default wired interface for almost anything digital. Charging, connecting, communicating. If it’s done over a wire, it’s probably USB.

USB has also historically majored in maximum backward compatibility, both in terms of the physical format and the digital signaling. It’s been revised several times in the name of boosting bandwidth from the original 1.0 spec to today’s 3.0. That’s involved a journey from 12Mb/s at launch, through 480Mb/s for USB 2.0, 5Gb/s for 3.0, and latterly, 10Gb/s for USB 3.1.
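As a back-of-the-envelope check on those figures, the raw line rates translate into payload ceilings once you account for line encoding: USB 3.0 uses 8b/10b encoding (80 percent efficient), while USB 3.1’s 10Gb/s mode switches to the leaner 128b/132b. A rough sketch (the function name is our own; these are best-case ceilings, not real-world speeds):

```python
def payload_ceiling_mb_s(line_rate_gbps, enc_payload_bits, enc_total_bits):
    """Best-case payload rate in MB/s (1 MB = 1e6 bytes) after line
    encoding, before any protocol overhead is subtracted."""
    return line_rate_gbps * 1e9 * enc_payload_bits / enc_total_bits / 8 / 1e6

# USB 3.0: 5 Gb/s line rate with 8b/10b encoding -> 500 MB/s ceiling
usb30 = payload_ceiling_mb_s(5, 8, 10)
# USB 3.1: 10 Gb/s line rate with 128b/132b -> roughly 1,212 MB/s ceiling
usb31 = payload_ceiling_mb_s(10, 128, 132)

print(f"USB 3.0 ceiling: {usb30:.0f} MB/s")
print(f"USB 3.1 ceiling: {usb31:.0f} MB/s")
```

Real drives and controllers land below these numbers once protocol overhead is factored in, but they explain why doubling the line rate more than doubles usable throughput.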

What hasn’t changed is the familiar rectangular socket. Throughout the bandwidth bump cycles, that has remained. Indeed, older devices could plug into revised sockets and function fine, though obviously the lowest common denominator prevails—device or host will default to the slower of the two.

Of course, USB has acquired a few frills over the years. Mini and micro sockets for smaller portable devices have appeared, and the standard rectangle "A" connector has also been accompanied by the squarer "B" interface, the latter being familiar as the interface used most commonly to hook up multi-port USB hubs.

When USB 3.0 appeared, the standard got its first electrical upgrade, too. The pins and wires went from four plus a shield, to nine plus shield. The plastic internals of the female sockets were also colored blue to aid identification. But the new physical interface was cleverly designed, so backward compatibility was retained.

With the introduction of USB Type C, however, the near 20-year run of backward compatibility will be broken. But for some pretty decent reasons. First up is ye olde bandwidth. Developed at around the same time as USB 3.1, Type C supports all 10Gb/s of USB 3.1. Yay.

The other raison d’être is simplicity. Like Apple’s Lightning and also the Thunderbolt interface, it’s reversible. In other words, you can whack it in without worrying whether you’ve got it the right way up. That sounds like a minor convenience. But if you’re fumbling around behind a PC in the dark, for instance, it’s a fairly significant boon to be able to just ram it in.

LORD OF THE INTERFACES


Even better, Type C is good for both devices and hosts, putting an end to the Type A and Type B dichotomy. It’s also minuscule in terms of its physical proportions; it’s much, much smaller than the ubiquitous Type A rectangle. So, it’s as good for desktop gear as it is for mobile hardware. Think of it as the One Ring of USB. It’ll rule them all.

Actually, a bit like the One Ring, USB Type C has a slight problem with megalomania. It wants to rule more than just USB. It wants to own other interfaces, too. Perhaps that’s not very fair. What’s really happened is that Intel’s third revision of its Thunderbolt interface includes a USB 3.1 controller and is compatible with the Type C connector.

That’s an intriguing proposition because it means you can have all the benefits of USB and Thunderbolt in one interface. From USB, you get the broad compatibility and some decent speed. From Thunderbolt, you get some seriously sexy new goodies.

The first is DisplayPort support. That means you can hook up all kinds of cutting-edge monitors and displays to what is a general-purpose interface. Then there’s PCI Express support, which is handy for external drives, but also opens up options for running external graphics cards.

Thunderbolt 3 also ups its bandwidth ante to fully 40Gb/s. Consider, if you will, DisplayPort, USB 3, and PCI Express over a single Thunderbolt port. Nice. Beyond Apple’s MacBooks, Thunderbolt hasn’t gained much traction to date. But aligning it with USB Type C will almost certainly change that.

Of course, much of this technology hasn’t reached existing motherboards, or PCs generally. But there have been plenty of changes when it comes to storage interconnects. Very soon after the introduction of the first solid-state drives, it became clear that the SATA standard wasn’t going to be good enough.

At first, it was a simple bandwidth issue. In practice, the fastest SATA 6Gb/s iteration tops out at around 550MB/s. That's nice compared to magnetic drives of yesteryear, but pedestrian in an age of solid-state componentry and GB/s of system and graphics memory bandwidth.
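The arithmetic behind that 550MB/s figure is straightforward. SATA 6Gb/s uses 8b/10b line encoding, so every eight payload bits travel the cable as ten, and the remaining gap down to observed speeds is command and framing overhead. A quick sketch of the sums:

```python
# SATA 6Gb/s: 8b/10b encoding means only 80% of the line rate is payload.
line_rate_bits = 6e9
encoding_efficiency = 8 / 10

# Divide by 8 to get bytes, by 1e6 to get MB/s (1 MB = 1e6 bytes).
theoretical_mb_s = line_rate_bits * encoding_efficiency / 8 / 1e6
print(f"Theoretical SATA 6Gb/s ceiling: {theoretical_mb_s:.0f} MB/s")

# Protocol overhead eats the rest, which is why real drives plateau
# around 550 MB/s rather than hitting the full 600.
```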

More recently, it’s become clear that the AHCI protocol that PCs use to control hard drives is suboptimal for SSDs. The solution, as it turns out, is PCI Express. Unfortunately, the solution is also more complicated than that. Initially, it seemed like a new standard that combined the SATA physical interconnect with the modular bandwidth of PCI Express was the future for desktops and 2.5-inch drives. This is, or perhaps was, SATA Express.

HIGH-BANDWIDTH INTERCONNECTS IN FULL

COMPETING FOR LANES


In parallel, another new standard was born. It’s known as M.2 and involves compact drives based on bare circuit boards, plugging directly into slots rather than via cables. M.2 is also based on PCI Express and looked ideal for both portable PCs and micro systems.

At the same time, a new control protocol optimized for solid-state storage — NVMe — popped up, and the future of PC storage seemed to make sense. And then it didn’t. Firstly, while SATA Express was widely adopted by motherboard makers, actual drives failed to materialize.

Meanwhile, although M.2 drives were launched and motherboards had slots to accept them, those drives lacked NVMe support, so the true promise of PCI Express storage wasn’t realized. What’s more, as we got to grips with the core concept of PCI Express storage, it became obvious that current Intel platforms aren’t really built with it in mind.

That’s because the native PCI Express connectivity is built into the CPU itself. For Core i5 and Core i7 CPUs on the LGA1150 socket, that connectivity is limited to a single 16-lane port. In an ideal world, all 16 of those lanes will be dedicated to graphics. Take any lanes away for use with an SSD, and the graphics subsystem will drop down to eight lanes.

Of course, the rest of the platform’s connectivity comes from the external chipset, the PCH, which does provide another eight lanes. But these are slower 2.0-spec lanes rather than the CPU’s 3.0 lanes. More to the point, the PCH connects to the CPU via a DMI 2.0 bus that shares its 20Gb/s across all subsystems. So, any PCI Express drives could be battling with the likes of USB devices for bandwidth.
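To put rough numbers on that bottleneck, here’s a sketch of per-lane bandwidth for each PCIe generation (the function name is our own; encoding figures are per the PCIe specs, and 1 GB here means 1e9 bytes):

```python
def pcie_lane_gb_s(transfer_rate_gt_s, enc_payload_bits, enc_total_bits):
    """Per-lane payload bandwidth in GB/s for a given PCIe generation."""
    return transfer_rate_gt_s * enc_payload_bits / enc_total_bits / 8

gen2_lane = pcie_lane_gb_s(5, 8, 10)     # PCIe 2.0: 8b/10b -> 0.5 GB/s
gen3_lane = pcie_lane_gb_s(8, 128, 130)  # PCIe 3.0: 128b/130b -> ~0.985 GB/s

# A four-lane PCIe 3.0 SSD could, in theory, move nearly 4 GB/s...
ssd_x4 = 4 * gen3_lane

# ...but DMI 2.0 is essentially four PCIe 2.0 lanes, so its 20 Gb/s raw
# rate nets out at about 2 GB/s, shared with USB, SATA, and the rest.
dmi2_cap = 4 * gen2_lane

print(f"x4 PCIe 3.0 SSD: ~{ssd_x4:.1f} GB/s vs DMI 2.0 cap: {dmi2_cap:.1f} GB/s")
```

In other words, even before any USB or SATA traffic joins the queue, a fast PCIe SSD hanging off the PCH can only ever see about half its potential bandwidth.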

In an ideal world, the CPU itself would have a few spare lanes to hook up to SSDs. That’s exactly what will happen with Intel’s next-gen Skylake CPUs. They get 20 native PCIe lanes, leaving four free for storage. The final part of the PCIe storage puzzle is U.2. For more on that, point your peepers at “U.2 comes to the PC,” to the right.

Put all that together and the PCs of the future begin to take shape. Imagine, perhaps, a PC where all peripherals, including your ultra high-res monitor, are daisy-chained off a single port. Meanwhile, you’ll have storage that cranks out the sort of GB/s speeds we used to associate with RAM. We may not be there yet, but at least you now know what’s coming.