Saturday 20 February 2016

Quantum Leap

Google and NASA's one small step for quantum computing could prove to be a giant leap for mankind.

In the last 45 years conventional computing has seen performance increases of around two million-fold, an astonishing rate that has changed the way we live. So when two organisations as respected as Google and NASA announce a 100 million-fold increase in computing performance, the potential outcome is nothing less than world-changing.


That incredible figure is the product of quantum computing, and of course it comes with one or two caveats. But with such big names behind it, we’re confident that we’re seeing a huge step forward in the viability of quantum computing to solve the world’s toughest problems.

The announcement didn’t come out of nowhere. Research on quantum computers has been running for decades, and Google and NASA’s own efforts in this particular case date back three years. It was then that they shelled out a serious chunk of money on the one and only commercially available quantum computer, from controversial pioneers D-Wave.

The intervening months went by with little more than the odd hint at what these organisations were doing with their new toy in their Quantum Artificial Intelligence Lab (QuAIL). Until finally, in December 2015, the 100m-fold claim was made. We’ve now talked to researchers at QuAIL to bring you the full story and its potentially huge impact.

To make any sense of that, though, we’ll need to turn the clock back a bit and lay some groundwork of our own.

Back to basics


The computer that prompted this investigation into quantum computing is a rather special type of quantum computer but, to set the ball rolling, we’ll look at the universal quantum computer. This sort of computer has most in common with today’s conventional computers, and has exercised the minds of many researchers over the past few decades.

Descriptions of quantum effects often start with a disclaimer along the lines of, ‘this is really weird stuff and you’re not going to understand it’. While there’s an element of truth in that, being less flippant we could say that in the quantum realm – that is, when we’re dealing with unimaginably tiny things like individual electrons or photons – common sense no longer applies, and things behave in strange ways.

Let’s take an example. In digital electronic circuits, things are in one of two states that represent the zeros and ones of binary arithmetic. Normally these states are represented by the properties of a bunch of electrons, but we can envisage that it might be possible to use the properties of a single electron to represent a zero or a one. An electron has a property referred to as ‘spin’, which can be ‘up’ or ‘down’. In reality, it’s not spin as we understand the word, and the properties of up and down don’t really have parallels in the non-quantum world, but the important point is that they could, potentially, represent zeros and ones. But here’s where it gets strange.

The spin of an electron can be caused to flip between states by applying energy while it’s in a magnetic field. A specific amount of energy is required to cause the spin to flip, and it’s what happens when insufficient energy is used that’s really interesting. Because spin can only ever be up or down, the spin can’t partially flip but, instead, it enters a so-called state of superposition. This is like being both up and down simultaneously or, in binary terms, 0 and 1 at the same time.

Neither one nor t'other


Frustratingly, a state of superposition can’t be observed. It’s not that nobody has figured out how to do it; rather it’s been proven that attempting to observe it will cause the state of superposition to be lost – decoherence, as it’s called. So, reading the state of an electron with a zero/one superposition will only give a value of zero or one. In an equal superposition – one in which zero and one have equal probabilities – if you carried out the experiment lots of times, half the time you’d get a zero and half the time you’d get a one.
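
As a purely classical illustration of that last point – a sketch of the statistics, not of real quantum hardware – the snippet below models a qubit as a pair of amplitudes and ‘reads’ an equal superposition many times; the amplitude values and function names are ours, chosen only for this example.

```python
import random

def measure(a0, a1):
    # Reading the qubit collapses the superposition: we get 0 with
    # probability |a0|^2 and 1 with probability |a1|^2, nothing in between.
    p0 = abs(a0) ** 2
    return 0 if random.random() < p0 else 1

# An equal superposition: both amplitudes are 1/sqrt(2), so each outcome
# has probability 0.5.
a0 = a1 = 1 / 2 ** 0.5

results = [measure(a0, a1) for _ in range(10_000)]
print("zeros:", results.count(0), "ones:", results.count(1))
# Roughly 5,000 of each - the 50/50 split described above.
```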

It would be tempting to believe, therefore, that the state of superposition had never really been achieved and that the spin simply flipped half the time. However, scientists have shown without doubt that superposition does exist, and they’ve even harnessed the effect to carry out quantum computations.

It might seem inconceivable that this effect can be harnessed for computing if reading the answer destroys it, revealing just a single result. However, although billions upon billions of calculations might need to be done in some of the most difficult tasks carried out by today’s computers, often only a handful of answers are required. The strength of a quantum computer, therefore, is its speed in carrying out all those intermediate calculations, and it’s not hard to see how it could achieve that.

We’ve seen that a single bit – or a qubit, as a bit in a quantum computer is called – can hold the values of zero and one simultaneously. Just as a register of eight bits in a conventional computer can hold any value between 00000000 (0) and 11111111 (255), in a quantum computer, a register of eight qubits can hold all 256 values at the same time.

More importantly, any calculation performed on that register will result in 256 parallel computations also occurring. This parallelism increases exponentially so, by the time we get to the equivalent of today’s 64-bit processors, a quantum computer could perform 18 quintillion – that’s 18 billion billion – parallel calculations at a single stroke.
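
The arithmetic behind those figures is simply powers of two; this quick check (any language would do – we’ve used Python) reproduces both numbers:

```python
# n qubits in superposition span 2**n values at once.
for n in (8, 64):
    print(f"{n:>2} qubits -> {2 ** n:,} simultaneous values")
# Output:
#  8 qubits -> 256
# 64 qubits -> 18,446,744,073,709,551,616  (about 18 billion billion)
```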

A quantum computer that worked in much the same way as the processor in your PC by executing logic and arithmetic instructions sequentially, but using qubits instead of bits, would be a universal quantum computer. Like today’s computers, it would be able to carry out pretty much any logical problem but, so long as the registers were as large as those in today’s processors, it would do so immeasurably faster, at least for some applications.

This has been the goal of most quantum computing researchers since the 1990s, but progress has been slow. The difficulty lies in the fact that, just as any attempt at reading something in a state of superposition causes decoherence, so does any unintentional interaction with the environment.

To make things more difficult, a quantum computer depends on the qubits in a register being ‘entangled’. This peculiar quantum effect means that none of the qubits can be thought of individually. If this effect were scaled up and we had two entangled coins that were flipped at the same time, either both would show heads or both would show tails, but never one of each.
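
As a rough classical analogy – it captures only the correlation, not the deeper physics of entanglement – the sketch below contrasts a pair of ‘entangled’ coins, whose faces are decided by one shared outcome, with two coins flipped independently:

```python
import random

def entangled_pair():
    # One shared outcome decides both coins: always two heads or two tails.
    outcome = random.choice(["heads", "tails"])
    return outcome, outcome

def independent_pair():
    # Two separate flips: they agree only about half the time.
    return random.choice(["heads", "tails"]), random.choice(["heads", "tails"])

trials = 10_000
entangled_agree = sum(a == b for a, b in (entangled_pair() for _ in range(trials)))
independent_agree = sum(a == b for a, b in (independent_pair() for _ in range(trials)))
print("entangled coins agree:  ", entangled_agree, "of", trials)    # all of them
print("independent coins agree:", independent_agree, "of", trials)  # roughly half
```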

However, because of that property, maintaining superposition becomes increasingly difficult as the number of qubits increases; if one of them decoheres, all will suffer that same fate. Scientists have tended to use esoteric means of preventing interaction with the environment, for example suspending single particles in free space using electric fields. In the early days, they managed to ramp up the number of qubits from two to around a dozen, but increases to the sort of figure needed to realise the true potential of quantum computing have proved elusive – or so it seemed.

It’s important to recognise that a lot more than 32 or 64 qubits will be needed to produce a practical quantum processor, just as a conventional 64-bit processor contains hundreds of 64-bit wide registers, even if we exclude the cache. But while the headline figures may not have increased much, behind the scenes developments have brought the prospect of universal quantum computing a lot closer, as a raft of recent announcements have shown.

The D-Wave story


We’ll come back to these developments later; for now, let’s concentrate on the key commercial player in quantum computing today, Canada’s D-Wave Systems. D-Wave first introduced a 28-qubit computer in 2007, and has since increased the qubit count, in leaps and bounds, to the 1,097-qubit D-Wave 2X introduced last year.

Given the huge discrepancy between the figures reported in scientific papers and those claimed by D-Wave, it’s hardly surprising that the company has had its critics. Many quantum computing experts were vociferous in their dismissal of D-Wave computers, claiming they didn’t really employ quantum effects, with the most outspoken accusing the company of deception.

While the world of academia had its doubters, NASA and Google took D-Wave’s claim seriously. Or at least seriously enough to shell out $10 million on a 512-qubit D-Wave Two, later upgraded to a 1,097-qubit D-Wave 2X system. Together with the Universities Space Research Association, they set up QuAIL at NASA’s Ames Research Center in Moffett Field, California.

We didn’t hear a lot about the initiative, however, until last December, when NASA invited the world’s press to see its acquisition and Google promised a “watershed announcement”.

Put simply, NASA and Google revealed that the D-Wave 2X had achieved a 100 million-fold speed improvement over a single Intel Xeon E5-1650 core running at 3.2GHz. This implied that the machine genuinely does employ quantum effects after all. While their comparison only involved a single core, it seems to be an impressive result, as a few simple calculations reveal.

Today’s fastest supercomputer, China’s Tianhe-2, harnesses the power of over three million cores. However, it cost $390 million to build, and you’d need around 32 of them to muster the 100 million cores implied by the claimed speed-up. The total cost, therefore, would be in the region of $12 billion, compared with a mere $10 million for the D-Wave system.
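
A quick back-of-the-envelope check of that comparison, using the approximate core count and prices quoted above:

```python
speedup       = 100_000_000  # claimed speed-up over a single Xeon core
tianhe2_cores = 3_120_000    # Tianhe-2 has just over three million cores
tianhe2_cost  = 390_000_000  # dollars
dwave_cost    = 10_000_000   # dollars

machines = speedup / tianhe2_cores
print(f"Tianhe-2 machines needed: {machines:.0f}")                       # ~32
print(f"Equivalent cost: ${machines * tianhe2_cost / 1e9:.1f} billion")  # ~$12.5 billion
print(f"D-Wave cost:     ${dwave_cost / 1e6:.0f} million")               # $10 million
```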

Some experts are still taking these results with a pinch of salt, but there’s one vital point we need to make about D-Wave’s approach to quantum computing: its machines are not universal quantum computers. Instead, they support something called a quantum annealing architecture, which is designed to solve a particular type of problem that proves taxing for conventional computers.
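
To give a flavour of the kind of task an annealer is built for, here is a deliberately tiny optimisation problem in the QUBO (quadratic unconstrained binary optimisation) form that quantum annealers minimise. The weights are invented purely for illustration, and the brute-force search below merely stands in for what the D-Wave hardware does physically:

```python
from itertools import product

# Invented example weights: diagonal entries reward or penalise single bits,
# off-diagonal entries couple pairs of bits.
Q = {
    (0, 0): -1.0, (1, 1): -1.0, (2, 2): 2.0,
    (0, 1): 2.0,  (1, 2): -1.5, (0, 2): 0.5,
}

def energy(bits):
    return sum(w * bits[i] * bits[j] for (i, j), w in Q.items())

# Brute force checks all 2**n assignments - trivial for 3 bits, hopeless for
# the ~1,000 qubits of a D-Wave 2X, which is where annealing is meant to help.
best = min(product((0, 1), repeat=3), key=energy)
print("lowest-energy bits:", best, "energy:", energy(best))
```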

Approach shot


Given that the evidence points to the existence of a working quantum annealing computer while the universal equivalent is still on the drawing board, let’s look at how the two approaches compare – to find out whether this different approach represents the future of quantum computing, or whether it’s just a stop-gap measure until something better comes along.

We spoke to NASA’s Davide Venturelli, who currently oversees the investigations carried out at QuAIL, about the sorts of problems the D-Wave system can be used to solve.

The application that achieved that 100 million-fold speed increase was a specially written benchmark – software designed specifically to test the hardware. But according to Venturelli, many important real-world applications have similar properties. “The features of this application appear in almost all large-scale useful ‘hard’ problems,” he said.

Google has given some indication of what these applications might be by referring to machine learning. The company said: “Machine learning is all about building better models of the world to make more accurate predictions. If we want to cure diseases, we need better models of how they develop. If we want to create effective environmental policies, we need better models of what’s happening to our climate. And if we want to build a more useful search engine, we need to better understand spoken questions and what’s on the web so you get the best answer.”

Better online searching will benefit us all, but quantum applications tend not to be everyday tasks. We asked Venturelli if he ever envisaged a day when we might have this sort of hardware on our desks. “I don’t envisage such a day before 2050,” he said, “but I think we will have access to quantum computing in the Cloud as a SaaS (software as a service) much before that.”

Given that, for many years, all the talk about quantum computers revolved around universal quantum computers, yet D-Wave got to market first with a very different quantum architecture, we were keen to know whether the universal approach has been marginalised or whether there’s a place for both. Alternatively, when universal quantum computing does arrive, might it mark the end for quantum annealing, just as the digital computer ended up marginalising and then replacing the analogue computer?

We put these points to Venturelli, who thinks that quantum annealing might represent a stepping stone. “The approach taken by D-Wave is one of the many possible routes to a universal quantum computer,” he said. “The importance of universal quantum computation has not diminished, but the focus to near-term approaches is justified because, before D-Wave, we were not really sure we could have moderate scale devices that exploit quantum effects in a useful way for computation so soon. I have a feeling that quantum annealing will evolve to a universal quantum computing platform somehow, but it is hard to tell.”

Universal developments


While early experiments into quantum computing tended to be carried out in the world of academia, several well-known computer companies are now taking an interest, and a few have nailed their colours to the mast in predicting future timescales. Just a few months ago Intel teamed up with Delft University of Technology (TU Delft) and TNO, the Netherlands Organisation for Applied Scientific Research, in a $50 million project that aims to accelerate progress in quantum computing.

Microsoft, meanwhile, is also active in the field and has indicated that a practical quantum computer might be only 10 years away. IBM has been making important strides in quantum error detection and correction, even though this means that several physical qubits have to be used to create one logical qubit. And then there’s Google. Quite separately from its involvement with NASA in examining the potential of D-Wave’s computers, Google has joined forces with researchers from the University of California, Santa Barbara, to develop its own quantum computing hardware.

While these research initiatives span the whole gamut of quantum computing technologies, some of them aim to deliver the holy grail of universal quantum computing. We talked to Professor Lieven Vandersypen at TU Delft, who pointed to several recent developments that make the universal quantum computer look more likely. “There were two advances of the last few years that have made me much more optimistic than a few years back,” he told us.

“First, the error rate we can tolerate for quantum error correction has jumped significantly; and second, coherence times for solid-state qubits have gone up by orders of magnitude by new materials developments.” However, he did have one important proviso. “If people say that a practical machine may be possible in a dozen years or so, they also count on progress in algorithms for ‘few-qubit’ applications; ie, on finding ways to make use of hundreds of qubits rather than many millions. I think there is good reason for optimism here.”

According to Prof Vandersypen, there’s still a lot to be done. “A number of scientific challenges remain to be addressed; every step we take has never been taken before, and it takes time to figure out the right ways to do things. This includes making large numbers of qubits that are all identical, developing control electronics – possibly partly cryogenic – for addressing these large numbers of qubits, coming up with suitable interfaces and interconnects to pass on all the measurement and control signals, and designing a full quantum computer architecture. QuTech (a collaboration between TU Delft and TNO) is working on all these aspects, together with Intel and others.”

So far, so good, but what could a universal quantum computer be used for? After all, the usual answer is factoring large numbers and, in so doing, cracking the public-key ciphers that protect much of today’s online traffic. But potentially derailing the online security on which commerce is so reliant sounds more like a liability than a benefit. We asked whether this architecture truly will be universal and, if so, what the real killer apps will be.
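
To see why factoring is the stock answer, recall that RSA-style public keys rest on the difficulty of splitting a very large number back into its two prime factors. A toy classical factoriser like the one below – plain trial division, mentioned only to motivate the quantum alternative, Shor’s algorithm – finishes instantly on small numbers but would be hopeless on the 600-plus-digit numbers used in real keys:

```python
def factor(n):
    # Naive trial division: fine for small n, utterly impractical for the
    # numbers used in real public-key cryptography.
    if n % 2 == 0:
        return 2, n // 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 2
    return None  # n is prime

print(factor(2021))  # (43, 47) - recovered instantly at this size
```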

According to Prof Vandersypen, it will be genuinely universal. “By universal we mean that it can be programmed to solve any problem for which there exists an algorithm – quantum or classical,” he explained. He then went on to refer to likely applications. “What motivates me most are the quantum algorithms achieving exponential speed-up for simulating molecules and materials, which may contribute to the development of new drugs, to designing materials for transporting or storing electricity without losses, and so forth”.

He was even optimistic that quantum computing will eventually go mainstream. “Currently known applications are not for immediate consumer use, but who knows how the field will develop? I am hopeful that many more applications will be discovered, including mainstream applications.”

The elusive quantum future


So it seems that an application-specific quantum computer is now with us – although some experts remain to be convinced – but the universal quantum computer still hasn’t escaped from the research labs. After decades of promise, though, many researchers believe that it will break out in the next 20 years, and possibly a lot sooner than that. While you might never have one on your desk, huge benefits to humanity are expected.

Intel and Microsoft have ongoing quantum computing research initiatives, and we might reasonably expect these major players in the world of classical computing to lead the way with the quantum alternative. Both companies declined our invitation to predict what the future holds. Does that mean the mooted 10- to 12-year timescales are over-optimistic, or are they just playing it safe? We’d have to conclude that the future of quantum computers is as hard to pin down as an electron in superposition.