How hard can it be to determine whether a computer works as promised? Step one: turn it on. Step two: try to solve some problems. If it doesn’t work, it doesn’t work. Right?
Things are never so simple in the real world, of course. And on the highly contested frontiers of quantum computing, matters are more complex still.
Many of you are no doubt familiar with the saga of the Canadian company D-Wave, which claims to have created the first commercially available quantum computer. Short version: In 2007, at a press conference in Mountain View, California, the CEO Geordie Rose switched on what he said was a 16-qubit adiabatic quantum computer and invited the audience to watch it solve a Sudoku puzzle. The backlash from the academic quantum-computing research community was ferocious and immediate; experts lined up to criticize D-Wave’s bold claims. But D-Wave stuck around, and in time, it began attracting window shoppers. In 2009, the company partnered with a research team at Google. In 2011, it announced the launch of D-Wave One, a $10 million machine it billed as the world’s first commercially available quantum computer. Lockheed Martin bought the company’s next machine, the 512-qubit D-Wave Two, and made it available to researchers outside the company.
All the while, D-Wave, its allies, and its critics have been sparring, publishing study after study attempting to figure out what’s going on inside that black box.
The latest entry is out today in the new issue of Science. (The paper has been available in preprint since January.) A group of researchers from ETH Zurich, Google, Microsoft, the University of Southern California, and the University of California at Santa Barbara gained access to Lockheed’s D-Wave machine and subjected it to a series of tests. They wanted to see whether the machine exhibited quantum speedup—that is, whether it was any faster than a classical computer running the same sorts of problems.
The first thing to know about the D-Wave Two is that it is not a general-purpose quantum computer—it’s what is called a quantum annealer, which should, in theory, be capable of solving certain types of optimization problems faster than a classical computer. To test it, the authors of the Science paper ran 1,000 randomly chosen cost-function problems on both the D-Wave Two and a classical computer running simulated annealing, the closest classical analogue of what the D-Wave hardware does.
They found that, overall, the D-Wave did not exhibit quantum speedup on the set of problems used. The group phrased its findings very diplomatically. “This does not mean the device cannot have quantum speedup,” lead author Matthias Troyer says. “It just means that in the tests we conducted, [quantum speedup] was not there.”
Naturally, the people at D-Wave are sensitive about how this paper is being received. Colin Williams, director of business development for D-Wave, repeatedly emphasized to me in a phone interview that Troyer’s results only dealt with a specific class of problems. “The paper that’s coming out in Science today doesn’t say anything fundamental about the scaling of quantum annealing,” he says.
Moreover, he says, Troyer’s group chose an inappropriate set of problems to perform this test. “The problem ensemble was too easy,” he says. Given harder problems, he says, the D-Wave machine would have had a chance to distinguish itself. Williams points to a recent paper by Helmut Katzgraber for support, along with a blog post on benchmarking the D-Wave Two by the Google Quantum A.I. Lab team. Williams also says that problems of the sort outlined in a recent talk by Itay Hen of USC could point to better benchmarking problems.
Any outsider who wades into the D-Wave controversy probably wonders: Why, again, is it so hard to settle this debate? Troyer explained it well in an email, so I’ll quote him at length:
It is particularly difficult here because they [D-Wave] use qubits with very short coherence times of the order of a few nanoseconds, while the total time to perform one annealing run is 20 microseconds. The qubits are thus coherent for only a fraction of the total time, and this raises the question whether they are "coherent enough" or "quantum enough" to show a quantum speedup.
The situation is made even more complex because we do not know theoretically whether even perfectly coherent qubits would offer any advantage (quantum speedup) for the type of optimization problems they are interested in. Thus, if one is unlucky, even perfect qubits might not offer any advantage, while if one is lucky one might see quantum speedup even with imperfect qubits with short coherence times.
Because of both these reasons—unknown potential for quantum speedup and unknown effect of short coherence times—one needs to experimentally investigate the potential for quantum speedup. That's what we aimed for in our paper, where we developed a methodology to reliably detect any potential quantum speedup.
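The heart of that methodology is comparing how the time to find a solution *scales* as problems get bigger on each machine, rather than who wins a single race at one fixed size. A hypothetical sketch of such a comparison—the numbers below are invented for illustration, not data from the paper:

```python
import math

# Hypothetical time-to-solution measurements (seconds) at several
# problem sizes N. Invented numbers, purely for illustration.
sizes = [128, 256, 384, 512]
t_classical = [0.002, 0.016, 0.13, 1.1]
t_quantum = [0.001, 0.02, 0.4, 8.0]

def fit_exponent(sizes, times):
    # Least-squares fit of log T = log(a) + b*N; returns b,
    # the exponential scaling rate of time-to-solution.
    n = len(sizes)
    xbar = sum(sizes) / n
    ybar = sum(math.log(t) for t in times) / n
    num = sum((x - xbar) * (math.log(t) - ybar) for x, t in zip(sizes, times))
    den = sum((x - xbar) ** 2 for x in sizes)
    return num / den

b_c = fit_exponent(sizes, t_classical)
b_q = fit_exponent(sizes, t_quantum)

# A scaling speedup requires the quantum exponent to be smaller:
# the quantum machine's advantage must grow with problem size.
print("scaling speedup" if b_q < b_c else "no scaling speedup")
```

On these made-up numbers the "quantum" device is faster at small sizes but scales worse, so there is no speedup in the sense that matters—exactly the kind of distinction the paper's methodology is built to draw.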
Colin Williams told me that “in the next six months or so” we would see new results showing that the D-Wave machine is indeed faster than a classical annealer, given the right problems. If D-Wave does show quantum speedup, it will be a big deal—a “huge advance,” Troyer says. Even so, quantum computing will still face a slog in the years ahead. “There are big challenges to making this a real device even if there is quantum speedup,” Troyer says. “To make it useful as a product, they have a long way to go after [quantum speedup] is shown.”
I mentioned Troyer’s paper the other evening to a physicist friend who happened to be in town. He’s finishing up a postdoc in Europe and thinking about what to do next. He just turned down a job doing quantum computing research for a big private player because the timing wasn’t right—he would have had to leave his postdoc early, and, he says, a job like that can wait a few months. There is plenty of time. It’s going to be a while before anyone has a practical quantum computer.
I suspect Troyer would agree.