August 27, 2013

Jesse Dunietz is a Ph.D. student in computer science at Carnegie Mellon University. He studies natural language processing, focusing on building technologies that enable computers to interpret metaphors in useful and intelligent ways. Originally from New Jersey, he received his bachelor's degree in computer science from MIT. He has taken his computer science skills into a wide variety of contexts, including software companies, charitable organizations, and physics laboratories. Jesse is one of the founders of Carnegie Mellon's Public Communication for Researchers program, which trains graduate students in a variety of public communication skills. Follow on Twitter @jdunietz.
The word “quantum” imbues any topic with instant mystique. Unfortunately, it often doubles as a “keep out” sign – a signal that an impenetrable quagmire of math and physics awaits anyone foolish enough to peer behind the label. “Quantum computing” is no exception: an air of inscrutable mystery surrounded the recent flurry of Internet stories about quantum computers. These machines exploit the mysterious phenomena of quantum mechanics – the physics of the ultra-tiny – to solve problems at speeds that leave traditional computers in the dust.

The latest buzz surrounded the announcement of a new lab where the Universities Space Research Association, Google, and NASA Ames will operate a quantum computer manufactured by D-Wave, a Canadian computer company. The announcement triggered a wave of breathless stories about the near-magical capabilities of such computers. All this coverage has only made quantum computing seem more impenetrable: what makes a computer “quantum?” Why do we want a quantum computer? And has D-Wave actually built one that lives up to the excitement? To address these questions, we’ll need to start with some background on how computers represent information.

**Classical Information vs. Quantum Information**

Few people can claim to have invented a new scientific field. Fewer still can claim to have invented several. Possibly only Claude Shannon could claim to have done so by the age of 32.

Shannon first became an academic celebrity when, in his 1937 master’s thesis, he laid the mathematical foundations for digital circuits, essentially inventing the digital computer. After a few years’ hiatus, he returned to the spotlight in 1948 with a paper modestly titled “A Mathematical Theory of Communication.” In it, Shannon described a model of communication that still defines how computer scientists think about information today.

Shannon was struggling to describe mathematically the concept of sending a message – a piece of information transmitted from Point A to Point B – without reference to any particular transmission technology. One of his chief insights was that any piece of information can be encoded in a simple scheme with just two symbols: the humble 0 and 1. Each 0 and 1 Shannon called a *bit* – a fundamental unit of information.
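Shannon’s two-symbol scheme is easy to see in action. Here is a minimal Python sketch (the function names are my own, chosen for illustration) that encodes a short text message into bits and decodes it back:

```python
def to_bits(message):
    """Encode each character as 8 bits (0s and 1s), using its standard ASCII code."""
    return ''.join(format(ord(ch), '08b') for ch in message)

def from_bits(bits):
    """Decode a string of 0s and 1s back into characters, 8 bits at a time."""
    chunks = [bits[i:i + 8] for i in range(0, len(bits), 8)]
    return ''.join(chr(int(chunk, 2)) for chunk in chunks)

bits = to_bits("Hi")
print(bits)             # 0100100001101001
print(from_bits(bits))  # Hi
```

Any message at all – text, images, sound – can be pushed through some such encoding, which is exactly what made the bit so universal.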

To this day, the bit is the currency of information inside every computer you own. A traditional computer needs each bit to be unambiguously either a 0 or a 1 at every step in a calculation. An enormous amount of engineering effort goes into enforcing this. In fact, the time it takes for wavering electrical signals to settle down to a clear 0 or 1 is largely what determines the speed of a chip.

Of course, a bit on its own, limited to just two possible messages, isn’t terribly useful. Bits are mainly interesting in crowds. If we have two bits, each can store either a 0 or a 1, giving four possible configurations the pair can store (00, 01, 10, and 11). Each configuration, or *state*, can be interpreted as a different message (say, the numbers 0, 1, 2, and 3), so we can now represent four distinct messages. The number of possible states – and thus the amount of information we can store – continues to double with each additional bit.
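The doubling is easy to check by brute force. A quick Python sketch that enumerates every state a group of bits can hold:

```python
from itertools import product

def states(n):
    """Every configuration n bits can hold: each bit is independently 0 or 1."""
    return [''.join(bits) for bits in product('01', repeat=n)]

print(states(2))  # ['00', '01', '10', '11'] – four possible messages
for n in range(1, 5):
    # The count doubles with each added bit: 2, 4, 8, 16, ...
    print(n, 'bits:', len(states(n)), 'states')
```

Thirty-three bits are already enough to give every person on Earth a distinct state.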

Ever since that paper, the bit has driven the development of computer technology. A few decades later, though, several physicists and engineers began to ponder more exotic ways of representing information. In particular, they looked to quantum mechanics: could the peculiar physics that apply at tiny scales change the game of storing information?

Enter the *qubit*, the quantum mechanical analogue of the bit. Originally a mere theoretical abstraction, the qubit is a considerably stranger beast than its classical cousin. Like a bit, a qubit is a unit of information. Also like a bit, each qubit can be in one of two physical states, which we’ll again call 0 and 1. But there the similarities end: due to the vagaries of quantum mechanics, a qubit can exist in multiple states simultaneously. Much like Schrödinger’s famous cat, any given qubit can hold a 0 and a 1 *at the same time*. It can be, say, 25% 0 and 75% 1. Or it can be split 50/50, or 63.7/36.3. The possibilities are endless.

If this seems counterintuitive to the point of impossibility, you can take solace in knowing that the great physicist Richard Feynman thought so too. He once famously remarked, “I think I can safely say that nobody understands quantum mechanics.” Unfortunately, it only gets weirder from here.

A qubit’s schizophrenic condition, called a *superposition*, persists until the qubit is *measured* – that is, until it interacts with another object. It then chooses a single state – a 0 or a 1 – and sticks with it. Each state’s probability of being chosen is governed by the aforementioned percentages. So if we measure 100 independent qubits, each of which is initially 30% 0 and 70% 1, about 30% of them will come up as 0’s and 70% will come up as 1’s.

The complexity grows as you add more qubits. With 2 qubits, as with 2 bits, you can store any number from 0 to 3. But unlike the classical pair of bits, our qubit duo can also store any mixture of these four numbers.

Strangely, those percentages aren’t built into the qubit – they depend on how you measure it. Say you measure a beam of qubit-carrying photons at one angle and find they’re split 50/50. If you’d measured the same beam at another angle, you’d have gotten only 0’s. And even if a particular qubit has settled on being a 1 when measured at one angle, it may switch to being a 0 if measured at a different angle. Qubits, like bits, ultimately give you a yes-or-no answer, but which one you get isn’t fixed until you ask, and even then it depends on how you ask the question, and even *then* the answer will change if you subsequently ask the question differently. It’s like looking at one of those signs where the image changes depending on where you see it from – except the sign occasionally decides at random to show you the view from the other side.

As if all this weren’t enough, there’s yet another wrinkle: those percentages aren’t strictly percentages. If one rainstorm has a 40% chance of hitting Pittsburgh today and another shows up with a 20% chance, the probability of rain in Pittsburgh must go up (in this case, to 52%). In quantum mechanics, though, the things we’ve been calling probabilities can be negative – they can cancel out. (Technically, they’re complex numbers called amplitudes, not probabilities.) If a qubit is 40% 1, you can add 20% more 1 in a way that *cancels*, so that the final probability of getting a 1 drops to about 3.5%.

Every aspect of these behaviors is deeply unintuitive, but perfectly in line with what quantum mechanics dictates for certain physical systems. The physics behind this gets rather involved, so let’s just take it as a given that qubits exist. Why should we care? What do you do with a qubit?

Well, you build a quantum computer, of course – a machine that manipulates qubits to perform calculations. Much of the media hype has suggested that quantum computers are faster than classical computers because they calculate all answers simultaneously. In a certain sense, that’s true: qubits can store a superposition, and if you perform a calculation on the superposition, you’ll get a new superposition of all the possible results. But at the end of the day, you still have to ask the qubits what answer they hold. At that moment of truth, they won’t give you a list of answers – they’ll choose just one at random from the many they’re storing. Fast but random is hardly better than slow but correct.

What’s the secret, then? How could messy, unpredictable qubits possibly be an improvement over good old-fashioned bits?

**From Quantum Information to Quantum Algorithms**

Last month, I struck out to the back of my new apartment complex on a bold and daring mission: to do my laundry. Squashing my laundry basket precariously against the wall, I paused for an uncomfortably long time in front of the laundry room, fumbling with key after key to find the one that would open the door. For my small keychain, this was just a minor inconvenience. But what if I had 100 keys? Or 1,000? At some point, trying all the possibilities becomes impractical.

The same problem arises for computers. Word processors, for instance, are often asked to search thousands of lines for a specific word. Thousands of lines don’t faze a computer, but searching trillions is likely impractical.

Such intractable searches were on the mind of Lov Grover in 1996. Grover was a researcher at Bell Labs, a powerhouse of innovations in computing technology. Inspired by the lab’s recent breakthroughs in quantum computing, Grover realized that this new technique offered a way out of scanning every word. A quantum computer will indeed give a random answer from the possibilities its qubits contain, but the probability of each answer need not be the same. The trick, then, is to rig the game so that the correct answer is more likely than the others.

To see how, let’s return to the keys. Let’s imagine that our thousand keys are strung out on a long rod, neatly lined up. Let’s also imagine that the key we seek is made of iron, which is magnetic, whereas the others are all brass, which is not. We can’t afford to test one key at a time – but what if we could act on all of them at once?

Let’s stand our key-laden rod on its end, line up a bar magnet with it, and wave the magnet in a circle around the rod. While we’re doing that, we’ll jiggle the rod to let the keys shift. The iron key will swivel around the rod to follow the magnet, making it slightly easier to pick out. Still can’t tell which one it is? No problem; we’ll just wave the magnet again to make the key stick out farther! The rest of the keys, meanwhile, will shift randomly as we jiggle. After a few repetitions, the uninteresting keys will end up approximately where they started, and the correct one will stick out like an albino squirrel on asphalt.

The same approach drives Grover’s method for searching for a word. First, we set up a bunch of qubits (collectively, a *register*) to store a superposition of all possible locations where the word we want could be (1 for first in the list, 2 for second, etc.). Any one of these could be the correct answer, just as any key on the rod could be correct. We then follow a recipe from Grover for tweaking the entire superposition (without measuring it), much as we waved the magnet around the entire rod at once. Grover’s recipe adds probability to both the right and wrong answers, just as our jiggle-and-wave operation shifts all of the keys slightly.

The cleverness lies in one operation – one *gate*, in the parlance – of the several that make up the tweak. Given any single number, this gate would make the number’s “probability” negative if it’s the correct answer, or leave it unchanged if it’s not (compare: given any individual key, the magnet will attract or ignore it). When you operate on a superposition, though, you get back a superposition of all the possible results. When we apply this gate to our superposition, then, it will act differently on the different elements of the superposition. This allows Grover’s tweak to treat right and wrong answers differently: for the wrong ones, the additional probability *cancels* with what was already there, just as the random motions of each non-iron key average out to nothing. The net result is that we boost the probability of the correct answer. After a few boosts, we can measure the qubits, confident that we’ll probably get the right answer.
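The whole boosting cycle is small enough to simulate classically by tracking each answer’s amplitude in an ordinary list – which, of course, forfeits the quantum speedup, since a real quantum computer never stores this list explicitly. A sketch of Grover’s recipe for a search over 8 possible answers:

```python
import math

def grover(n_items, target):
    """Toy classical simulation of Grover's amplitude boosting."""
    amps = [1 / math.sqrt(n_items)] * n_items  # uniform superposition

    iterations = round(math.pi / 4 * math.sqrt(n_items))
    for _ in range(iterations):
        # The special gate: flip the sign of the correct answer's amplitude.
        amps[target] = -amps[target]
        # The rest of the tweak: reflect every amplitude about the mean.
        # Wrong answers drift back toward where they started; the flipped
        # one gets boosted.
        mean = sum(amps) / n_items
        amps = [2 * mean - a for a in amps]

    return [a * a for a in amps]  # probability of measuring each answer

probs = grover(8, target=5)
print(round(probs[5], 3))  # ~0.945: the right answer dominates after 2 boosts
```

With 8 possibilities the recipe needs only two boosts; in general the number of boosts grows as the square root of the number of possibilities, which is where the advantage over checking every entry comes from.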

Grover’s algorithm perfectly exemplifies the strategy of quantum computation. The trick is to operate on the entire superposition simultaneously, gaming the probabilities so that the right answer comes out on top. Although you may need to repeat the operation many times, the search still finishes far faster than checking every possibility on a classical machine – and for some problems, quantum algorithms turn out to be exponentially faster.

Much of the excitement about quantum computing stems from one application in particular: factoring numbers. Factoring 21 into 3 times 7 is easy, but for decades it was assumed that finding the factors of large numbers takes too long to be practical. So strong was this assumption that most encryption technologies, including those underlying e-commerce, are powered by it. But Peter Shor, then a researcher at Bell Labs and now a mathematician at MIT, wasn’t so sure. His doubt proved well-placed: in 1994, Shor electrified the world of computer science with a fast quantum algorithm for factoring large numbers. Shor’s algorithm could shatter the world of Internet security.

The devil, as usual, is in the details. Theoretically, a quantum computer could provide massive speedups for factoring, searching databases, biochemistry simulations, and other challenging problems. Practically, however, everyone who tried to build such a computer – including D-Wave – kept running into technical barriers. It is to those barriers that we turn next.

**A Tangle of Qubits**

There’s a fly in the ointment that we’ve overlooked. We’ve been assuming that when you measure a register of qubits, all those qubits decide together what answer to reveal. In reality, each qubit could make that decision independently. If that happens, their answer will be as informative as an image where every pixel is from a different search result. Similarly, tweaks to the superposition need to modify probabilities of entire numbers, not just the individual 0’s and 1’s that comprise the numbers.

The solution to this problem is *entanglement*. “Entanglement” describes a particular kind of superposition in a quantum register – one where the individual qubits’ states are bound up with each other. An entangled qubit still behaves probabilistically, but measuring it influences the choices of all the qubits entangled with it.

Entanglement is the Achilles’ heel of quantum computing, both in its scarcity and in its abundance. Prodding a large herd of qubits into entanglement gets tricky, so any practical quantum computer must contend with *insufficient *entanglement. On the other hand, it’s all too easy for qubits to get entangled with objects outside the qubit register. When this happens, they usually stop paying attention to each other, tying their fates to the external environment instead. This phenomenon, called *decoherence*, is utterly crippling to a quantum computer. Reams of research are devoted to counteracting or delaying it, but thus far the problem has stubbornly resisted these efforts.

**A Maverick Company**

And now, at last, we arrive at D-Wave. When the company was founded in 1999, it started out exploring the same types of quantum computers as most researchers. But within four years, faced with the seemingly insurmountable problem of decoherence, founder Geordie Rose ditched the goal of a gate-based quantum computer. Instead, Rose decided to shoot for a less well-studied type of device, known as an *adiabatic quantum computer* (AQC).

Like its gate-based counterpart, an AQC predisposes its qubits to yield the correct answer to a problem. Instead of repeatedly boosting the correct answer in a superposition, though, an AQC tries to avoid superpositions altogether. The qubits start off unambiguously holding the right answer to a trivial question – a non-superposition. They are then repeatedly measured, but each measurement asks the qubits a slightly different question. On each step, the question asked is brought a little closer to the question of interest, until the final measurement actually asks the question we’re trying to answer.

Normally, changing the question would be like changing the angle you’re measuring a photon-qubit at: it would make the final measurement a measurement of a superposition, so the answers would become unpredictable. If we change the question slowly enough, though, then when the second measurement pulls the qubits out of the slight superposition introduced by the change in the question, the most likely result is the correct answer to the second question. Repeatedly changing the question in this manner ultimately leaves the qubits holding the correct answer to the final question.

D-Wave’s AQC design is more resilient to decoherence. Because its goal is to *avoid* entering a superposition, it doesn’t need to protect a delicate qubit state so carefully from outside interference. One thing that does *not* come with the bargain is an exemption from entanglement, which is still required for an AQC to impose on the qubits the constraints of the question to be answered.

Since its about-face, D-Wave has become a lightning rod for controversy. The criticism has largely concerned the company’s public claims. In 2007, D-Wave hosted a flashy public demonstration of its first prototype, a 16-qubit chip dubbed Orion. Orion seemed to be miles ahead of any other quantum computing technology, but experts in the field promptly dismissed the demonstration as an uninformative PR stunt. That didn’t fluster D-Wave, which in 2011 went on to announce the 128-qubit D-Wave One. Academics remained largely unimpressed, most notably Scott Aaronson, who appointed himself “Chief D-Wave Skeptic.” Nonetheless, Lockheed Martin found the company’s case compelling enough to purchase the machine.

The past few months have seen yet another round of back-and-forth. D-Wave announced a more powerful processor, the D-Wave Two, with 512 qubits; researchers disputed its value. In May, NASA and Google announced a new “Quantum Artificial Intelligence Lab” that would operate a D-Wave Two; researchers remained skeptical. Just a few weeks ago, news organizations gushed about new research papers suggesting the D-Wave chip is the real deal; some researchers dismissed those, too.

Between the media clamor and the general impenetrability of quantum computing, it’s difficult to separate the facts from the posturing, misunderstandings, and wishful thinking. With the background we’ve covered on quantum computing, though, we’re now in a good position to give it a try.

**What’s a D-Wave Worth?**

Ultimately, D-Wave’s offerings raise two main questions: do they provide any advantages over their classical competition, and are they really quantum computers? These questions overlap, but they’re not identical. If D-Wave’s chips do speed up some computations, then whether they’re quantum may not matter practically. On the other hand, if they really are quantum computers, then they represent a significant advance, regardless of whether they’re ready for prime time.

Within the past few months, one research paper has come out addressing each question. The papers’ answers, according to many press articles, were “yes” and “yes”: D-Wave’s computer is demonstrably an AQC, and is sometimes thousands of times faster than standard computers. That sounds like quite the win for D-Wave! Some digging, however, reveals a more nuanced story.

Let’s first tackle the question of speedup. In May, Amherst College professor Catherine McGeoch published the first hard data on the speed of a D-Wave chip. Her team compared the D-Wave Two against several standard software packages on problems of the type the D-Wave attempts to solve. (The chip is not a fully general AQC – it is designed to solve only problems like Sudoku puzzles, where you need to find the best solution for a set of constraints.) The D-Wave did indeed find correct solutions to some problems 3600 times faster than the best of the software packages. This was widely cited as evidence that D-Wave had a system that was, in the words of Rose, “better at something than any other option available.”

But the story doesn’t end there. A few weeks later, other researchers squeezed more speed out of the off-the-shelf software, pushing it to just 14 times slower than the D-Wave. Some experts, meanwhile, complained that the deck was stacked: the software packages are designed to solve a wider variety of problems than the D-Wave chip. It’s no surprise that they couldn’t compete with a processor tailored to the problems being tested; 14 times slower is pretty impressive under the circumstances! In fact, even before McGeoch’s paper was published, Matthias Troyer, a physics professor at ETH Zürich, had beaten D-Wave with an equally specialized program running on a conventional laptop. Based on the data published so far, then, it seems premature to claim that D-Wave technology currently offers serious speedup. McGeoch herself, in a comment on Aaronson’s blog, wrote, “Our tests were never meant to be used to compare platform speeds, and it is wrong to use our data to support arguments either way.”

Arguably, though, speedup is the less interesting question. Even if a D-Wave chip does beat other technologies, the advantage will only last if it’s also genuinely quantum. Otherwise, D-Wave is basically selling an overpriced desktop computer: *all* classical computers are roughly equivalent, so anything within the reach of a non-quantum D-Wave could eventually be implemented on a desktop.

On to the essential issue, then: has D-Wave actually built an AQC? Critics claimed for years that phenomena simpler than quantum mechanics could explain the D-Wave chips’ behavior. In particular, they alleged that the company had offered no evidence of entanglement in their qubits – a necessary component even for an AQC. That challenge was countered in April by a University of Southern California team led by Sergio Boixo. Boixo’s team compared the behavior of a D-Wave chip against two different simulations: one of the quantum phenomenon D-Wave claims is happening, and the other of the simpler non-quantum possibility. Like the quantum simulation, the D-Wave chip found most problems either extremely difficult or extremely easy. The classical version, meanwhile, had a much more even distribution of difficulty. As even Chief Skeptic Aaronson acknowledges, this is a strong indication that the chip includes quantum entanglement. This makes D-Wave the first to generate entanglement among such a large number of qubits, a significant scientific achievement.

This brings us, of course, to our final question: should everyone be rushing to buy a D-Wave? If you want to solve a tricky problem tomorrow, the evidence published to date suggests that you’re probably better off just buying a laptop and hiring Matthias Troyer. Rumors of D-Wave’s speedup success have, unfortunately, been greatly exaggerated.

But one exaggerated claim does not mean we should dismiss the technology. For one thing, one experiment that failed to demonstrate speedup need not be the final word. More importantly, though, D-Wave genuinely does appear to have built a system of entangled qubits, so the possibility for a computing revolution is there. Perhaps the best way to think of the D-Wave machine is as an experiment, on the part of both the company itself and its customers. The device could still turn out to be a dud, but if the D-Wave team is able to prove its worth – as evidence is slowly accumulating that they will be – the technological benefits could be immeasurable.

Quantum computing is a field with enormous challenges, but also enormous promise. While practically useful quantum computers may still be more aspiration than reality, it will be fascinating to watch as attempts to bridge that gap, both in academia and in industry, continue to unfold.

*The views expressed in this article are those of the author alone, and do not necessarily reflect the views of his company.*

**Comments**

There are more dangling questions than I can possibly ask, so here’s just one. The article stated:

“The D-Wave did indeed find correct solutions to some problems 3600 times faster than the best of the software packages.”

It might be explained that the speed of any software solution is highly dependent not only on the software that determines what calculations will be executed in determining the solution to a problem, but to potentially an even greater extent – what computer hardware executes the computations!

Stating that the D-Wave hardware/software combination was ‘n’ times faster than some (commercial) software package is _utterly_ meaningless. Was the software package run on a typical laptop, a smartphone or a massively parallel supercomputer? That there’s not even any mention of the computing platform used for comparison is terribly disconcerting…

The quantum computer concept is a marketing scheme to get government funds for physicists to play with electronic stuff.

… but will it run “Flight Simulator”?

good article…best i’ve read trying to explain quantum computing to the layman…no small challenge i’m sure…lost me though at “If a qubit is 40% 1, you can add 20% more 1 such that the final probability of getting a 1 is about 3.5%”…even given negative probabilities, we seem to be losing a lot of 1 here…

http://www.nature.com/news/simulation-quantum-leaps-1.11806

“Quantum simulators might even be able to model non-quantum problems, such as protein folding, that still require huge amounts of computing power to decipher. A group at the Canadian company D-Wave Systems in Burnaby and at Harvard University in Cambridge, Massachusetts, recently did just that by mathematically mapping the folding problem onto a quantum system of 128 loops of current spinning on a superconducting chip[4]. Each loop could spin clockwise, anticlockwise or in a superposition of both directions simultaneously. The performance of the system wasn’t great; in one of its protein-folding problems, it found the correct, experimentally determined, answer just 13 out of 10,000 times. Still, says Alán Aspuru-Guzik, a theoretical chemist from Harvard and co-author of the paper, “it’s remarkable to me that it was possible to do it” at all.”

2. http://www.nature.com/srep/2012/120813/srep00571/full/srep00571.html

I did find some further reference to the very briefly mentioned performance test – see http://blogs.nature.com/news/2013/05/quantum-computer-passes-speed-test.html

“The D-Wave Two computer, which has 512 quantum bits, is designed to tackle classification-type problems that are useful in machine learning and image recognition. Essentially it is good at determining the best ways to sort things into different categories, such as X-ray scans that contain an image of a bomb and ones that don’t.

“McGeoch compared a 439-qubit version of D-Wave to a commercial product from IBM designed to solve the same sorts of problems. The IBM product is designed to deliver a confident answer to a given problem after 30 minutes. McGeoch found that D-Wave did just as well at finding the right answers, but in a half-second run time. That’s 3,600 times faster. “It was really amazing,” she says.

“On other sorts of problems, the D-Wave computer was slowed down by having to ‘translate’ the question for their quantum chip, using a conventional front-end computer. In these cases it was about the same speed as conventional computers; overall McGeoch gave it an “above average” grade.”

I presume the unspecified IBM computer is a parallel-processing supercomputer (of the ‘Watson’ ilk). The quoted 3600-fold improvement seems to be associated with the very specific set of classification and optimization problems that the very limited D-Wave computer can address. The IBM system is almost certainly capable of addressing a much broader range of problems than the specialized D-Wave single-function computer.

Also see http://www.nature.com/news/computing-the-quantum-company-1.13212

Very nice article – As a recently retired engineer who designed circuits for about a dozen computer generations, I hope the engineers and scientists at D-Wave experience the thrill of building a machine that ushers in a revolution in computing. On the other hand, for Google’s sake, I hope it’s a dud….

Article author here.

@jtdwyer: The original paper can be found at http://www.cs.amherst.edu/ccm/cf14-mcgeoch.pdf. I didn’t see the full details of the test as being necessary for an article of this scope, but to quote from the paper:

“All software solver tests were carried out on a suite of seven Lenovo ThinkStation S30 0568 workstations, each containing one Intel Xeon E5-2609/2.4GHz Quad-Core processor with 16GB RAM. The operating system was Ubuntu 64-bit 12.04 LTS.”

Your point about the comparison being somewhat meaningless is well-taken, and is made (though perhaps not as explicitly as it could have been) in the next paragraph: “It’s no surprise that [the off-the-shelf packages] couldn’t compete with a processor tailored to the problems being tested.”

Of course, a ‘student’ of pure mathematics would dispute that the bit is the only measure of information.

ZF can be defined with or without constants. Constants are sometimes modeled as sets that are members of themselves (using the anti-foundation axiom, say), with a series of finite sets scattered along an infinite descent chain. If we define a constant as a countable bit string implemented via the obvious definition, we have an alternative set theory that has both finite information … bits … and countable bit-strings. Using Turing machine theory then requires such artifices as a functor that maps finitely definable real-number definitions to the respective constant, and the every-function-has-a-rule concept. We then have a system that may or may not correspond to the real world, the consistency of which is an interesting problem in set theory … probably still an open problem; I haven’t kept up since I did 618-341 Mathematical Logic in 1982, so I don’t know!

Andrew the Mathematical Platonist!

jessed,

Thanks very much for the critical information. I’d incorrectly presumed that the ‘IBM’ computer was a supercomputer. The 7 networked workstations can be acquired for around $20k. I can’t offhand compare their processing capacity to any of their recent supercomputers – see http://en.wikipedia.org/wiki/IBM_Sequoia that exceeds 10 petaflops – but I’d certainly expect that their supercomputer products outperform their workstations in computationally intense processes.

See http://en.wikipedia.org/wiki/D-Wave_Systems and

http://www.extremetech.com/article2/0,2845,2385665,00.asp

“The cost will be approximately $10,000,000.”

Just taking the multimillion dollar cost mentioned at face value, comparing the performance of such a system to a set of workstations costing on the order of $20K is ridiculous.

This discussion is obviously only meaningful for order of magnitude comparisons, but the point is that, if a problem warrants consideration of a multimillion dollar solution, it should be compared with other potential, multimillion dollar, alternative solutions.

Any comparisons between Saturn V rockets and 3600 pop-bottle rockets are invalid and of no value…

With regards to the efficiency of the D-Wave AQC being perhaps only as good as a laptop consider this:

“The machine Google will use at NASA’s Ames Research facility, located near Google headquarters, makes use of the interactions of 512 quantum bits, or qubits, to determine optimization. They plan to upgrade the machine to 2,048 qubits when this becomes available, probably within the next year or two. That machine could be exponentially more powerful.”

SOURCE: http://bits.blogs.nytimes.com/2013/05/16/google-buys-a-quantum-computer/

Moore’s law applies to digital computers but the D-Wave AQC is leaving Moore’s law for dead. So while Google’s D-Wave AQC may be an overpriced laptop computer now, what feats will it be capable of in one to two years’ time? Certainly a lot better than double its existing capacity! A quantum computer with 1,000 qubits could calculate the position of every atom in the universe. So what would this upgraded computer and its further upgrades be capable of? Does anybody have an answer to this question, and can they tell me how 2,048 qubits compares (more precisely than just the word “exponentially”) with 512 qubits?

I can now answer my own question.

This article says:

‘ “We could map the whole Universe — all of the information that has existed since the Big Bang — onto 300 qubits,” Lloyd says.’

http://www.huffingtonpost.com/2013/07/29/quantum-computers-ai-artificial-intelligence-studies_n_3664011.html

Google’s D-Wave quantum computer has 512 qubits and is going to be upgraded to 2048 qubits – so how powerful (potentially) is that?

Perhaps it needs more programming?

This article says:

“… unlike traditional computers whose processing power grows linearly, the quantum machines’ power grow exponentially, blowing Moore’s Law out of the water. Moving from a 128-qubit computer to today’s 512-qubit machine increased processing power by 300,000 times…”

http://gcn.com/Articles/2013/09/10/dwave-quantum-computing.aspx?Page=1

So I suppose the upgrade to Google’s computer in a year or so will make it 300,000 times faster than it is already. And D-Wave wants to double the number of qubits each year.
