
Scott Aaronson Answers Every Ridiculously Big Question I Throw at Him

Quantum-computer whiz riffs on simulated universes, the Singularity, unified theories, P/NP, the mind-body problem, free will, why there’s something rather than nothing, and more.

Scott Aaronson on utopia: “I love when the human race gains new knowledge, in math or history or anything else.  I love when important decisions fall into the hands of people who constantly second-guess themselves and worry that their own ‘tribe’ might be mistaken, who are curious about science and have a sense of the ironic and absurd.  I love when society’s outcasts, like Alan Turing or Michael Burry (who predicted the subprime mortgage crisis), force everyone else to pay attention to them by being inconveniently right.  And whenever I read yet another thinkpiece about the problems with ‘narrow-minded STEM nerds’—how we’re basically narcissistic children, lacking empathy and social skills, etc. etc.—I think to myself, ‘then let everyone else be as narrow and narcissistic as most of the STEM nerds I know; I have no further wish for the human race.’” Photo: Sabine Hossenfelder’s blog, “BackReaction.”

This article was published in Scientific American’s former blog network and reflects the views of the author, not necessarily those of Scientific American.


Scott Aaronson has one of the highest intelligence/pretension ratios I’ve ever encountered. I wasn’t really aware of him before last fall, when I attended a conference at New York University on an ambitious new theory of consciousness, integrated information theory. Most speakers touted IIT or tried to tease out its implications. The striking exception was Aaronson, a boyish (he turns 35 on May 21 but looks younger) computer scientist at MIT (soon leaving for the University of Texas—too bad, MIT!). Although at first he seemed nervous, even jittery, he proceeded to demolish IIT. He focused on a key IIT variable, phi, which denotes the interconnectivity, or synergy, of the parts of a system. The more phi a system has, the more consciousness it has, supposedly. Aaronson argued—or showed, actually—that IIT’s mathematical definition of phi implies that a simple information-storage device, like a compact disc, can be more conscious than a human being. Proponents of IIT, including neuroscientists Giulio Tononi and Christof Koch and physicist Max Tegmark, raised objections to Aaronson’s critique, but he amiably—and devastatingly—rebutted them.

Who is this guy? I wondered. Browsing Aaronson’s blog, “Shtetl-Optimized,” I discovered that he writes not only about quantum computation, his specialty, but also about artificial intelligence, mathematics, cosmology, particle physics, philosophy… Aaronson has things to say about almost everything. Even when he is at his most technical, he expresses himself in a down-to-earth, funny, self-deprecating and above all clear way. He exudes the spunky enthusiasm and curiosity of a 10-year-old kid, a kid who happens to have a firm grasp of mathematics and physics. He thinks I’m wrong about the end of science, and that’s fine with me. Hell, he might be right! [See Addendum.]

I won’t say more about him here, because I don’t want to embarrass him—or myself—more than I already have, and because he reveals so much of himself in what follows. Warning: this is an extra-long Q&A, but if you read it, I predict, you too will become an Aaronson fan. –John Horgan

1. Have you become what you wanted to be when you were a kid?

Come on, that’s too high a bar!  When I was a kid, I wanted to be the founder and ruler of a rationalist space colony, who also wrote video games and invented the first human-level AI and led a children’s liberation movement and discovered the mathematical laws underlying society.




On the other hand, as far as childhood dreams go, I have no right to complain.  I have a wonderful wife and three-year-old daughter.  I get paid to work on engrossing math problems and mentor students and write about topics that interest me, to do all the things I’d want to do even if I weren’t getting paid.  It could be worse.

2. Why do you call your blog “Shtetl-Optimized”?

I get that a lot.  It’s one of those things, like a joke, that dies a little when you have to explain it—but when I started my blog in 2005, it was about my limitations as a human being, and my struggle to carve out a niche in the world despite those limitations.  It also gestured toward the irony of someone whose sensibility and humor and points of reference are as ancient as mine are—I mean, I already felt like a senile, crotchety old man when I was 16—but who also studies a kind of computer that’s so modern it doesn’t even exist yet.

Shtetls were Jewish villages in pre-Holocaust Eastern Europe.  They’re where all my ancestors came from—some actually from the same place (Vitebsk) as Marc Chagall, who painted the fiddler on the roof.  I watched Fiddler many times as a kid, both the movie and the play.  And every time, there was a jolt of recognition, like: “So that’s the world I was designed to inhabit.  All the aspects of my personality that mark me out as weird today, the obsessive reading and the literal-mindedness and even the rocking back and forth—I probably have them because back then they would’ve made me a better Talmud scholar, or something.”  So as I saw it, the defining question of my life was whether I’d be able to leverage these traits from a world that no longer existed, for the totally different world into which I was born.

Of course, there are pockets where the shtetl still does exist; there are orthodox Jews.  As it happens, I went to an orthodox Hebrew day school, where I was exposed to that.  But by the time I was 12, and was reading Bertrand Russell and Richard Dawkins and Carl Sagan and Isaac Asimov and so forth, it was obvious to me that I could never be a believer in any conventional sense, even if I’m happy to use Einsteinian pseudo-religious language, as in “why did God make the world quantum rather than classical?”  So from then on, the thing I yearned for was a community that would be as welcoming of intellectual obsessives as a yeshiva was—but without any unquestioned dogmas or taboos, where absolutely anything could be revised based on evidence, and which was open to new ideas from anyone of any ethnicity.

In my quest for such a community, I could’ve done a lot worse than where I ended up, namely academic computer science departments!  The difference, of course, is that a university department covers only the intellectual aspects of life, whereas my idealized shtetl would be a place that welcomed intellectual oddballs and also helped them deal with birth, death, marriage, and everything else in their lives.

There are two other aspects to “shtetl-optimized.”  The first is my preoccupation, not just with the past, but with one particular slice of the past.  Even though I was born in 1981, for me, the first half of the twentieth century is and always will be “the present,” and whatever this is now is the future!  Granted, there are certain cool things about living in the future, like the Internet, and seedless watermelons.  But as I don’t need to tell you, the early twentieth century is when Einstein and Gödel and Turing and so many others discovered things that could only be discovered once.  It’s when we went from horses and buggies to rockets that left the earth’s atmosphere.  I spent my teenage years devouring dozens of biographies of early-twentieth-century scientists and mathematicians and philosophers, reliving their triumphs as well as the loss of everything they knew in the two world wars.  In some sense, their world was more real to me than the stuff around me.

Which brings me to another aspect of “shtetl-optimized”: the grief over what was destroyed, and over the world’s indifference while it was happening.  The Holocaust has been the central event in my mental life since I was probably seven.  Like, if I see a digital clock that reads “9:43,” the first thought to cross my mind will be: “1943—over two million Jewish men, women, and children already lying dead in pits, but the Allies still could’ve bombed the train tracks to Auschwitz and Sobibor, had they wanted to...”  In discussions about nuclear proliferation, global warming, and so forth, I never make the slightest apologies for being paranoid about humanity’s future, because those members of my extended family who weren’t sufficiently paranoid to flee to the United States were, as far as is known, all murdered.  I never accept anyone’s assurances that “everything will probably work out for the best.”  The question, for me, is never whether to be paranoid, but only which things to be paranoid about!

I’m gratified that many people have described me as warm and friendly and helpful (“surprisingly so,” one can almost hear them add, for such a socially-inept, self-obsessed nerd!).  But there’s a reason for that.  If I meet a new person, and they aren’t weird in the same ways I’m weird, my brain’s first questions tend to be: would this person be happy to rid the earth of me and everyone like me, regarding me as genetically defective?  Is he or she merely temporarily prevented from doing so?  In 1942, would he or she have smiled (as so much of Europe did smile) as I was loaded onto a cattle car?  So then, if the person turns out—as most often they do—to be perfectly nice and decent, I’m so relieved and grateful that it’s like, how can I be anything but friendly and helpful in return?

And speaking of people who regard me as defective: because of a heated discussion around gender politics on my blog last year, some Internet commentators misinterpreted “shtetl-optimized” in the most vicious and spiteful way imaginable.  They said it meant that I must be a misogynist monster, who yearned for a time and place when, if you were a boy who studied Talmud hard enough, society would just grant you a wife, regardless of her feelings about it.  It shouldn’t need to be said that forced marriage is monstrous, and that all decent people should oppose it always and everywhere—not only in the distant past (shtetls or anywhere else), but in those parts of the world where it’s still the norm.

It’s true that, when I was a lonely, depressed young person, I yearned for a culture with clearer rules around courtship—where there was an accepted, socially-sanctioned way to find out whether your romantic interest in another person might be reciprocated, without needing to get drunk first, or master vague protocols that are never explained in words, or speak in euphemisms with CIA-like plausible deniability, and with no guilt or shame or embarrassment attached to the thing.  But it seems to me like such a courtship culture would benefit lots of people, male and female, gay and straight!  So it’s totally unclear to me that I was wrong to want that.  Now, you could argue that the old clarity around courtship needed to go, because it stood in the way of women’s liberation or individual freedom or other values that were ultimately more important.  But you could also argue that these things have nothing to do with each other: that the loss of clarity was just a tragic byproduct of other social changes and could be reversed, consistently with modern values, if enough people wanted it to be.  In any case, I suppose another thing it means to be “shtetl-optimized,” is that I never get tired of arguing that sort of question, even when getting tired of it would be to my benefit.

3. Do you write a lot because your father (according to Wikipedia) was a science writer?

Both my parents were English majors; neither was a scientist.  But my dad did start his career as a popular science writer in the 1970s.  Like you, I suppose, he interviewed famous physicists like Steven Weinberg and John Archibald Wheeler and Arno Penzias, who then became well-known names around the house.  He even wrote for Playboy and Penthouse—for the few who read the articles, I guess!—about topics like the preponderance of matter over antimatter in the visible universe.  He said they paid better than the science magazines.

So, my dad made me aware from an early age that there was a Big Bang, that there was a speed of light that you could approach but not exceed (and what its value was), all sorts of things like that, and he turned me on to popular science and science fiction (especially Isaac Asimov).  My dad also made me yearn to explain all the math and computer science I was learning in plain English: if nothing else, I wanted to be able to explain it to him!  Finally, my dad was my main writing critic, constantly telling me to be more concise (alas, judging from this interview, he should’ve pushed even harder!).  Obviously I can’t do the experiment of replacing him by a “control dad” and rerunning my life to see what happens—but insofar as I can judge, he and my mother were both huge, positive influences on me.

4. Do you see it as your duty to expose scientific bullshit?

On reflection, no—because if I have such a duty, then presumably my colleagues do too, but I wouldn’t want to impose such a duty on my colleagues!  I have brilliant colleagues who choose to spend their time doing creative, original science, rather than refuting every charlatan who comes along: they figure the latter get handled automatically by the marketplace of ideas, or maybe even that acknowledging certain ideas gives them more legitimacy than they deserve.  And that’s a valid choice.

For me, it’s more a matter of my emotional makeup.  I see one of my genius colleagues labor for years on a deep theorem that maybe five or ten people will understand, and that not many more will care about.  Then some know-nothing claims they’ve built an analog computer that can solve NP-complete problems in polynomial time, or whatever, and their claim makes it onto Slashdot and Reddit and Twitter and news websites (but not Scientific American, of course!), where tens of thousands of people see it.  And the juxtaposition just makes my blood boil.  And because I happen to write a blog, people keep sending me emails or leaving comments to ask for my reaction to the news (as if they couldn’t predict it).  And I think: I can do something about this!  And I’m almost complicit if I don’t.

There’s also the issue of comparative advantage.  I tell myself that I spend a lot of time on my blog arguing against BS precisely so that my smarter colleagues don’t have to!  Like, I’m a damn good theoretical computer scientist (and modest too), but I’m not at the absolute pinnacle of the field—so rather than trying to ascend the summit myself, sometimes I can serve the interests of science better by staying on the lower slopes, and just trying to defend the mountain from the forces catapulting dung at it.

5. How did you get interested in quantum computation?

As the author of The End of Science (which I read as a teenager), I’m sure you can appreciate one reason: because quantum computing, in the 1990s, was this profound story at the intersection of computer science, physics, engineering, math, and philosophy that was only just beginning rather than ending.  The field was, and remains, a major source of counterexamples to your thesis about everything fundamental already having been discovered!

But to step back a little: when I discovered BASIC programming as an 11-year-old, it wasn’t like, this is a cool practical skill—even though it had been a dream to create my own video games, and now I could finally do that.  It was more of an intellectual revelation, like finding out where babies came from.  I thought: so this is what it means to understand something.  It means that you know how to express it in these lines of code, how to make a computer do it.  So immediately I started asking myself: could there be other programming languages, or even other kinds of computers entirely, that would let you express things that could never be expressed in BASIC?  And then I learned about the Church-Turing Thesis, which says that no, all sufficiently powerful computers and programming languages are fundamentally equivalent: they can all simulate each other, albeit faster or slower and using more or less memory.  In that sense, in learning the syntax of MS-DOS QBASIC, you’ve learned the rules of the entire universe.

So that became a central part of my worldview.  It told me that, even if I wanted to understand the physical world we inhabit, I didn’t have to bother much with the details of physics!  The Standard Model, general relativity, those are just yet more programming languages, more ways of combining simple mathematical building blocks into complicated emergent behavior.  And the whole point of the Church-Turing Thesis is that once you know one programming language, you basically know them all.

But then, maybe when I was fourteen, I read a popular article about quantum computing, and about Peter Shor’s quantum factoring algorithm, which had just recently been discovered.  And my first reaction was, this sounds like crackpot nonsense.  It’s probably just physicists who don’t understand the enormity of what it is that they’re denying—who don’t get the overarching principle that everything around us, what we call “space” and “matter,” is just a huge, three-dimensional array of 1’s and 0’s being subjected to Boolean logic operations.  That principle clearly overrides the physicists’ grubby approximate theories of “particles” and “fields” and whatnot, and it clearly implies that this business of “factoring numbers in trillions of parallel universes” can’t work, or at least can’t scale to large numbers.

But then I learned basic quantum mechanics!  And I found out that, yes, the discoverers of QM in the 1920s did know the enormity of what they were denying (some of them more than others), but they’d found something else of comparable enormity.  And that accepting quantum mechanics didn’t mean giving up on the computational worldview: it meant upgrading it, making it richer than before.  There was a programming language fundamentally stronger than BASIC, or Pascal, or C—at least with regard to what it let you compute in reasonable amounts of time.  And yet this quantum language had clear rules of its own; there were things that not even it let you do (and one could prove that); it still wasn’t anything-goes.

But the real surprise was that I could learn the rules and start playing with them.  I like to say that, after all the forbidding-sounding verbiage you read in popular books, quantum mechanics is astonishingly simple—once you take the physics out of it!  In fact, QM isn’t even “physics” in the usual sense: it’s more like an operating system that the rest of physics runs on as application software.  It’s a certain generalization of the laws of probability.  It says nothing directly about electrons, photons, or anything like that.  It just talks about lists of complex numbers called amplitudes: how these amplitudes change as a physical system evolves, and how to convert them into the probability of seeing this or that result when you measure the system.  And everything you’ve ever heard about the “weirdness of the quantum world,” is simply different logical consequences of this one change to the rules of probability.  This makes QM, as a subject, possibly more computer-science friendly than any other part of physics.  In fact, even if our universe hadn’t been described by QM, I suspect theoretical computer scientists would’ve eventually needed to invent quantum computing anyway, just for internal mathematical reasons.  Of course, the fact that our universe is quantum does heighten the interest!
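
To make that concrete, here’s a minimal Python sketch (just an illustration, using NumPy): a qubit’s state is a list of two complex amplitudes, evolution is multiplication by a unitary matrix, and the probability of each measurement outcome is the squared absolute value of its amplitude.

```python
import numpy as np

# A qubit state: two complex amplitudes (normalized so the squared
# absolute values sum to 1).
state = np.array([1, 0], dtype=complex)    # the "0" state

# Evolution: multiply by a unitary matrix.  Here, the Hadamard gate.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
state = H @ state                          # amplitudes ~[0.707, 0.707]

# Measurement: the probability of each outcome is |amplitude|^2.
probs = np.abs(state) ** 2
print(probs)                               # [0.5, 0.5]
```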

On the biographical side, I was a teenager in the late 90s, doing a summer internship at Bell Labs in statistical software (one that had nothing to do with quantum computing), when I started studying the main quantum computing algorithms, namely Shor’s and Grover’s algorithms.  (Grover’s algorithm, discovered in 1995, lets you search a list of N items for a desired item in only about N^(1/2) steps.)  My boss, thankfully, was also curious about quantum computing and let me follow my obsessions.  Soon I learned that Lov Grover, the discoverer of Grover’s algorithm, worked in the same building.  So I sought Lov out, told him my crazy ideas for improving Grover’s algorithm that didn’t work—and then for some reason he offered me an internship with him the next summer.

I spent that second internship trying to prove a lower bound on the number of steps a quantum computer would need to “evaluate AND-OR trees” (to take an example, deciding whether a square grid of black and white cells contains an all-black row).  I failed miserably—although by the end, I knew the existing tools for proving that sort of theorem inside and out.  That summer I also met Ashwin Nayak, a visiting student from Berkeley.  Ashwin clued me in to what was happening right then in quantum computing theory, in the research group at Berkeley centered around Umesh Vazirani, who was one of the first computer scientists to study quantum computing.

After the summer ended, Ashwin wrote to me to tell me that Andris Ambainis, another student of Vazirani’s at Berkeley, had solved the AND-OR problem, by inventing a completely new method.  So I got an early draft of the paper from Andris, and I was blown away.  And I thought: I must go to Berkeley for graduate school.  I must learn whatever it is Andris and the others there know, so that someday I could prove theorems like these.  I wrote to Vazirani to say I wanted to work with him, and he never responded, which of course worried me a lot.  Only later did I learn that he’s famous for not answering anyone’s email!

As an undergrad at Cornell, I’d also been extremely interested in artificial intelligence and machine learning—so when I applied to Berkeley for graduate school, it was the AI people there who took an interest in my application and admitted me.  But by then, my heart was in quantum computing.  And after a year at Berkeley, I’d fallen in with Vazirani’s group.

I still had a fear that I’d never do anything original in this field.  But by the fall of my second year, after months of work, I’d succeeded in solving one of Vazirani’s favorite open problems, which was to rule out a fast quantum algorithm for the so-called collision problem.  In that problem, you’re given a long list of numbers, in which every number from 1 up to N appears many times, and you’re just trying to find a single “collision pair”: that is, two numbers in the list that are equal.  The significance is that, if you had a fast enough quantum algorithm to find collision pairs, that would let you break all sorts of cryptographic codes using a quantum computer—not just the special codes based on problems like factoring, which Shor showed how to break.  Conversely, if you want any hope of fashioning the basic building blocks of modern cryptography so that they’ll still be secure in a world with quantum computers, then you need to rule out such a quantum algorithm.

Anyway, it turned out that Andris Ambainis had invented his method—the one that had bowled me over and lured me to Berkeley—specifically to tackle the collision problem!  And Andris’s method had worked for lots of other problems, including the AND-OR problem, but not for the collision problem.  But in an ironic turnabout, I found that an earlier method, called the “polynomial method”—the one I’d tried unsuccessfully for the AND-OR problem—worked for the collision problem.  It worked because of some miraculous algebraic cancellations that I stumbled on after grueling trial and error, and that I still don’t have a good intuitive explanation for.  The result was that any quantum algorithm to find a collision pair, in a list of numbers from 1 up to N, needs at least about N^(1/5) steps.  Shortly afterward, Yaoyun Shi improved that to show that any quantum algorithm needs at least about N^(1/3) steps.  That turns out to be the right answer: there is a quantum algorithm, based on Grover’s algorithm, that finds a collision pair in about N^(1/3) steps.

(By comparison, a classical algorithm needs about N^(1/2) steps.  The reason for that N^(1/2) is related to the famous “birthday paradox”: you only need to gather about 30 people in a room, way fewer than 365, before there’s an excellent chance that at least two of them share a birthday, because what matters is the number of pairs of people.)
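
If you want to check that birthday arithmetic, here’s a quick sketch: the chance that k people all have distinct birthdays is the product of the chances that each successive person avoids everyone before them, and the number of constraints grows like the number of pairs.

```python
# Quick check of the birthday-paradox arithmetic, assuming 365
# equally likely birthdays.

def no_shared_birthday(k, days=365):
    p = 1.0
    for i in range(k):
        p *= (days - i) / days   # person i+1 avoids all i earlier birthdays
    return p

print(1 - no_shared_birthday(23))  # ~0.507: just 23 people suffice
print(1 - no_shared_birthday(30))  # ~0.706: an "excellent chance"
```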

After the collision lower bound, one thing led to another, and I’m still doing quantum computing theory 15 years later.  I dabble in various kinds of classical computer science too, and I’m sometimes tempted to switch fields, maybe go back to AI and machine learning after all.  But quantum computing remains so inconveniently interesting that it keeps pulling me back in!

If it were just about building devices to solve certain problems faster, I’m sure my interest would be more limited.  But by this point, quantum computing theory has broadened to include almost anything at the interface between theoretical computer science and physics, and whatever one field can tell the other.  The modern-day collision of the Schrödinger equation with the Turing machine just keeps throwing up more and more stuff, and I don’t see it getting boring anytime soon.

6. What hype about quantum computers really drives you nuts?

The biggest one is when quantum computers are described as processing an unimaginably vast number of answers in parallel—so that Shor’s famous quantum factoring algorithm, for example, would work simply by trying every possible divisor in a different parallel universe.  As I like to say, if it were that simple, you wouldn’t have needed Shor to discover it!  The truth is that while, yes, quantum mechanics lets you create a superposition over an immense number of “branches,” whenever you measure you see only one random “branch.”  And of course, if you’d just wanted a random sequence of numbers, you could’ve flipped a coin, and saved all the trouble of building the quantum computer!

Thus, the hope for a speed advantage from a quantum computer comes not from randomness, but rather, from the fact that quantum mechanics is based on amplitudes, and amplitudes work differently than probabilities.  In particular, if an event can happen one way with a positive amplitude, and another way with a negative amplitude, those two amplitudes can “interfere destructively” and cancel each other out, so that the event never happens at all.  The goal, in quantum computing, is always to choreograph things so that for each wrong answer, some of the paths leading there have positive amplitudes and others have negative amplitudes, so they cancel each other out, while the paths leading to the right answer reinforce.
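
Here’s a toy numerical illustration of that cancellation, using the standard Hadamard gate: applied twice to a qubit that starts in state 0, it creates two paths to the outcome 1, one with amplitude +1/2 and one with amplitude -1/2, and they cancel exactly.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
state = np.array([1, 0], dtype=complex)    # start in state 0

state = H @ (H @ state)
# Outcome 1 is reached via two paths: 0 -> 0 -> 1 (amplitude +1/2)
# and 0 -> 1 -> 1 (amplitude -1/2).  They interfere destructively.
print(np.abs(state) ** 2)                  # ~[1, 0]: outcome 1 never occurs
```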

It’s only for certain special problems that we know how to do that.  Those problems include a few with spectacular applications to cryptography, like factoring large numbers, as well as the immensely useful problem of simulating quantum mechanics itself.  But as far as we know today, they don’t include all problems that involve trying a huge number of possible solutions.  In particular, it looks likely that quantum computers will provide only limited advantages for the NP-complete problems (the Traveling Salesman Problem and so on), which are usually considered the holy grail of computer science.

It’s true that, if you’re trying to simulate a quantum computer using a classical computer, then as far as anyone knows, your simulation needs to keep track of exponentially many amplitudes.  The trouble is that, unlike the classical simulation, which can read or modify any amplitude at will, a quantum computer is severely restricted in what it can do with its huge list of amplitudes.  So, quantum algorithm design is all about how you can sometimes (but not always!) extract an answer to your problem even in the teeth of those restrictions.

A related misconception is that a thousand quantum bits, or qubits, are somehow equivalent to 2^1000 classical bits, with each additional qubit doubling the number of classical bits.  Here’s the tricky part: if you wanted to describe the state of a thousand qubits, even approximately, you would indeed need something like 2^1000 classical bits.  But you can’t store 2^1000 classical bits in a thousand qubits and then reliably read them out later!  In fact, a fundamental result called Holevo’s Theorem says that the number of classical bits that you can reliably read out, by measuring a thousand qubits, is exactly one thousand: no better than if you’d used a classical memory.  Once again, what’s going on is that there’s this huge list of amplitudes, but quantum mechanics lets you access the list only by making a measurement, which is a destructive event that produces just a single random outcome.
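
To see the asymmetry from the simulator’s side, here’s a sketch (the numbers are arbitrary): describing n qubits classically takes 2^n complex amplitudes, yet measuring hands back only a single n-bit outcome.

```python
import numpy as np

n = 20                                  # already ~a million amplitudes;
amps = np.zeros(2**n, dtype=complex)    # 50 qubits would exhaust any RAM
amps[0] = 1.0                           # start in the all-0's state

# ... quantum gates would act on `amps` here ...

# Measurement: a single random n-bit outcome, drawn with probability
# |amplitude|^2, after which the superposition is gone.  By Holevo's
# Theorem, at most n classical bits can ever be reliably read out.
probs = np.abs(amps) ** 2
outcome = np.random.choice(2**n, p=probs)
print(format(int(outcome), f"0{n}b"))   # just n bits come out
```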

There’s a pattern here.  In case after case, we find that if you wanted to simulate quantum mechanics classically, you’d need some immense power.  And that’s given the hypesters and confuseniks this immense opening to mislead people into imagining that quantum mechanics itself must give you the same immense power.  But that’s a logical fallacy!  It’s like, maybe the only way human technology can simulate bird flight is by using propellers or jet engines.  But even if so, that still wouldn’t imply that birds themselves must use propellers or jet engines.  They don’t need to: they’re birds!

Yet another example concerns quantum entanglement between distant particles.  John Bell famously proved in the 1960s that, if you wanted to simulate entanglement in a classical universe, then you’d need faster-than-light communication.  But, contrary to a misconception that refuses to die even today, that doesn’t mean that quantum entanglement itself lets you communicate faster than light.  It doesn’t!  Our quantum universe scrupulously upholds Einstein’s speed limit, even though a classical simulation of our universe would violate the limit.  Indeed, that’s a central piece of evidence that our universe really is quantum, and is not secretly classical behind the scenes.

This characteristic of quantum mechanics—the way it stakes out an “intermediate zone,” where (for example) n qubits are stronger than n classical bits, but weaker than 2^n classical bits, and where entanglement is stronger than classical correlation, but weaker than classical communication—is so weird and subtle that no science-fiction writer would have had the imagination to invent it.  But to me, that’s what makes quantum information interesting: that this isn’t a resource that fits our pre-existing categories, that we need to approach it as a genuinely new thing.  Most of the hype that drives me nuts comes from rounding this fascinating reality down to the sorts of thing a science-fiction writer would invent, like “parallelism free-for-all!  Just try each answer in a different universe, and pick the best!”

So far, I’ve focused on “hype” surrounding the conceptual basis of quantum computing.  That’s because I feel like, if you can just get people clear on the conceptual stuff, you’ve given them 90% of what they need to think for themselves about any claimed breakthrough in quantum computing that makes it into the news—to know the right questions to ask.

Needless to say, though, quantum computing has also seen plenty of hype of a more conventional nature.  For example: “COMMERCIAL BREAKTHROUGH—Company X has now used a quantum computer to solve real-world Problem Y a hundred million times faster than a classical computer!”  And then even the most cursory digging reveals that, no, sorry, that’s only if you compare the quantum computer to a classical computer running one particular algorithm (which is far from the best known algorithm); in an apples-to-apples comparison, the quantum advantage disappears.  And that in any case, this wasn’t for a real-world instance of the “real-world problem,” but only for an instance tailored to the strengths of this specific piece of quantum hardware.  And that the precise senses in which the hardware is “quantum” in the first place are still being debated.

In such cases, it’s not usually that anybody lied: it’s just that there was a game of “Telephone,” where the original company or research team explained the crucial caveats in Section 4.2 of its paper, but all the caveats had morphed into one ambiguous sentence by the press release, and had disappeared entirely by the time the thing hit the news websites.  This sort of hype, which we’ve now seen more than a decade of, might have had the ironic effect of inuring people to quantum-computing speedup claims—so much that when we do finally get a genuine quantum-computing speedup, possibly in the near future, people will be less excited than they ought to be!

(As an analogy, the Wright Brothers’ 1903 flights at Kitty Hawk garnered almost no news at the time—one reason being that, in the years leading up to them, there had been so many overblown claims about powered flight that newspaper readers had wearied of the subject.)

Anyway, my blog has dissected more examples of the latter kind of hype than is interesting to probably anyone, including me.

7. Have quantum computers been in any way underappreciated?

Sure!  (More generally, we could probably say: there’s nothing so hyped that it doesn’t have underappreciated aspects.)

One beautiful story, which hardly any journalists have written about, is how we’ve often been able to use quantum computing to achieve a better understanding even of classical computing.  For example, there are certain kinds of error-correcting codes that we know not to exist only because, if they did, then there would be even better quantum error-correcting codes—but the latter we know how to rule out.  That’s just one of dozens of examples of how, even before practical quantum computers exist, the theory of quantum computing has become an important part of classical theoretical computer science.

More broadly, I’d say that people underappreciate quantum computing by viewing it purely through the lens of applications.  A quantum computer could be viewed as the most stringent test of quantum mechanics that we’re going to see in our lifetimes.  And there are smart people who believe it can’t be done—which to me, only heightens the interest in trying to do it still further!  If it’s worthwhile to build the LHC or LIGO—wonderful machines that so far, have mostly triumphantly confirmed our existing theories—then it seems at least as worthwhile to build a scalable quantum computer, and thereby prove that our universe really does have this immense computational power beneath the surface.  Sure, there are some cool applications (with perhaps the most important being quantum simulation), but those are just icing!  The case for building QCs would remain strong even if no applications had been found, and even if the applications that have been found turn out not to have great economic importance.  But unfortunately, that reality has had a hard time making it to the press and funding bodies, who often want to shoehorn quantum computing into the “technology” category rather than the “science” category—as if it were just the latest, fastest microchip, rather than something fundamentally new.

8. Could “Big Data” help social science become scientific?

I’m no expert, but my impression is that in many cases it’s already doing so.  So for example, I follow with great interest the work of Jon Kleinberg, one of my former professors from Cornell, who’s learned about the structure of communities by examining the Facebook graph.  Likewise, my friend Erez Lieberman-Aiden, along with Steven Pinker and others, pioneered the use of Google Books to analyze historical trends, by examining the rise and fall in the use of particular words over time.

On the other hand, we should be clear that a lack of data is only one factor that makes the social sciences so hard—harder, I’d say, than the natural sciences!  The bigger factor, it seems to me, is that unlike (say) particle physics, no one ever approaches the social world de novo: we only ever approach it “already knowing so much that ain’t so.”

In social sciences, there’s an absolutely massive bias in favor of publishing results that confirm current educated opinion, or that deviate from the consensus in ways that will be seen as quirky or interesting rather than cold or cruel or politically tone-deaf.  I have almost boundless admiration for the social scientists who are able to break through that and teach us something new—as for example in the work of Judith Rich Harris, which showed how a child’s “non-shared environment” (the peer group and so forth) is much more important than any parenting practices in shaping personality, contrary to both “common sense” and a century of Freudian dogma.  I couldn’t do that myself.

9. Do you ever worry, like some theoretical physicists, that our universe is a simulation created by superintelligent aliens?

Well, there are two cases: either we can communicate with these aliens, or otherwise get evidence for their existence by examining the universe, or else we can’t.

If we can get evidence, then the aliens are basically just the gods of traditional religions, differing only in details like their motivations or how many arms they have.  In that case, the reason to be skeptical of them is the same reason to be skeptical of traditional religions: namely, where’s the evidence?  Why have these gods/aliens, just like the conspirators who set up Lee Harvey Oswald as a patsy, demolished the Twin Towers from the inside, etc. etc., done such a spectacular job of hiding themselves?

The second possibility is that the simulating aliens belong to a higher metaphysical realm, one that’s empirically inaccessible to us even in principle.  In that case, to be honest, I don’t care about them!  Given any theory of the world that we might formulate involving the aliens, we can simplify the theory by cutting the aliens out.  They’re explanatorily irrelevant.

10. Could quantum-computation research help physicists achieve a unified theory?

There are some theoretical physicists who now think so!  Ideas from quantum computing and quantum information have recently entered the study of the black hole information problem—i.e., the question of how information can come out of a black hole, as it needs to for the ultimate laws of physics to be time-reversible.  Related to that, quantum computing ideas have been showing up in the study of the so-called AdS/CFT (anti de Sitter / conformal field theory) correspondence, which relates completely different-looking theories in different numbers of dimensions, and which some people consider the most important thing to have come out of string theory.  I’ve enjoyed being peripherally involved in these developments, as a “computer science mercenary” with little skin in the game, but who’s happy to talk to anyone from any discipline (biologists, economists, string theorists, you name it) who’s stumbled onto interesting theoretical computer science questions!

There are a few reasons why I think quantum computing ideas have been showing up lately in fundamental physics.  Firstly, quantum computing has supplied probably the clearest language ever invented—namely, the language of qubits, quantum circuits, and so on—for talking about quantum mechanics itself.  This is a language that’s already seeped into optics and condensed-matter physics and quantum chemistry and various other things; no surprise to see it in quantum gravity too.  Secondly, one of the most important things we’ve learned about quantum gravity—which emerged from the work of Stephen Hawking and the late Jacob Bekenstein in the 1970s—is that in quantum gravity, unlike in any previous physical theory, the total number of bits (or actually qubits) that can be stored in a bounded region of space is finite rather than infinite.  In fact, a black hole is the densest hard disk allowed by the laws of physics, and it stores a “mere” 10^69 qubits per square meter of its event horizon!  And because of the dark energy (the thing, discovered in 1998, that’s pushing the galaxies apart at an exponential rate), the number of qubits that can be stored in our entire observable universe appears to be at most about 10^122.
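
As a back-of-the-envelope check of that 10^69 figure, take the Bekenstein-Hawking entropy, A/(4ℓ_P²) nats for a horizon of area A, with the Planck length ℓ_P ≈ 1.6×10^-35 meters:

\[
\frac{S}{A \ln 2} \;=\; \frac{1}{4\,\ell_P^2 \ln 2} \;\approx\; \frac{1}{4 \times (1.6\times 10^{-35}\,\mathrm{m})^2 \times 0.693} \;\approx\; 1.4\times 10^{69}\ \text{bits per square meter.}
\]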

So, that immediately suggests a picture of the universe, at the Planck scale of 10^-33 meters or 10^-43 seconds, as this huge but finite collection of qubits being acted upon by quantum logic gates—in other words, as a giant quantum computation.

(Having said that, I confess I’m left cold by the interminable philosophical debates over whether the universe “really is” a computation.  Like, once you’ve signed onto the reductionist program at all, it’s completely obvious and unremarkable that the universe can be regarded as some sort of computation, so the only interesting questions concern which sort!  Quantum or classical?  How many qubits?  Etc.)

Thirdly, and this is the part that’s new in the last few years: some of the conceptual problems of quantum gravity turn out to involve my own field of computational complexity in a surprisingly nontrivial way.  The connection was first made in 2013, in a remarkable paper by Daniel Harlow and Patrick Hayden.  Harlow and Hayden were addressing the so-called “firewall paradox,” which had lit the theoretical physics world on fire (har, har) over the previous year.

The firewall paradox involves a thought experiment where Alice—it’s always Alice—sits outside of a black hole waiting for it to mostly but not completely evaporate, and scooping up all the Hawking radiation it emits as it does so.  For a black hole the mass of our sun, this would take about 10^67 years (we’ll assume Alice has a really long grant).  Then, Alice routes all the photons of Hawking radiation into her quantum computer, where she processes them in such a way as to prove that they did encode information about the infalling matter.  Then, as the final step, Alice jumps into the black hole.  The clincher is that, if you combine all the ideas about black holes that had previously been accepted, you can now make a firm prediction that Alice will encounter an end of spacetime right at the event horizon (in the physicists’ colorful language, she’ll “hit a firewall and burn up”).  But this is totally contrary to the prediction of general relativity, which says that Alice shouldn’t notice anything very special at the event horizon, and should only encounter an end of spacetime at the singularity.

There are various ways out of this that aren’t very satisfying: you could deny that information escapes from black holes. You could say that general relativity is wrong, and what we’d previously called black holes are really just firewalls.  You could argue that what happens inside a black hole isn’t even within the scope of science—since much like life after death, it’s not empirically testable by anyone who “remains on this side.”  Or—and this seems like the “conservative” option!—you could admit that Alice can create a firewall by doing this crazy processing of the Hawking radiation, but insist that, if she doesn’t do the processing, then she’ll pass through the event horizon just like general relativity always said she would.  But if you take this last option, then what Alice perceives as the structure of spacetime—whether she encounters an event horizon or a firewall—will depend on what she programmed her quantum computer to do.

But we haven’t even gotten yet to Harlow and Hayden’s technical contribution.  They asked, supposing Alice wanted to program her quantum computer to create a firewall, how hard of a problem would her quantum computer need to solve?  And they gave strong evidence that the problem would require an amount of time that grows exponentially with the number of qubits in the black hole—meaning, not a “mere” 10^67 years, but 2^(10^67) years!  In other words: they said that if standard conjectures in theoretical computer science are true, then Alice couldn’t have made a dent in the problem before the black hole had already evaporated anyway, and there was nothing to jump into.  So maybe that makes us feel better about the whole thing!

Now, Harlow and Hayden’s evidence that Alice’s computational task was exponentially hard, even for a quantum computer, relied on the quantum lower bound for finding collision pairs that I’d proved in 2002.  Of course, when I’d proved that bound, I had no idea it would have anything to do with black holes, or the computational intractability of mucking up the structure of spacetime, or anything like that!  But once the connection was made, I sort of had no choice but to become interested.  Recently, I’ve strengthened Harlow and Hayden’s result, so that now the hardness of creating a firewall no longer depends on the hardness of finding collision pairs—something that I’d proven was hard in the “generic” or “black-box” case, but which we’re less certain is hard in the case relevant to firewalls.  Now the argument depends only on the existence of “injective one-way functions”: that is, functions that are easy to compute, hard to invert even using a quantum computer, and free of all collision pairs.  And that seems like almost as safe an assumption as NP-complete problems being hard for quantum computers.

More recently, in ongoing joint work with Leonard Susskind—who’s been sort of the godfather of this whole computational complexity / quantum gravity connection—we’ve given evidence that quantum computing theory also shows up in the AdS/CFT correspondence.  Specifically, if you take something geometric that happens in certain spacetimes—say, a wormhole connecting two regions, which just stretches out, getting longer and longer forever—there’s a “dual description” in quantum field theory, involving a quantum state on a bunch of qubits that gets more and more complicated as time goes on.  The way we measure “complicated” here is using what’s called quantum circuit complexity: that is, the minimum number of elementary operations that a quantum computer would need to prepare the state in question, starting (let’s say) from a state of all 0’s.  Susskind and I proved that, assuming certain problems (called the PSPACE-complete problems) are as hard for quantum computers as computer scientists believe they are, it follows that the circuit complexity of the state really does go up and up, in a way that matches the volume of the wormhole.
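
To pin down what “circuit complexity” means, here’s a brute-force toy sketch (real instances are astronomically beyond anything like this): fix a small gate set, then search for the shortest sequence of gates taking the all-0’s state to the target state.

```python
import itertools
import numpy as np

# Toy brute-force illustration of quantum circuit complexity: the
# minimum number of gates (from a fixed gate set) needed to prepare a
# target state starting from the all-0's state of two qubits.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
T = np.diag([1, np.exp(1j * np.pi / 4)])
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])
I2 = np.eye(2)
GATES = [np.kron(H, I2), np.kron(I2, H),
         np.kron(T, I2), np.kron(I2, T), CNOT]

def circuit_complexity(target, max_gates=6, tol=1e-9):
    """Length of the shortest gate sequence mapping the all-0's state
    to `target` (up to global phase), or None within max_gates."""
    start = np.array([1, 0, 0, 0], dtype=complex)
    for k in range(max_gates + 1):
        for seq in itertools.product(GATES, repeat=k):
            state = start
            for gate in seq:
                state = gate @ state
            if abs(np.vdot(target, state)) > 1 - tol:
                return k
    return None

bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
print(circuit_complexity(bell))  # 2: a Hadamard, then a CNOT
```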

So, is this telling us that quantum circuit complexity plays some fundamental role in the laws of physics, analogous to more familiar quantities like length and volume and energy and entropy?  I hesitate to say so, since the “observed correlation” between complexity and volume might be explainable by some third factor.  But at the least, quantum circuit complexity has established itself as a useful tool.

In summary, I predict that ideas from quantum information and computation will be helpful—and possibly even essential—for continued progress on the conceptual puzzles of quantum gravity.  But even if so, one thing I know for sure is that these ideas won’t be sufficient!  Even if quantum computing provides the best language ever devised for talking about quantum mechanics—still, like any other language, it’s only as good as what you do with it, and it’s susceptible to the “garbage in / garbage out” problem.  Also, unlike (say) Stephen Wolfram or Ed Fredkin, I don’t expect any progress to come from junking everything that’s been learned in theoretical physics over the last century, and “starting afresh” with classical bits and cellular automata.  So much intelligence has already been expended on discovering the fundamental laws of nature that, if further definite progress is possible at all, I expect it to “take everything we’ve got”: that is, everything that’s already understood about the Standard Model and general relativity, lessons from strings and AdS/CFT and other quantum gravity proposals, insights from novel parts of mathematics (yes, possibly including theoretical computer science and quantum computation) … and needless to say, some new clues from experiment wouldn’t hurt either.

11. Will science ever explain why there’s something rather than nothing?

By definition, I’d say, a “scientific explanation” means a causal lever: that is, some aspect of reality that you could toggle in order to turn the thing you’re trying to explain on or off.  For example, the earth’s tilt is a good explanation for the seasons, because if you untilted the earth, you’d get no more seasons.  But what lever could you toggle in order for there never to have been anything?  Whatever it was, the lever itself would presumably be “something”!  Even if flipping the lever caused everything (including the lever itself) to blink out of existence, the lever still would have existed, and its previous existence would remain unexplained.

So that leaves only logical or mathematical explanations.  I’ve heard people wax poetic about the possibility of discovering equations of physics so compelling that they “force” there to exist a universe for them to describe, or something like that.  But that’s always struck me as just a category error!  The most beautiful equation is as happy as a clam for none of its solutions to have any physical reality, or any reality that’s consciously experienced by anyone.  (Of course, if nothing existed, then we wouldn’t be here to talk about it—but that observation, while correct, doesn’t really deserve to be dignified with the name “explanation”!)

So I’d say no: because of the very nature of explanations, there can’t be an explanation (scientific or otherwise) for why there’s something rather than nothing.

12. Could quantum-computation research help solve the mind-body problem?

What, so it’s not enough to break most of the world’s cryptography, simulate the universe at the atomic scale, and possibly even give crucial insights about quantum gravity?  You also want us to solve the mind-body problem??

I should confess to extreme skepticism that there can even exist a “solution” to the mind-body problem.  The reason is that, no matter what scientific theory Alice proposed for consciousness, Bob could always come along and say “aha, but you’ve merely given me another causal mechanism; you haven’t explained what truly lights the spark of Mind!”

On the other hand, I can tell you that David Deutsch, who along with Richard Feynman was one of the inventors of quantum computing, became interested in the subject for reasons that were deeply entangled (har, har) with the mind-body problem.  Deutsch was, and remains, a diehard proponent of the Many-Worlds Interpretation of quantum mechanics.  The MWI, as you know, posits that quantum states never “collapse” on being “measured”—that instead, we should just apply quantum mechanics’ equations consistently to the whole universe, in which case, the universe itself would have to be in a quantum superposition state, containing trillions of parallel copies of us living slightly different lives.  And Deutsch asked the question, which I’m sure resonates with you: how could one ever experimentally test the Many-Worlds picture?

Here Deutsch had the following thought: suppose you could do a quantum-mechanical interference experiment on yourself.  That is, rather than sending a photon or a buckyball or whatever through Slit A with some amplitude and through Slit B with some other amplitude, suppose you could do the same thing with your own brain.  And suppose you could then cause the two parallel “branches” of your experience to come back together and interfere.  In that case, it seems you could no longer describe your experience using the traditional Copenhagen interpretation, according to which “the buck stops”—the wave of amplitudes probabilistically collapses to a definite outcome—somewhere between the system you’re measuring and your own consciousness.  For where could you put the “collapse” in this case?  You can’t have Bohr and Heisenberg’s famous divide between “the observer” and “the quantum system” if the observer is the quantum system!

Now, your brain is such a big, hot, wet object, with so many uncontrolled degrees of freedom coupled to the external environment, that even a hyper-advanced civilization of the far future might never be able to do the experiment I just described.  But, OK, what if we could build an artificially-intelligent computer, have everyone agree that the computer was “conscious,” and then put it in a superposition of thinking two different thoughts and measure the interference pattern?  At that point, everyone would have to accept that conscious entities can exist in superposition states, just like the Many-Worlds Interpretation always said!

As you can see, Deutsch wasn't trying to “solve” the mind-body problem, but he was perhaps pointing out a new aspect of it.  For hundreds of years people asked: is your having a mind, a soul, compatible with someone else knowing your complete “code”: for example, the exact state of every subatomic particle in your brain?  Quantum mechanics lets us ask a related question: is your having a mind compatible with someone else being able to manipulate you in superposition, with their seeing the interference between two versions of you that think different thoughts?

Now, after pondering the latter question for a while, we might want to step back and ask some “easier” variants.  For example, could there be, if not a mind, then at least a computer that performed a superposition of several different computations, such that we could then learn something interesting by examining the interference between the branches?  Could such a computer actually be built?  So, that’s sort of a cartoon version of how Deutsch came up with quantum computing.

To eliminate any chance of misunderstanding: my prediction is that yes, useful quantum computers will eventually be built, and their existence will probably have some impact on how quantum mechanics is perceived in our culture, and as a consequence, on how people talk about whether consciousness collapses the state vector and things of that kind.  But on the whole, the mind-body problem will remain just as contentious and seemingly unresolvable as it was in the world with only classical computers and not quantum ones—or as it was in our previous world, the one with no programmable computers at all.

13. Why is the P versus NP Problem important? Is it solvable?

P versus NP is a contender for the most important unsolved problem in math.  For those tuning in from home, P stands for Polynomial Time.  It’s the class of all yes-or-no problems that a digital computer can solve “efficiently”—meaning, using a number of steps that grows at most like the number of bits needed to specify the problem raised to some fixed power.  Some examples are: I give you a map, and I ask whether every town is at most 200 miles from every other.  Or I give you a positive integer, and I ask whether it’s prime.  NP stands for Nondeterministic Polynomial-Time.  It’s the class of yes-or-no problems for which, if the answer is “yes,” there’s a short proof that a computer can efficiently check.  An example of an NP problem is: I give you a positive integer, and I ask whether it has at least five divisors.  No one knows a fast algorithm for the latter problem: indeed, the presumed hardness of this sort of problem (for classical computers, anyway!) is the basis for most modern cryptography.  Still, if the answer is “yes,” you could prove it to someone by just showing them the divisors.
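
To make the contrast concrete, here’s a minimal Python sketch built around the “at least five divisors” example: verifying a claimed certificate takes time polynomial in the number of digits of n, while the naive search below takes up to about √n steps, which is exponential in the number of digits.

```python
def check_divisors(n, divisors):
    """Verify a certificate: five distinct divisors of n.  This check
    runs in time polynomial in the number of digits of n."""
    return len(set(divisors)) >= 5 and all(n % d == 0 for d in divisors)

def find_divisors(n, want=5):
    """Find `want` distinct divisors by trial division: up to ~sqrt(n)
    steps, i.e., exponential in the number of digits of n."""
    divs = set()
    for d in range(1, int(n**0.5) + 1):
        if n % d == 0:
            divs.update({d, n // d})
            if len(divs) >= want:
                return sorted(divs)[:want]
    return None

n = 720720
cert = find_divisors(n)               # search: slow in general
print(cert, check_divisors(n, cert))  # verification: always fast
```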

Clearly P is contained in NP, since if you can solve a problem yourself, you can also be convinced that it’s solvable.  The question is whether NP is contained in P: in other words, if computers can quickly check an answer to something, can they also quickly find an answer?  Most people conjecture that the answer is no—that is, P≠NP—because it seems obvious that there are some puzzles, like (say) a giant Sudoku, where it’s easy to check if someone else solved them, but solving them yourself would require examining an astronomical number of possibilities.  I like to joke that if we were physicists, we would’ve simply declared P≠NP to be a “law of nature,” and given ourselves Nobel Prizes for our “discovery”!  Still, after more than half a century, no one has mathematically proven P≠NP: no one has ruled out that all these NP problems might have a super-fast algorithm that avoids brute-force search and cuts straight to the answer.

Why is the problem important?  The Clay Mathematics Institute chose it as one of the seven great math problems of our time (alongside the Riemann Hypothesis and five others), each of which carries a million-dollar prize—but that’s honestly the least of it.  For one thing, P vs. NP is the only one of the seven Clay problems that has obvious practical implications.  For example, breaking almost any cryptographic code can be phrased as an NP problem.  So if P=NP—and if, moreover, the algorithm that proved it was “practical” (meaning, not n^1000 time or anything silly like that)—then all cryptographic codes that depend on the adversary having limited computing power would be broken.  Unlike with (say) Shor’s factoring algorithm, this wouldn’t apply only to special forms of cryptography that happen to be popular today, and it also wouldn’t require the codebreakers to build a new kind of computer.  It would mean that we’d grossly underestimated the abilities of our existing computers.

Beyond cryptography, a huge fraction of the “hardest” things we try to do with computers—for example, designing a drug that binds to a receptor in the right way, designing an airplane wing that minimizes drag, finding the optimal setting of parameters in a neural network, scheduling a factory’s production line to minimize downtime, etc., etc.—can be phrased as NP problems.  If P=NP (and the algorithm was practical, yadda yadda), we’d have a general-purpose way to solve all such problems quickly and optimally, which wouldn’t require any special insight into individual problem domains.

But even those applications aren’t what personally interest me, as much as the way P vs. NP asks about the nature of mathematical creativity itself. That’s the motivation Kurt Gödel offered in 1956, when he posed P vs. NP for possibly the first time, in a now-famous letter to John von Neumann.  As Gödel pointed out in his letter, if mathematical proofs are written in a sufficiently hairsplitting way (like in Russell and Whitehead’s Principia Mathematica), then it’s easy to write a fast computer program that checks, line-by-line, whether a given proof is valid.  That means there’s also a program to check whether a given statement has a proof that’s at most n symbols long: such a program just needs to try every possible combination of symbols, one after the next (like in Borges’ Library of Babel), and see whether any of them constitutes a valid proof.  What’s not obvious is whether there’s a program to find a proof of length at most n, whenever one exists, using a number of steps that grows only like n or n^2, rather than like 2^n.  That question is essentially P vs. NP.
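Here is a cartoon of that brute-force search in Python—a sketch with a toy stand-in for the proof checker, since the point is the shape of the loop, not the formal system:

```python
# Enumerate every string of symbols up to length n (the Library of Babel),
# and ask a fast line-by-line checker whether any of them is a valid proof.
# The checker is the easy part, as Gödel observed; the catch is the loop,
# which takes roughly 2^n steps, whereas a practical P=NP would mean
# something like n^2 steps suffice.
from itertools import product

def has_proof_of_length_at_most(statement, n, is_valid_proof):
    for length in range(1, n + 1):
        for symbols in product('01', repeat=length):
            if is_valid_proof(''.join(symbols), statement):
                return True
    return False

# Toy stand-in for a real proof checker: here, a "valid proof" just means
# matching one particular string, so the search degenerates to pure guessing.
def toy_checker(candidate, statement):
    return candidate == '10110111'

print(has_proof_of_length_at_most('some statement', 8, toy_checker))  # True
print(has_proof_of_length_at_most('some statement', 6, toy_checker))  # False
```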

So if you found a fast computer program to find short proofs, then yes, that would solve one of the seven million-dollar prize problems.  But it would also solve the other six!  For it would mean that, if the Riemann Hypothesis, the Hodge Conjecture, and so forth had proofs of a reasonable length at all, then you could just program your computer to find those proofs for you.  The way Gödel put it was that, if P=NP in a practical way, then “the mental effort of the mathematician could be completely replaced by machines (apart from the postulation of axioms).”

Indeed, it’s easy to get carried away, and wax too poetic about the P vs. NP problem’s metaphysical enormity—as I’ve sometimes been accused of doing!  So let me be clear: P vs. NP is not asking whether the human mind can solve problems that digital computers can’t, which is the much more familiar question surrounding artificial intelligence.  Even if (as most of us think) P≠NP, that still might not prevent a Singularity or a robot uprising, since the robots wouldn’t need to solve all NP problems in polynomial time: they’d merely need to be smarter than us!  Conversely, if P=NP, that would mean that any kind of creative product your computer could efficiently recognize, it could also efficiently create.  But if you wanted to build an AI Beethoven or an AI Shakespeare, you’d still face the challenge of writing a computer program that could recognize great music or literature when shown them.

So, that’s the importance of P vs. NP.  Is it solvable?  The short answer: right now, there’s no compelling reason to think it isn’t!  But it almost certainly won’t get solved anytime soon.

For P vs. NP to be unsolvable would presumably mean that the truth, whatever it was, was unprovable from the usual axioms of set theory.  Gödel taught us that that’s indeed a possibility for essentially any unsolved math problem, with a few exceptions (like the question of whether White has a forced win in chess, which reduces to a huge but finite calculation).  But OK, it was just as much a possibility that Fermat’s Last Theorem would be unsolvable before Andrew Wiles came along and solved it in 1994, and likewise with the Poincaré Conjecture and pretty much everything else in this business!  The truth is that, since its discovery in 1931, the “Gödelian gremlin” has reared its head only very rarely, and then usually for questions involving transfinite set theory, which P vs. NP is not.

So I’d say it’s like anything else in science: sure, you can’t know for sure that your problem is solvable until you’ve solved it.  But as long as you keep discovering interesting things along the way (as we have been, in this case), it would be silly to give up.

There are two further points.  First, almost no one in theoretical computer science—not counting the cranks, whose missives fill my inbox every week!—spends their time directly trying to prove P≠NP.  Why not?  For the same reason why you wouldn’t embark on a manned mission to another galaxy, if you hadn’t even set foot on Mars yet.  There are vastly “easier” conjectures than P≠NP—for example, focusing on severely restricted types of algorithms—that we already don’t know how to prove, so those are the obvious places to start.  Mathematicians and computer scientists have made progress on those easier conjectures, though the progress has taken decades and has run up against profound barriers—some of which were heroically circumvented, only to hit new barriers, and so on.  On the one hand, this progress is what makes me optimistic that further breakthroughs are possible; on the other, it gives a sense for how far there still is to go.

Which brings me to the second point: even assuming P≠NP, I don’t think there’s any great mystery about why a proof has remained elusive.  I mean, Fermat’s Last Theorem took 350 years from the statement to the proof, while the impossibility of squaring the circle took two millennia.  And here we’ve only had, what, a half-century?  And doesn’t P≠NP itself tell us that even easy-to-recognize solutions can be astronomically hard to find?

More seriously, it was realized in the 1970s that techniques borrowed from mathematical logic—the ones that Gödel and Turing wielded to such great effect in the 1930s—can’t possibly work, by themselves, to resolve P vs. NP.  Then, in the 1980s, there were some spectacular successes, using techniques from combinatorics, to prove limitations on restricted types of algorithms.  Some experts felt that a proof of P≠NP was right around the corner.  But in the 1990s, Alexander Razborov and Steven Rudich discovered something mind-blowing: that the combinatorial techniques from the 1980s, if pushed just slightly further, would start “biting themselves in the rear end,” and would prove NP problems to be easier at the same time they were proving them to be harder!  Since it’s no good to have a proof that also proves the opposite of what it set out to prove, new ideas were again needed to break the impasse.

By the mid-2000s, we had results that evaded both the logic barrier identified in the 1970s, and the combinatorics barrier identified in the 1990s.  But then Avi Wigderson and I demonstrated in 2007 that there’s a third barrier, to which even those new results were subject.  And then in 2011, Ryan Williams achieved the next breakthrough: basically, he separated a class of problems that’s vastly smaller than P from another class that’s vastly larger than NP.  This was important less because of the result itself—which still looks pathetically weak compared to P≠NP—than because his proof circumvented all the known barriers to further progress.

Nowadays there are people trying to attack P-vs.-NP-like questions using some of the heaviest artillery available from algebraic geometry, representation theory, and other parts of mathematics, and we don’t yet know where it’s going to lead, but meanwhile there have been surprises every few years, and previously impossible-looking problems that suddenly get solved, and unexpected connections to other parts of math and to practical cryptography and to algorithm design, and of course quantum computing forcing us to reexamine the entire subject from a different angle, and altogether that’s been more than enough to keep the field thriving.

In summary, I don’t think P vs. NP presents a good example for your “End of Science” thesis!  For one thing, there’s no danger of “ironic science” here: for all the broader issues that it touches on, P vs. NP is still “just a math problem,” meaning that we understand exactly what’s being asked and what would or wouldn’t constitute a solution.  And math is cumulative.  With some problems there’s a gap of two hundred years between one insight and the next; other times the ideas come every hour—but either way, the ocean of mathematical understanding just keeps monotonically rising, and we’ve seen it reach peaks like Fermat’s Last Theorem that had once been synonyms for hopelessness.  I see absolutely no reason why the same ocean can’t someday swallow P vs. NP, provided our civilization lasts long enough.  In fact, whether our civilization will last long enough is by far my biggest uncertainty.

14. Do you believe in the Singularity?

I think that, if civilization lasts long enough, then sure: eventually we might need to worry about the creation of an AI that is to us as we are to garden slugs, and about how to increase the chance that such an AI will be “friendly” to human values (rather than, say, converting the entire observable universe into paperclips, because that’s what it was mistakenly programmed to want).  Also, someday we might be able to transfer our consciousnesses into a computer cloud and live for billions of years in a simulated paradise.  I don’t know anything in the laws of math or physics to rule these things out, which is just another way of saying that for all I know, they’re possible!

I even support a few people spending their lives thinking about these possibilities.  I’m friendly with many of the people who do spend their lives that way; I enjoy talking to them when they pass through town (or when I pass through the Bay Area, where they congregate).  And maybe the work they’re doing on “AI safety” will have unexpected spin-off applications for the world of today—stranger things have happened.

One other thing: if you want me to rush to the Singularity community’s defense, the way to do it is to tell me that they’re a weirdo nerd cult that worships a high-school dropout and his Harry Potter fanfiction, so how could anyone possibly take their ideas seriously?  It’s not just the invalidity of the ad hominem argument that will turn my eyes red—rather, it’s that this particular kind of ad hominem (“these nerds violate our social norms, so we need not consider the truth or falsehood of what they say”) has had such an abysmal track record over the centuries.

Look, I’ve debated Eliezer Yudkowsky repeatedly; he and I have disagreed more often than we’ve agreed (of course, that’s partly a function of not needing to waste time on our many areas of agreement).  But Eliezer is obviously someone you read if you care about big questions!  And not only is it irrelevant to that determination whether he graduated high school, but for all it matters he could wear a Spiderman costume while smearing his arguments in watercolor.

Having said that, my own view is that, if our sorry civilization is to survive long enough for unfriendly AI to become the main concern, there are many other existential dangers that we’ll probably need to handle first.  Like, I dunno, global warming, running out of fresh water, nuclear-armed theocrats, the constant backsliding on this Enlightenment business?  In which case, working directly on AI safety might be like working directly on P vs. NP: why not start with “easier” challenges that are probably prerequisites anyway?

Speaking of which, when I look at the thrilling advances being achieved today in AI, I see all sorts of ethical issues that will need to be dealt with soon—like, how can a deep neural network justify to you why it turned down your loan application?  Should self-driving cars handle crashes using utilitarian or deontological ethics?  But these are all issues where we can try things out, learn from our mistakes, and iterate—arguably, the only way human beings have ever mastered anything.  And that gives these issues a very different character from the Singularity, which (it’s often stressed) we have only one chance to get right.

The trouble, I’d say, is that as a species, we have no idea how to get right things that we have only one chance to get right.  Of course, if we needed to get something right (say) ten years from now, we’d have no choice but to try anyway.  But crucially, my Singularity friends’ estimated timescales for developing a human-level AI have always struck me as … on the aggressive side.  It’s not like I have an alternate timescale, or even a probability distribution over timescales, about which I’d profess more confidence.  It’s just that the uncertainties strike me as so large, right now, that I don’t see how we can profitably use our estimates to guide our actions.  So for example, whatever research I might do on friendly AI, how could I know that it wasn’t actually increasing the probability of an AI cataclysm—for example, by uncovering the secrets of AI too early, or by giving the world a false sense of security?  (It’s analogous to the old question: even if you agree in principle with Pascal about his Wager, how do you know you’re not praying to the wrong god and thereby bringing down hellfire on yourself?)  This isn’t just one-size-fits-all skepticism: rather, it’s specific to problems where I have neither rigorous math nor empirical data to show me what I’m doing wrong and how to improve for next time.

Anyway, these are the reasons why, even though I completely agree that a Singularity is possible, it probably doesn’t crack the top ten of the things that keep me awake at night.

15. Do you believe in free will?

By definition, if you knew the complete state of the universe, you could use it to calculate everything I’d do in the future.  (Or at least, calculate the exact probabilities for everything I could do, which seems equally constraining for me!)  In that sense, everything we do today was determined—or at least, probabilistically determined—by the universe’s state at the Big Bang, and that’s necessarily true regardless of any actual facts about what kind of world we live in.

On the other hand, precisely because this sort of determinism holds in any imaginable universe, my view is that it’s not a determinism that has “fangs,” or that could credibly threaten any notion of free will worth talking about.  To put it bluntly, I don’t care if God knows my future choices, unless God’s knowledge can somehow be made manifest in the physical world, and used to predict my choices!

For me, the interesting questions about free will all concern whether, as a being within the universe, you could know the state of the universe in enough detail to predict what I’d do: whether, for example, you could get all the information about my brain without killing me in the process.  For again, my choices are clearly determined by something physical.  But it follows that “determinism” can’t possibly be the relevant issue, because it’s too trivial: the real questions concern someone else’s ability to know what you’re going to do before you do it.

Thus, suppose it were possible, in some remote future, to upload yourself to Google’s cloud, make an unlimited number of copies of your brain state, run them forward or backward, and use the copies to predict exactly what you’d do at a given time in response to which stimulus (or even just the probability that you’d do one thing versus another).  I’m not talking about using some fMRI scan to guess which button you’re going to push a few seconds in advance, two-thirds of the time: I’m talking about nearly-perfect, science-fiction levels of accuracy.

In that case, it’s hard for me to see what more science could say in favor of “free will not existing”!  And yes, I know there are people who passionately defend the position that, even if a computer in the next room perfectly predicted everything they would do before they did it, “they’d still have free will,” because what does the computer in the other room have to do with anything?  But to me, that just seems like a failure to carry the thought experiment to its logical conclusion.  The issue is that, by purely behavioral standards (let’s say, the standards of the Turing Test), the computer has every bit as much right to be called “you” as the flesh-and-blood version does!  Whatever is the deepest, most intimate thing you’ll ever say—a declaration of love for your spouse, whatever—the computer (by assumption) would say it in just the same way, so that your spouse couldn’t even tell which one they were talking to.

In short, it seems to me that the prediction machine would demolish many people’s carefully-crafted two-state solution, where your choices are “predictable in theory, but not in practice,” so it’s “just as if you have free will” even if you don’t, and we can all go home happy.  In a world with prediction machines, your choices would be predictable in practice, and it wouldn’t seem as if you had free will.  Everything you did could be fully traced to causal antecedents external to you, plus pure randomness—not in some philosophical imagination, but for real, and on a routine basis.  So rather than torture words, why not simply admit that in this world, free will would’ve been unmasked as an illusion?

And conversely: if scanning my brain state, duplicating it like computer software, etc. were somehow shown to be fundamentally impossible, then I don’t know what more science could possibly say in favor of “free will being real”!

A few years ago, I wrote a long essay on these issues called “The Ghost in the Quantum Turing Machine.”  There, I took the position that we simply don’t know yet to what extent you can scan, copy, and predict something like a human brain without destroying its state: it’s an open empirical question.  On the one hand, quantum mechanics’ No-Cloning Theorem says that you can’t make an exact copy of an unknown physical system—and even a microscopic detail that you missed could in principle get chaotically amplified, and could completely change someone’s behavior.  On the other hand, my Singularity friends expect that all the information in a brain that’s relevant to cognition will be stored in macroscopic degrees of freedom—like the strengths and connection patterns of synapses—that we could easily imagine the nanotechnology of the far future scanning and copying to whatever accuracy is needed.  So, I hope progress in science and engineering teaches us more—just like progress in physics, biology, math, and other fields shifted the grounds of other philosophical debates that had once seemed ethereal.
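(For reference, the No-Cloning Theorem mentioned above follows from a short linearity argument; here is the standard sketch, in LaTeX notation.)

```latex
% Standard linearity argument for the No-Cloning Theorem.
% Suppose some fixed unitary U cloned every state: U(|psi>|0>) = |psi>|psi>.
% In particular,
\[
  U\bigl(\lvert 0\rangle\lvert 0\rangle\bigr) = \lvert 0\rangle\lvert 0\rangle,
  \qquad
  U\bigl(\lvert 1\rangle\lvert 0\rangle\bigr) = \lvert 1\rangle\lvert 1\rangle.
\]
% Then for the superposition |+> = (|0> + |1>)/sqrt(2), linearity forces
\[
  U\bigl(\lvert +\rangle\lvert 0\rangle\bigr)
  = \frac{\lvert 00\rangle + \lvert 11\rangle}{\sqrt{2}}
  \;\neq\;
  \lvert +\rangle\lvert +\rangle
  = \frac{\lvert 00\rangle + \lvert 01\rangle + \lvert 10\rangle + \lvert 11\rangle}{2},
\]
% so no single unitary can copy an arbitrary unknown quantum state.
```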

(As I once saw it wonderfully put: after Gödel’s Theorem, all the different camps of mathematical philosophers were still at the table, but at least they all needed to reshuffle their cards!  We can hope that scientific progress will cause a similar card-reshuffling among the various free-will camps.)

In summary, I’d say that you can define “free will” in boring, tautological ways where we either obviously have it or obviously don’t, with no need to leave our armchairs and study the world!  But there’s also an interesting, fruitful way to define “free will”—as an in-principle unpredictability of some of our choices by external agents, going beyond the merely probabilistic—where it’s not known today whether we have that or not, but conceivably the science of the future could tell us.  And that seems like a point worth appreciating.

16. What’s your utopia?

Since I hang out with Singularity people so much, part of me reflexively responds: “utopia” could only mean an infinite number of sentient beings living in simulated paradises of their own choosing, racking up an infinite amount of utility.  If such a being wants challenge and adventure, then challenge and adventure is what it gets; if nonstop sex, then nonstop sex; if a proof of P≠NP, then a proof of P≠NP.  (Or the being could choose all three: it’s utopia, after all!)

Over a shorter time horizon, though, maybe the best I can do is talk about what I love and what I hate.  I love when the human race gains new knowledge, in math or history or anything else.  I love when important decisions fall into the hands of people who constantly second-guess themselves and worry that their own ‘tribe’ might be mistaken, who are curious about science and have a sense of the ironic and absurd.  I love when society’s outcasts, like Alan Turing or Michael Burry (who predicted the subprime mortgage crisis), force everyone else to pay attention to them by being inconveniently right.  And whenever I read yet another thinkpiece about the problems with “narrow-minded STEM nerds”—how we’re basically narcissistic children, lacking empathy and social skills, etc. etc.—I think to myself, “then let everyone else be as narrow and narcissistic as most of the STEM nerds I know; I have no further wish for the human race.”

On the other side, I hate the irreversible loss of anything—whether that means the deaths of individuals, the burning of the Library of Alexandria, genocides, the flooding of coastal cities as the earth warms, or the extinction of species.  I hate when the people in power are ones who just go with their gut, or their faith, or their tribe, or their dialectical materialism, and who don’t even feel self-conscious about the lack of error-correcting machinery in their methods for learning about the world.  I hate when kids with a passion for some topic have that passion beaten out of them in school, and then when they succeed anyway in pursuing the passion, they’re called stuck-up, privileged elitists.  I hate the “macro” version of the same schoolyard phenomenon, which recurs throughout cultures and history: the one where some minority is spat on and despised, manages to succeed anyway at something the world values, and is then despised even more because of its success.

So, until the Singularity arrives, I suppose my vision of utopia is simply more of what I love and less of what I hate!

Addendum: After I posted the Q&A, Aaronson emailed me the following clarification regarding the end of science:

Incidentally, you say that I disagree with you about the end of science, but that’s only partly true.  I actually think you're more right than most scientists are willing to admit about how much of the science presented as “revolutionary” today consists of confirmations, minor tweaks, or applications of theories from the early 20th century or earlier that have remained stable since that time.

Having said that, I also think:

- Math (and its cousin computer science) are infinite, and are about as healthy as one would expect given their infinitude.

- Just in the fields that I know something about, NP-completeness, public-key cryptography, Shor’s algorithm, dark energy, the Hawking-Bekenstein entropy of black holes, and holographic dualities are six examples of fundamental discoveries from the 1970s to the 1990s that seem able to hold their heads high against almost anything discovered earlier (if not quite relativity or evolution).

- If civilization lasts long enough, then there’s absolutely no reason why there couldn’t be further discoveries about the natural world as fundamental as relativity or evolution.  One possible example would be an experimentally-confirmed theory of a discrete structure underlying space and time, which the black-hole entropy gives us some reason to suspect is there.  Another example would be a discovery of extraterrestrial life, and/or a theory that successfully explained how common life is in our universe.  But of course, I have no idea whether we’ll survive long enough for any of these things to happen, just like I don’t know if we’ll survive long enough to prove P≠NP.

Here I’m setting aside the merely personal/emotional aspects, of hoping you’re wrong, and of directing my own energies toward parts of science where your thesis feels more wrong to me than it does elsewhere!

Further Reading:

See my Q&As with Sabine Hossenfelder, Steven Weinberg, George Ellis, Carlo Rovelli, Edward Witten, Garrett Lisi, Paul Steinhardt, Lee Smolin, Eliezer Yudkowsky, Stuart Kauffman, Christof Koch and Rupert Sheldrake.

Meta-post: Horgan Posts on Brain and Mind Science.

Meta-Post: Horgan Posts on Physics, Cosmology, Etc.