Stephen Wolfram seems to see himself as Newton upgraded with programming chops and business savvy, but it’s not hubris if you back it up. As he points out on his website, he published papers on particle physics in his mid-teens, earned a Ph.D. in physics from Caltech when he was 20 and won a MacArthur “genius” grant at 22. In his late 20s he invented and began successfully marketing Mathematica, software for automating calculations. Wolfram contends that Wolfram Language—which underpins Mathematica and Wolfram|Alpha, a knowledge engine he released in 2009—represents a “new paradigm for computation” that will enable humans and machines to “interact at a vastly richer and higher level than ever before.” This vision dovetails with the theme of Wolfram’s 2002 opus A New Kind of Science, which argues that simple computer programs, like those that generate cellular automata, can model the world more effectively than traditional mathematical methods. Physicist Steven Weinberg called the book an interesting “failure,” and other scientists griped that Wolfram had rediscovered old ideas. Critics have also accused Wolfram of hyping his computational products.* Yet Wolfram, when I saw him speak last fall at “Ethics of Artificial Intelligence,” exuded confidence, suggesting how Wolfram Language might transform law and politics. We recently had the following email exchange. —John Horgan

Horgan: Can you summarize, briefly, the theme of A New Kind of Science? Are you satisfied with the book’s reception?

Wolfram: It’s about studying the computational universe of all possible programs and understanding what they can do.  Exact science had been very focused on using what are essentially specific kinds of programs based on mathematical ideas like calculus.  My goal was to dramatically generalize the kinds of programs that can be used as models in science, or as foundations for technology and so on.

The big surprise, I suppose, is that when one just goes out into the computational universe without any constraints, one finds that even incredibly simple programs can do extremely rich and complex things.  And a lot of the book is about understanding the implications of this for science.
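The emblematic example in A New Kind of Science is the Rule 30 cellular automaton, in which each cell's next state depends only on itself and its two neighbors. The following is a minimal Python sketch (my illustration, not Wolfram's code) of how that one-line update rule, started from a single black cell, generates an intricate, seemingly random pattern; the function names `rule30_step` and `run` are my own.

```python
def rule30_step(row):
    """Apply one step of Rule 30 to a row of 0/1 cells (edges stay 0)."""
    n = len(row)
    new = [0] * n
    for i in range(1, n - 1):
        left, center, right = row[i - 1], row[i], row[i + 1]
        # Rule 30 in Boolean form: new cell = left XOR (center OR right)
        new[i] = left ^ (center | right)
    return new

def run(width=31, steps=15):
    """Start from a single black cell and print the evolving pattern."""
    row = [0] * width
    row[width // 2] = 1  # one black cell in the middle
    for _ in range(steps):
        print("".join("#" if c else "." for c in row))
        row = rule30_step(row)

if __name__ == "__main__":
    run()
```

Despite the rule fitting in a single line, the center column of the resulting pattern passes many statistical tests of randomness, which is the kind of surprise the book dwells on.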

I’ve been very happy with the number and diversity of people who I know have read the book.  There’ve been thousands of academic papers written on the basis of it, and there’s an increasing amount of technology that’s based on it.  It’s quite amazing to see how the idea of using programs as models in science has caught on.  Mathematical models dominated for three centuries, and in a very short time, program-based models seem to have become the overwhelming favorite for new work.

When the book came out, there was some fascinating sociology around it.  People in fields where change was “in the air” seemed generally very positive, but a number of people in fields that were then more static seemed to view it as a threatening paradigm shift.  Fifteen years later that shift is well on its way, and the objections originally raised are beginning to seem bizarre.  It’s a pity social media weren’t better developed in 2002; things might have moved a little faster.

Horgan: Can the methods you describe in A New Kind of Science answer the question of why there is something rather than nothing?

Wolfram: Not that I can see so far.

Horgan: Can they solve “the hard problem”? That is, can they explain how matter can become conscious?

Wolfram: One of the core discoveries that I discussed in the book is what I call the Principle of Computational Equivalence—which implies that a very wide range of systems are equivalent in their computational sophistication.  And in particular, it means that brains are no more computationally sophisticated than lots of systems in nature, and even than systems with very simple rules.  It means that “the weather has a mind of its own” isn’t such a primitive thing to say: the fluid dynamics of the weather is just as sophisticated as something like a brain.

There’s lots of detailed history that makes our brains and their memories the way they are.  But there’s no bright line that separates what they’re doing from the “merely computational.” There are many philosophical implications to this.  But there are also practical ones.  And in fact this is what led me to think something like Wolfram|Alpha would be possible.

Horgan: The concept of computation, like information, presupposes the existence of mind. So when you suggest that the universe is a computer, aren’t you guilty of anthropomorphism, or perhaps deism (assuming the mind for whom the computation is performed is God)?

Wolfram: The concept of computation doesn’t in any way presuppose the existence of mind... and it’s an incorrect summary of my work to say that I suggest “the universe is a computer.”

Computation is just about following definite rules.  The concept of computation doesn’t presuppose a “substrate,” any more than talking about mathematical laws for nature presupposes a substrate.  When we say that the orbit of the Earth is determined by a differential equation, we’re just saying that the equation describes what the Earth does; we’re not suggesting that there are little machines inside the Earth solving the equation. 

About the universe: yes, I have been investigating the hypothesis that the universe follows simple rules that can be described by a program.  But this is just intended to be a description of what the universe does; there’s no “mechanism” involved.  Of course, we don’t know if this is a correct description of the universe.  But I consider it the simplest hypothesis, and I hope to either confirm or exclude it one day.

Horgan: What’s the ultimate purpose of the Wolfram Language? Can it fulfill Leibniz’s dream of a language that can help us resolve all questions, moral as well as scientific? Can it provide a means of unambiguous communication between all intelligent entities, whether biological or artificial?

Wolfram: My goal with the Wolfram Language is to have a language in which computations can conveniently be expressed for both humans and machines—and in which we’ve integrated as much knowledge about computation and about the world as possible.  In a way, the Wolfram Language is aimed at finally achieving some of the goals Leibniz had 300 years ago.  We now know—as a result of Gödel’s theorem, computational irreducibility, etc.—that there are limits to the scientific questions that can be resolved.  And as far as moral questions are concerned: well, the Wolfram Language is going in the direction of at least being able to express things like moral principles, but it can’t invent those; they have to come from humans and human society.

Horgan: Are autonomous machines, capable of choosing their own goals, inevitable? Is there anything we humans do that cannot—or should not—be automated?

Wolfram: When we see a rock fall, we could say either that it’s following a law of motion that makes it fall, or that it’s achieving the “goal” of being in a lower-potential-energy state.  When machines—or for that matter, brains—operate, we can describe them either as just following their rules, or as “achieving certain goals.”  And sometimes the rules will be complicated to state, but the goals are simpler, so we’ll emphasize the description in terms of goals.

What is inevitable about future machines is that they’ll operate in ways we can’t immediately foresee.  In fact, that happens all the time already; it’s what bugs in programs are all about.  Will we choose to describe their behavior in terms of goals?  Maybe sometimes.  Not least because it’ll give us a human-like context for understanding what they’re doing.

The main thing we humans do that can’t meaningfully be automated is to decide what we ultimately want to do.

Horgan: What is the most meaningful goal that any intelligence, human or inhuman, can pursue?

Wolfram: The notion of a “meaningful goal” is something that relies on a whole cultural context—so there can’t be a useful abstract answer to this question.

Horgan: Have you ever suspected that God exists, or that we live in a simulation?

Wolfram: If by “God” you just mean something beyond science: well, there’s always going to be something beyond science until we have a complete theory of the universe, and even then, we may well still be asking, “Why this universe, and not another?”

What would it mean for us to “live in a simulation”?  Maybe that down at the Planck scale we’d find a whole civilization that’s setting things up so our universe works the way it does.  Well, the Principle of Computational Equivalence says that the processes that go on at the Planck scale—even if they’re just “physics” ones—are going to be computationally equivalent to lots of other ones, including ones in a “civilization.”  So for basically the same reason that it makes sense to say “the weather has a mind of its own,” it doesn’t make any sense to imagine our universe as a “simulation.”

Horgan: What’s your utopia?

Wolfram: If you mean: what do I personally want to do all day?  Well, I’ve been fortunate that I’ve been able to set up my life to let me spend a large fraction of my time doing what I want to be doing, which usually means creating things and figuring things out.  I like building large, elegant, useful, intellectual and practical structures—which is what I hope I’ve done over a long period of time, for example, with Wolfram Language.

If you’re asking what I see as being the best ultimate outcome for our whole species—well, that’s a much more difficult question, though I’ve certainly thought about it.  Yes, there are things we want now—but how what we want will evolve after we’ve got those things is, I think, almost impossible for us to understand.  Look at what people see as goals today, and think how difficult it would be to explain many of them to someone even a few centuries ago.  Human goals will certainly evolve, and the things people will think are the best possible things to do in the future may well be things we don’t even have words for yet.

Further Reading:

*See critical reviews of A New Kind of Science by Scott Aaronson and Cosma Shalizi.

See Q&As with Steven Weinberg, George Ellis, Carlo Rovelli, Edward Witten, Scott Aaronson, Sabine Hossenfelder, Priyamvada Natarajan, Garrett Lisi, Paul Steinhardt, Lee Smolin, Robin Hanson, Eliezer Yudkowsky, Stuart Kauffman, Christof Koch, Rupert Sheldrake and Sheldon Solomon.

How Would AI Cover an AI Conference?

Can Engineers and Scientists Ever Master "Complexity"?

So Far, Big Data Is Small Potatoes

Is "Social Science" an Oxymoron? Will That Ever Change?