Would you rather be plausible but dull, or implausible but fascinating? Robin Hanson has made his choice. He is, technically, an economist, with positions at George Mason University and the Future of Humanity Institute at Oxford University, but he is really a big-ideas guy. On his blog overcomingbias.com (to which his fellow AI visionary Eliezer Yudkowsky once contributed), Hanson says, “I am addicted to ‘viewquakes,’ insights which dramatically change my world view.” His latest book, The Age of Em: Work, Love, and Life when Robots Rule the Earth, envisions bizarre consequences of advances in artificial intelligence. The book is getting admiring reviews even from those who find his extrapolations improbable. The Wall Street Journal calls Hanson’s book “not put-downable,” and The Guardian says Hanson’s “eschatological vision [is] worthy of Hieronymus Bosch.” Hanson recently agreed to give a talk at Stevens Institute of Technology on November 16. I suspect my students will love him. Below he answers questions about his book and other topics. –John Horgan

Horgan: You started in physics, ended up in economics. Why?

Hanson: Initially I had no career plans; I just kept studying topics to answer questions I had, and switched when new questions seemed more interesting. But eventually I thought I had things to say, and realized I’d need contacts and credentials to be heard. At the time I had a hobby of designing social institutions, such as contracts for buying medical treatment, and I knew many interesting topics could be framed as economics, so I tried to turn my hobby into a career. I was quite lucky to succeed. 

Horgan: Where do you fall on the left/right, socialist/libertarian spectrum? And please don't say you transcend those categories.

Hanson: Emotionally I lean libertarian, as the romance of community via government never resonated much with me. But I’m not a fanatic. As an intellectual for whom tradition never carried much weight, I once leaned liberal. But as I’ve thought more about the distinction between our forager and farmer ancestors in values and attitudes, I’ve come to see more virtue in a conservatism that abstracts from many particulars of tradition. That is, an abstract forager-like liberalism focuses more on us all talking about how to adapt the universe more to our fixed natures, while an abstract farmer-like conservatism focuses more on individuals or smaller groups trying to change themselves so they can compete better against nature and rivals. As global coordination is very hard, competition will long continue, and those who won’t adapt to compete must decline.

Horgan: What biases do you struggle to overcome?

Hanson: I know abstractly that the world isn’t fair and doesn’t live up to its professed ideals in a great many ways, but I still get upset when that hurts me. (Such as when I contribute to intellectual progress, but that doesn’t count for academic prestige.) Also, it is hard not to take criticism of my ideas as criticism of me. 

Horgan: Are you one of those Bayes-theorem worshippers?

Hanson: Bayesian updating is a fine model of ideal belief change. If you find a consistent trend from which your beliefs deviate, you’ll probably gain accuracy by cutting those deviations. The main time such cutting is harmful is when accurate beliefs are harmful, as when some beliefs are socially desirable. 
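The updating Hanson describes is just Bayes’ rule in action. As a minimal illustration (the diagnostic-test numbers below are invented for the example), here is how a prior belief gets revised by evidence:

```python
# Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)
# Toy numbers, invented for illustration: a test for a rare condition.
prior = 0.01            # P(H): 1% of people have the condition
p_e_given_h = 0.9       # P(E|H): test detects it 90% of the time
p_e_given_not_h = 0.05  # P(E|~H): 5% false-positive rate

# Total probability of a positive test, P(E), by the law of total probability
p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)

# Updated belief after seeing a positive test
posterior = p_e_given_h * prior / p_e
print(round(posterior, 3))  # roughly 0.154
```

Note that even a fairly accurate test only moves the belief from 1 percent to about 15 percent; an ideal Bayesian cuts the deviation between intuition and this arithmetic, not the other way around.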

Horgan: What’s the sound-bite version of The Age of Em? Is it a prediction or a thought experiment? 

Hanson: “Em” is short for “brain emulation.” The idea is to port the software in a specific human brain to new computer hardware. Today, if you have a program running on an old computer that you want available on a new computer, one approach is to watch the old program, guess how it works, and then try to write software on the new computer that works the way you think the old program works. But another approach is to write an emulator on the new computer that makes the new computer look like the old computer to the old software. You can do this without understanding how the old software works.
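The emulation idea can be sketched in a few lines. In this toy example (the instruction set is invented for illustration), we specify only how each individual “old machine” instruction behaves; the emulator can then run any old program correctly without anyone understanding what that program, as a whole, computes:

```python
# A toy emulator: we define how each instruction of an imaginary "old
# machine" behaves, then run an old program opcode-by-opcode. We never
# need to know what the program as a whole does, only what each
# instruction does -- the same logic Hanson applies to brain cells.

def emulate(program):
    stack = []
    for op, arg in program:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()

# An "old program" we treat as a black box; it happens to compute (3 + 4) * 2
old_program = [("PUSH", 3), ("PUSH", 4), ("ADD", None),
               ("PUSH", 2), ("MUL", None)]
print(emulate(old_program))  # 14
```

The analogy to ems: model each brain-cell type’s signal processing faithfully (the instructions), and the mind (the program) runs, whether or not anyone understands it at higher levels.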

To do this for human brains, we will need three techs, none of which is good enough yet, but all of which probably will be within roughly a century. We’ll need cheap computers, brain scans with enough spatial and chemical resolution, and signal-processing models for all the kinds of brain cells in a brain. The “age of em” is the era after ems are cheap enough to displace humans on almost all jobs, and before the economy changes yet again to something new, I know not what.

My book tries to show that we can in fact study the future by just consistently applying standard theories. If we don’t know which disruptive techs will appear when, then we can define many such scenarios and apply theory to each one. (And we should set up betting markets to estimate which techs are how likely when.) I’ve taken one very specific such scenario, and showed just how much detail one can say about it. This scenario may not happen, but if the future is important enough to have one hundred books exploring scenarios, it is worth having a book on each scenario that has at least a one percent chance of happening. The em scenario meets that standard.

Horgan: Given that scientists can’t explain brains, why do you think engineers can “emulate” them?

Hanson: “Explain” isn’t all or nothing. We understand part but not all of a great many organ systems, such as muscle, bone, blood, and skin. We also understand a great deal about many systems that exchange signals with brains, such as eyes, ears, hands, and mouths. We have even created and fielded functional replacements for many of these systems. An “em” is an artificial system that replaces the signal-processing function of a brain. We can create an em by only understanding how each cell type processes signals--we don’t need to understand why that works at higher levels of organization. 

Horgan: Our desires are rooted in biology. Where would the desires—if any—of artificial intelligences come from?

Hanson: When we write programs directly, we can explicitly encode their desires. But ems inherit their desires from the spaghetti code that evolution gave humans. So ems have mostly the same range of desires as humans, even when those desires are no longer functional. 

Horgan: Do you yearn to be an em? 

Hanson: Most ems will be copies descended from a few hundred humans that are most productive in the em world. I'm too old even now to have a chance of starting one of these successful clans. But still, I’d love to see their world, and have a better chance at immortality. 

Horgan: What do you think about all the chatter about “reality” being a simulation?

Hanson: I doubt I’m living in a simulation, because I doubt the future is that interested in simulating us; we spend very little time today doing any sort of simulation of typical farming or forager-era folks, for example. But I’ll note that we mostly like such topics because they stretch our thinking, not because we take them seriously. For example, we’ve had decades of talk about ems: are they possible, when might they arrive, would they be conscious? But I was the first, and mostly the only, person to try to carefully analyze their social consequences. Similarly, we’ve had decades of talk on whether we are living in a simulation: is that possible, what are the chances, what clues might let us know? But I’m the only one who has analyzed the consequences, namely how you should live your life differently given the chance you might be in a simulation. If people took these things seriously, more people would care about the consequences.

Horgan: I’ve given economist Tyler Cowen a hard time for touting the economic benefits of war. Why was I wrong?

Hanson: Pointing out examples of situations where there was clearly too much war doesn’t settle the issue, nor does pointing out the clear and often big costs of war. Tyler points to plausible compensating benefits, which suggest there might often be too little war. His claim is far from obvious, but it is also far from crazy. On the whole, I believe him. 

Horgan: Are you worried about AI being militarized?

Hanson: I worry about war, and the fact that the possibility of new war tech forces all sides to spend to develop new weapons, just to keep up with rivals. But to worry about any particular tech, I need to believe that this tech, when adopted, tends to make wars more frequent or harmful. I just don’t see that for AI yet. 

Horgan: My colleagues Lee Vinsel and Andrew Russell, who do science and technology studies, argue that innovation is overrated and maintenance of stuff we have underrated. Comment?

Hanson: It is certainly possible in some particular time and place for maintenance to be underrated and innovation overrated. I’d love to have betting markets on such questions, to leave them to those who know more on them than I. But still, the more that the future matters, the more that innovation matters relative to maintenance. Innovation accumulates over a long run, and pretty much all growth we’ve ever seen has come from innovation. In contrast, the benefits of maintenance decay more quickly with time. However, I will say that invention is very much overrated relative to all the other contributors to innovation, such as diffusion.

Horgan: What’s your utopia? 

Hanson: My personal utopia would be an intellectual world where we actually lived up to most of the intellectual ideals we espouse. Where work is judged mainly on the long-term benefit it gives the world, and arguments are accepted no matter how unpalatable their conclusions, or whose ox is gored. I actually think we know a lot about how to construct such a utopia if we wanted to; see my work on futarchy and idea futures. The main problem seems to be that most of us don’t actually want my “utopia.”

Further Reading:

See my Q&As with Eliezer Yudkowsky, Scott Aaronson, Philip Tetlock, Sabine Hossenfelder, Steven Weinberg, George Ellis, Carlo Rovelli, Edward Witten, Garrett Lisi, Paul Steinhardt, Lee Smolin, Stuart Kauffman, Christof Koch, Sheldon Solomon and Rupert Sheldrake.

Is "Social Science" an Oxymoron? Will That Ever Change?

The Singularity and the Neural Code.

Do Big New Brain Projects Make Sense When We Don't Even Know the "Neural Code"?

Two More Reasons Why Big Brain Projects Are Premature.

Artificial brains are imminent… not!

Why You Should Care about Pentagon Funding of Obama's BRAIN Initiative.

Bayes's Theorem: What's the Big Deal?

Are Brains Bayesian?

Can the Singularity Solve the Valentine's Day Dilemma?