What the History of Math Can Teach Us about the Future of AI

Doomsayers say it will put us all out of work, but experience suggests otherwise

This article was published in Scientific American’s former blog network and reflects the views of the author, not necessarily those of Scientific American


Whenever an impressive new technology comes along, people rush to imagine the havoc it could wreak on society, and they overreact. Today we see this happening with artificial intelligence (AI). I was at South by Southwest last month, where crowds were buzzing about Elon Musk’s latest hyperbolic claim that AI poses a far greater danger to humanity than nuclear weapons. Some economists have similarly sounded alarms that automation will put nearly half of all jobs in the U.S. at risk by 2030. The drumbeat of doomsaying has people spooked: a Gallup/Northeastern study published in March found that about three out of four Americans are convinced that AI will destroy more jobs than it creates.

My reading of the history of technology and my decades of work on its frontiers make me skeptical of such claims. Major shifts in technology—and AI does have the potential to be that—inevitably take longer than people typically imagine to transform our jobs and lives. So societies have time to apply regulations, cultural pressures and market forces that shape how that transformation happens. We’re making those kinds of adjustments today with social media technology, for example.

The long history of automation in mathematics offers an even more apt parallel to how computerization, in the form of AI and robots, is likely to affect other kinds of work. If you’re worried about AI-induced mass unemployment or worse, think about this: why didn’t digital computers make mathematicians obsolete?



The word “computer” was, for centuries, a job title. From the 1600s onward, human computers did calculations—initially by pen and paper—to create navigational tables, accounting ledgers and the like. By the 1960s, the workers had slide rules and mechanical calculators to help them, but these jobs were still around. NASA relied heavily on flesh-and-blood computers, like Katherine Johnson and her team of African-American women, to do calculations for the early space missions, as recounted in the 2016 feature film Hidden Figures.

Today, a smartwatch can add and subtract numbers billions of times faster than any human being. So you might assume that NASA has no need for human computers in the 21st century.

But you’d be wrong. The programmers, mathematicians and computational physicists working for NASA now far outnumber the human computers employed at the agency in the 1960s. Despite a billion-fold increase in the capability of the machines, human jobs weren’t lost—they multiplied. The reason that happened tells us a lot about intelligence, both human and artificial.

It turns out that human intelligence is not just one trick or technique—it is many. Digital computers excel at one particular kind of math: arithmetic. Adding up a long column of numbers is quite hard for a human, but trivial for a computer. So when spreadsheet programs like Excel came along and allowed any middle-school child to tot up long sums instantly, the most boring and repetitive mathematical jobs vanished.
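The kind of work that once occupied rooms full of human computers is, for a machine, a single instruction. A minimal sketch (the ledger figures here are made up for illustration):

```python
# Summing a long column of numbers: tedious and error-prone for a person,
# trivial for a machine. These ledger entries are hypothetical.
ledger = [1250.75, 342.10, 87.99, 4015.00, 23.45]

total = sum(ledger)  # the machine "tots up" the column instantly
print(f"Total: {total:.2f}")
```

What took a clerk with a slide rule minutes per column, with checking and rechecking, a spreadsheet formula does without error in a fraction of a second.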

But mathematical problems come in many varieties, and many of the most economically important problems are very difficult and time-consuming for even the most advanced computers. To tackle problems like that, you need lots of clever mathematicians and computational scientists who can think up ways to program computers to do those calculations as efficiently as possible.

This situation is a classic example of something that the innovation doomsayers routinely forget: in almost all areas where we have deployed computers, the more capable the computers have become, the wider the range of uses we have found for them. It takes a lot of human effort and jobs to satisfy that rising demand.

A general rule in economics is that a big increase in the supply of a commodity causes its price to fall, because demand stays roughly the same. Yet this hasn't applied to computing power, especially for mathematics. Huge increases in supply have counterintuitively stimulated demand for still more, because each boost in raw computational ability and each clever new software algorithm opens another class of problems to computer solution. But only with human help.

Theorists have proved that some mathematical problems are so complicated that they will always be challenging, or even impossible, for computers to solve. So at least for now, people who can push forward the boundary of computationally hard problems need never fear a lack of work.
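A toy illustration of why raw speed alone can't conquer such problems (this example is mine, not the author's): finding the shortest tour through a set of cities by brute force means checking every possible ordering, and the number of orderings grows factorially, swamping any fixed speedup of the hardware. The coordinates below are made up.

```python
# Brute-force traveling-salesman search: the number of tours to check
# grows as n!, so each added city multiplies the work done.
from itertools import permutations
from math import dist, factorial

cities = [(0, 0), (2, 1), (1, 3), (4, 2), (3, 4)]  # hypothetical coordinates

def tour_length(order):
    # Total distance visiting the cities in this order and returning home.
    return sum(dist(order[i], order[(i + 1) % len(order)])
               for i in range(len(order)))

best = min(permutations(cities), key=tour_length)
print(f"{factorial(len(cities))} orderings checked for just {len(cities)} cities")
```

Five cities mean 120 orderings; twenty cities mean more than 10^18. Clever algorithm design, not faster chips, is what keeps problems like this within reach, and that design work is done by people.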

This tells us something important about AI. Like mathematics, intelligence is not just one simple kind of problem, such as pattern recognition. It’s a huge constellation of tasks of widely differing complexity. So far, the most impressive demonstrations of “intelligent” performance by AI have been programs that play games like chess or Go at superhuman levels. These are tasks that are so difficult for human brains that even the most talented people need years of practice to master them.

Meanwhile, many of the tasks that seem most basic to us humans—like running over rough terrain or interpreting body language—are all but impossible for the machines of today and the foreseeable future. As AI gets more capable, the sphere of jobs that computers can do faster or more accurately than people will expand. But an expanding universe of work will remain for humans, well outside the reach of automation.