Too Hard for Science? Simulating the Human Brain


Supercomputers may soon approach the brain's power, but much is unknown about how it works

In "Too Hard for Science?" I interview scientists about ideas they would love to explore that they don't think could be investigated. For instance, they might involve machines beyond the realm of possibility, such as particle accelerators as big as the sun, or they might be completely unethical, such as lethal experiments involving people. This feature aims to look at the impossible dreams, the seemingly intractable problems in science. However, the question mark at the end of "Too Hard for Science?" suggests that nothing might be impossible.

The scientist: Luis Bettencourt, a research scientist at Los Alamos National Laboratory and professor at the Santa Fe Institute.



The idea: The brain is the most powerful computer we know of, "and understanding it is one of the ultimate challenges in science," Bettencourt says. "It's what makes humans special. We want to know what it does and how it works."

The human brain has approximately 100 billion neurons wired together by roughly one quadrillion (one million billion) connections, with each connection, or synapse, typically firing about 10 times per second. Nevertheless, the most advanced computers to date are now almost powerful enough to model it, Bettencourt explains.
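Taken at face value, those round numbers give a back-of-envelope sense of the scale involved (a rough extrapolation from the figures above, not an estimate quoted by Bettencourt):

\[
10^{11}\ \text{neurons} \times 10^{4}\ \tfrac{\text{synapses}}{\text{neuron}} \times 10\ \tfrac{\text{events}}{\text{synapse}\cdot\text{s}} \approx 10^{16}\ \tfrac{\text{events}}{\text{s}}
\]

Even at a single floating-point operation per synaptic event, a crude lower bound, that works out to on the order of 10 petaflops, which is why the petascale machines described below sit just at the threshold of brain-scale simulation.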

For instance, the visual cortex of the human brain is estimated to operate at roughly one petaflop, or one quadrillion floating-point operations per second. The most powerful supercomputers in the world are now capable of petascale performance. The fastest at the moment, the Tianhe-1A system in China, reaches a maximum of 2.57 petaflops, and Blue Waters, expected to come online this year at the University of Illinois, should not only have a peak performance of up to 10 petaflops but also sustain at least one petaflop.

"Beyond petascale computing, the U.S. Department of Energy already has plans for exascale computing, which is another thousand times faster," Bettencourt says. "We maybe could see that by 2018 or 2020."

At least two groups are currently attempting to simulate the human brain, including one led by Dharmendra Modha at IBM Almaden Research Center and another at Ecole Polytechnique Fédérale de Lausanne led by Henry Markram. "Simulating how the human brain works could shed light on how to address cognitive disorders," Bettencourt says. "Ultimately, this could shed light on how our consciousness works."

The problem: A major challenge scientists face when using supercomputers is breaking down the problem they want to solve in a way that makes the most of how the machines are designed. Supercomputers nowadays grow in power by linking many thousands of processors together, with each processor handling one piece of the puzzle. Not all problems lend themselves to this strategy, however, particularly those that require each piece of the puzzle to communicate with many other pieces. This makes simulating a complex network of densely connected nodes, such as the human brain, very difficult, as the sketch below illustrates.
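To see why dense connectivity resists this divide-and-conquer strategy, consider a minimal sketch (illustrative code, not from the article; all sizes and names are invented) that counts how many connections of a randomly wired network cross processor boundaries once its nodes are dealt out evenly across p processors:

    import random

    def cross_partition_fraction(n_nodes, n_edges, n_procs, seed=0):
        """Fraction of connections whose two endpoints land on different
        processors when nodes are split evenly into n_procs blocks."""
        rng = random.Random(seed)
        crossing = 0
        for _ in range(n_edges):
            a = rng.randrange(n_nodes)  # one endpoint, chosen at random
            b = rng.randrange(n_nodes)  # the other endpoint, wired at random
            # Block partition: node k lives on processor k * n_procs // n_nodes.
            if a * n_procs // n_nodes != b * n_procs // n_nodes:
                crossing += 1
        return crossing / n_edges

    # With random (non-local) wiring, the crossing fraction approaches
    # (p - 1) / p, so nearly every connection spans two processors.
    for p in (2, 16, 1024):
        print(p, round(cross_partition_fraction(100_000, 1_000_000, p), 3))

Each crossing connection is a message that must be passed between processors on every simulated time step. With random wiring nearly all connections cross boundaries, so the machine spends its time communicating rather than computing; partitioning pays off only when connectivity is mostly local, as in many physics simulations, which the brain's long-range wiring is not.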

"Complexity in networks is a very hard problem at the moment, but one we can bypass to some extent through processing power," Bettencourt says. "Sufficiently fast computing can simulate such a complex network. It's just an energetically less efficient solution. What the brain can do on, say, 20 to 30 watts, a supercomputer working on the petascale needs megawatts to do."

In addition, simple models describing each synapse as firing about 10 times per second fail to capture the full complexity of the real system. "Certainly to describe a biological neuron or synapse, you need to do a lot more computation," Bettencourt says.

Still, the much greater challenge with simulating the human brain lies in how much remains unknown about how it works. "It's clear we don't have a model or theory of anything similar to what humans can do," Bettencourt says. "If we ask the best computer algorithms to look at natural objects or images to identify them, they approach a success rate of 70 to 90 percent. That may sound good, but if you're crossing streets and you only have a 90 percent chance of identifying a car, you won't live long."

The ability to simulate a human brain also raises many of the moral and ethical questions that surround artificial intelligence. "There's the issue you have of creating artificial life or consciousness, or of creating a simulation of a human brain that is designed to serve you," Bettencourt says. "There's also the fact that you might conceivably make it go faster and more powerful than our brains, for the usual sci-fi scenario of them taking over."

One might also argue that attempting to simulate consciousness "is to chip away at the mystery of our humanity, and that perhaps is something too hard or too dangerous to do," Bettencourt notes.

The solution? Before one attempts to simulate the human brain, it makes sense to model simpler brains, such as those of insects. Another approach is to simulate parts of the human brain instead of the whole — indeed, Bettencourt and his colleagues are working on simulating the human visual cortex.

To understand vision, one has to go beyond just simulating neurons and synapses, Bettencourt notes. "Vision requires a lot of feedback — the brain attends to specific objects and features such as faces, zeroing others out, moving from one to another, creating higher and higher representations of what scenes are, all mostly subconsciously," he says. "So we have to make sure we have the right theory and model of the mechanisms of vision."

Also, the brain might take in a trillion pixels of information a day and thousands of times more over a lifetime, "and no one has tried training a computer model of the brain with that much data," he adds. "We'll try using photos on Flickr and videos on YouTube to give computers longer and longer visual experiences and see how they improve — to give them, in other words, a period of development somewhat like what infants have."
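That lifetime figure is consistent with rough arithmetic (assuming, say, an 80-year lifespan; the estimate is mine, not Bettencourt's):

\[
10^{12}\ \tfrac{\text{pixels}}{\text{day}} \times 365\ \tfrac{\text{days}}{\text{year}} \times 80\ \text{years} \approx 3 \times 10^{16}\ \text{pixels}
\]

That is roughly 30,000 times a single day's intake, squarely in the "thousands of times more" range.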

"The more knowledge we have from neuroscience on how the brain works, the more we can create systems-level models, test them on computers, tinker on them and test them again, for a continuous loop of experimentation and progress," Bettencourt says. "That will make a big difference."

Image of Luis Bettencourt: Santa Fe Institute

 

*

If you have a scientist you would like to recommend I question, or if you are a scientist with an idea you think might be too hard for science, e-mail me at toohardforscience@gmail.com.

Follow Too Hard for Science? on Twitter by keeping track of the #2hard4sci hashtag.

About the Author: Charles Q. Choi is a frequent contributor to Scientific American. His work has also appeared in The New York Times, Science, Nature, Wired, and LiveScience, among others. In his spare time he has traveled to all seven continents. Follow him on Twitter @cqchoi.

The views expressed are those of the author and are not necessarily those of Scientific American.
