Guest Blog
Commentary invited by editors of Scientific American
Too Hard for Science? Simulating the Human Brain

The views expressed are those of the author and are not necessarily those of Scientific American.





Supercomputers may soon approach the brain’s power, but much is unknown about how it works

In "Too Hard for Science?" I interview scientists about ideas they would love to explore that they don’t think could be investigated. For instance, they might involve machines beyond the realm of possibility, such as particle accelerators as big as the sun, or they might be completely unethical, such as lethal experiments involving people. This feature aims to look at the impossible dreams, the seemingly intractable problems in science. However, the question mark at the end of "Too Hard for Science?" suggests that nothing might be impossible.

The scientist: Luis Bettencourt, a research scientist at Los Alamos National Laboratory and professor at the Santa Fe Institute.

The idea: The brain is the most powerful computer we know of, "and understanding it is one of the ultimate challenges in science," Bettencourt says. "It’s what makes humans special. We want to know what it does and how it works."

The human brain has approximately 100 billion neurons wired together by roughly one quadrillion (one million billion) connections, with each connection, or synapse, typically firing about 10 times per second. Even so, the most advanced computers to date are now almost powerful enough to model it, Bettencourt explains.

For instance, the visual cortex of the human brain is estimated to operate at roughly one petaflop, or one quadrillion floating point operations per second. The most powerful supercomputers ever built now reach petascale performance: the fastest at the moment, the Tianhe-1A system in China, has a peak of 2.57 petaflops, and Blue Waters, expected to come online this year at the University of Illinois, may not only hit a peak of up to 10 petaflops but should sustain at least one petaflop.
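As a sanity check, the figures above can be combined in a quick back-of-envelope calculation. The sketch below (in Python) takes the estimates at face value and makes the crude assumption that one synaptic event costs one floating point operation:

    # Back-of-envelope comparison of the figures quoted above. Assumes,
    # very crudely, one floating point operation per synaptic event.
    SYNAPSES = 1e15               # ~one quadrillion connections
    FIRING_RATE_HZ = 10           # ~10 events per synapse per second

    brain_events_per_sec = SYNAPSES * FIRING_RATE_HZ   # ~1e16 events/s

    TIANHE_1A_PEAK_FLOPS = 2.57e15     # peak performance quoted above
    BLUE_WATERS_SUSTAINED = 1e15       # sustained petaflop expected

    print(f"Brain, naive estimate: {brain_events_per_sec:.1e} events/s")
    print(f"Tianhe-1A at peak:     {TIANHE_1A_PEAK_FLOPS:.2e} flops")
    print(f"Remaining gap at peak: {brain_events_per_sec / TIANHE_1A_PEAK_FLOPS:.1f}x")

On these crude assumptions the remaining gap at peak is only about a factor of four, which is why "almost powerful enough" is a fair summary.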

"Beyond petascale computing, the U.S. Department of Energy already has plans for exascale computing, which is another thousand times faster," Bettencourt says. "We maybe could see that by 2018 or 2020."

At least two groups are currently attempting to simulate the human brain, including one led by Dharmendra Modha at IBM Almaden Research Center and another at Ecole Polytechnique Fédérale de Lausanne led by Henry Markram. "Simulating how the human brain works could shed light on how to address cognitive disorders," Bettencourt says. "Ultimately, this could shed light on how our consciousness works."

The problem: A major challenge scientists face when using supercomputers is decomposing the problem they want to solve so that it maps well onto how the machines are designed. Supercomputers nowadays grow in power by linking many thousands of processors together, with each processor handling one piece of the puzzle. However, not all problems lend themselves to such a strategy, particularly those in which each piece must communicate with many other pieces. This means that simulating a complex network of densely connected nodes, such as the human brain, is very difficult.
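A toy calculation shows why. In the illustrative sketch below (not any group's actual partitioning scheme), synapses are assumed to be wired at random and neurons split evenly across processors; a synapse's two endpoints then land on the same processor with probability 1/P, so nearly every synaptic event becomes a message on the interconnect:

    # Illustrative sketch, assuming random wiring and an even split of
    # neurons across P processors: the expected fraction of synapses
    # whose endpoints sit on different processors is 1 - 1/P.
    def remote_synapse_fraction(num_processors: int) -> float:
        return 1.0 - 1.0 / num_processors

    for p in (2, 100, 10_000):
        print(f"{p:>6} processors: "
              f"{remote_synapse_fraction(p):.2%} of synaptic traffic is remote")

Real brains are not wired uniformly at random, so clever partitioning can do better, but dense long-range connectivity keeps communication, rather than arithmetic, as the bottleneck.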

"Complexity in networks is a very hard problem at the moment, but one we can bypass to some extent through processing power," Bettencourt says. "Sufficiently fast computing can simulate such a complex network. It’s just an energetically less efficient solution. What the brain can do on, say, 20 to 30 watts, a supercomputer working on the petascale needs megawatts to do."

In addition, simple models describing each synapse as firing about 10 times per second fail to capture the full complexity of the real system. "Certainly to describe a biological neuron or synapse, you need to do a lot more computation," Bettencourt says.
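One way to see this: even the leaky integrate-and-fire model, among the simplest textbook neuron models and far cruder than biophysical models such as Hodgkin-Huxley, must update its membrane voltage at every time step, not just the roughly 10 times a second the cell fires. A minimal illustrative sketch:

    import numpy as np

    # Minimal leaky integrate-and-fire neuron, for illustration only.
    # Even this toy performs 1,000 membrane updates per simulated second
    # to produce a few dozen spikes.
    DT = 1e-3        # 1 ms time step (s)
    TAU = 20e-3      # membrane time constant (s)
    V_THRESH = 1.0   # firing threshold (arbitrary units)
    V_RESET = 0.0    # post-spike reset value

    def simulate(input_current, v=0.0):
        """Return spike step indices for a sequence of input currents."""
        spikes = []
        for t, i_in in enumerate(input_current):
            v += (DT / TAU) * (i_in - v)   # leak toward the input drive
            if v >= V_THRESH:              # threshold crossing: spike
                spikes.append(t)
                v = V_RESET
        return spikes

    rng = np.random.default_rng(0)
    drive = rng.uniform(0.0, 2.2, size=1000)   # 1 s of noisy input
    print(f"{len(simulate(drive))} spikes from 1,000 membrane updates")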

Still, the much greater challenge with simulating the human brain lies in how much remains unknown about how it works. "It’s clear we don’t have a model or theory of anything similar to what humans can do," Bettencourt says. "If we ask the best computer algorithms to look at natural objects or images to identify them, they approach a success rate of 70 to 90 percent. That may sound good, but if you’re crossing streets and you only have a 90 percent chance of identifying a car, you won’t live long."
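The arithmetic behind that quip is stark: if each car is spotted independently with probability 0.9, the chance of spotting every car decays exponentially with the number of cars.

    # Why 90% per-car detection fails: the chance of spotting every car
    # shrinks exponentially (independence assumed for illustration).
    P_DETECT = 0.90
    for cars in (10, 50, 100):
        print(f"{cars:>3} cars: {P_DETECT ** cars:.4%} chance of spotting them all")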

The ability to simulate a human brain also raises many of the moral and ethical questions that surround artificial intelligence. "There’s the issue you have of creating artificial life or consciousness, or of creating a simulation of a human brain that is designed to serve you," Bettencourt says. "There’s also the fact that you might conceivably make it go faster and more powerful than our brains, for the usual sci-fi scenario of them taking over."

One might also argue that attempting to simulate consciousness "is to chip away at the mystery of our humanity, and that perhaps is something too hard or too dangerous to do," Bettencourt notes.

The solution? Before one attempts to simulate the human brain, it makes sense to model simpler brains, such as those of insects. Another approach is to simulate parts of the human brain instead of the whole — indeed, Bettencourt and his colleagues are working on simulating the human visual cortex.

To understand vision, one has to go beyond just simulating neurons and synapses, Bettencourt notes. "Vision requires a lot of feedback — the brain attends to specific objects and features such as faces, zeroing others out, moving from one to another, creating higher and higher representations of what scenes are, all mostly subconsciously," he says. "So we have to make sure we have the right theory and model of the mechanisms of vision."
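What "higher and higher representations" means can be sketched in miniature. The toy below is a generic two-stage feed-forward hierarchy, emphatically not the model Bettencourt's group uses; indeed, its purely feed-forward flow omits exactly the feedback he stresses. Each stage filters its input for a local feature and then pools, so later stages summarize larger patches of the scene more abstractly:

    import numpy as np

    # Toy feature hierarchy: filter for a local feature, rectify, and
    # 2x2 max-pool, so each stage covers a larger patch of the scene.
    def stage(image, kernel):
        h, w = image.shape
        kh, kw = kernel.shape
        feat = np.array([[np.sum(image[i:i+kh, j:j+kw] * kernel)
                          for j in range(w - kw + 1)]
                         for i in range(h - kh + 1)])
        ph, pw = feat.shape[0] // 2 * 2, feat.shape[1] // 2 * 2
        pooled = feat[:ph, :pw].reshape(ph // 2, 2, pw // 2, 2).max(axis=(1, 3))
        return np.maximum(pooled, 0)   # simple rectification

    edge = np.array([[1.0, -1.0]])    # crude vertical-edge detector
    scene = np.random.default_rng(1).random((16, 16))
    v1 = stage(scene, edge)           # small, local features
    v2 = stage(v1, edge)              # features of features, coarser grid
    print(scene.shape, "->", v1.shape, "->", v2.shape)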

Also, the brain might take in a trillion pixels of information a day and thousands of times more over a lifetime, "and no one has tried training a computer model of the brain with that much data," he adds. "We’ll try using photos on Flickr and videos on YouTube to give computers longer and longer visual experiences and see how they improve — to give them, in other words, a period of development somewhat like what infants have."
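Taken at face value, those numbers imply a remarkable training set. The quick calculation below assumes an 80-year lifetime and one byte per pixel purely for scale:

    # Scale of the visual data mentioned above, taken at face value.
    # The 80-year lifetime and 1 byte/pixel are assumptions for scale.
    PIXELS_PER_DAY = 1e12
    LIFETIME_DAYS = 365 * 80
    lifetime_pixels = PIXELS_PER_DAY * LIFETIME_DAYS
    print(f"Lifetime visual input: ~{lifetime_pixels:.1e} pixels")
    print(f"At one byte per pixel: ~{lifetime_pixels / 1e15:.0f} petabytes")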

"The more knowledge we have from neuroscience on how the brain works, the more we can create systems-level models, test them on computers, tinker on them and test them again, for a continuous loop of experimentation and progress," Bettencourt says. "That will make a big difference."

Image of Luis Bettencourt: Santa Fe Institute

 

*

If you have a scientist you would like to recommend I question, or you are a scientist with an idea you think might be too hard for science, e-mail me at toohardforscience@gmail.com

Follow Too Hard for Science? on Twitter by keeping track of the #2hard4sci hashtag.

About the Author: Charles Q. Choi is a frequent contributor to Scientific American. His work has also appeared in The New York Times, Science, Nature, Wired, and LiveScience, among others. In his spare time he has traveled to all seven continents. Follow him on Twitter @cqchoi.







Comments

  1. denysYeo 11:01 pm 05/9/2011

    I enjoyed this article. Even though computer capability has really progressed to the point where we could actually consider the possibility of a real-time simulation, we still don't really understand what it is we are trying to simulate. Furthermore, computers, even working in parallel, are fairly homogeneous machines; the brain is heterogeneous in structure. Different parts of the brain carry out different functions and interact with each other in complex ways. This "levels of processing" approach taken by the brain could be the biggest challenge for linear processors to overcome in achieving anything like a "real" simulation of the brain.

  2. fenzhong 1:58 pm 05/10/2011

    Biology will play the leading role in the 21st century.

  3. Raghuvanshi1 12:50 am 05/11/2011

    There are limits to what the brain can do; we are human beings. How can we compare the brain with a supercomputer? Why was that great chess player defeated by a computer? The reason is simple: a computer is a machine, and whatever you feed into it, it will predictably give an accurate result. The human brain has limits, but remember that a computer is a headless machine; it will never invent a new idea or make a new discovery. Machines must remain under our control; they must not become so powerful that they devour us. Scientists must beware of inventing a monster that could devour all of mankind. Einstein's discoveries led to atomic energy, and the world lives in constant fear of it. The great artist and scientist Leonardo da Vinci realized that the development of his military engineering skills, once a source of pride and ambition, was a grotesque error. While he continued to fill his notebooks with diagrams, drawings and speculation, he wrote, "I will not divulge such things because of the evil nature of man." Can scientists learn any lesson from this great thinker? We have made too much progress in science; now enough is enough.

  4. DoctorRichard 4:11 pm 05/11/2011

    Consciousness appears to be picked up by the brain, similar to the way a radio signal is picked up and modified by a radio receiver. The electromagnetic energy is then reproduced as sound and we hear and understand it. It is a mistake to think that the brain is "manufacturing" consciousness as an epiphenomenon. We know that when the brain is injured, consciousness is affected, but this is not proof that it "makes" the awareness, only that it implements it in some as yet unknown way. The brain has a huge number of microtubules that carry bound electrons. This array of bound electrons must create a field-gate for some as yet unknown energy that is then filtered through the brain, and with experience…we have personality. Consciousness can be prior even to a sense of personality, as one can be aware…without defining the awareness in thought.

  5. scilo 7:22 pm 05/11/2011

    I agree with Raghuvanshi. Nuclear, genetic, and other advances are inherently uncontrollable. Computer AI demands absolute control.
    Our military, genetics, and cloning have already jumped ship.
    Just because we can is no reason to do it.
    Still, our spirits tend toward discovery: "If I don't do it, someone else will." Witness how a young mind like Mr. Choi leads the way. He is no da Vinci.
    What are we becoming? Can we even control that?

  6. Wayne Williamson 8:04 pm 05/11/2011

    The last part of the article is where the flaw is…our eyes and our senses may take in trillions of pieces of information a day (I think more)…most are discarded and can only be retrieved by remembering, which is just taking the remaining information and trying to rebuild it…filling in any gaps imperfectly…

  7. mounthell 12:55 am 05/12/2011

    Computer-science people can’t be bothered to first gain insight into how the brain works before presuming to model "it." Aside from a surfeit of hubris and pretense, their implementation problem is that the functional architecture is alien. It’s amusing to hear them make noises about how their cranky new widgets will soon allow them to model their quest.

    When you don't know where you're going, any road will get you there; faster devices will just get you nowhere faster. They should first do something useful, like model various cancer dynamics, and forget this useless brain sim nonsense.

  8. DoctorRichard 1:16 am 05/12/2011

    I think people are missing the point here. In order to create a simulacrum of the human brain, we will have to produce something that is conscious. This is a huge task. Which of these words does not belong with the rest: flower, cloud, dog, rose, stone? A dog has consciousness, while the others do not. Or does it? See the problem? Speed of calculation, even at exascale, will not create consciousness. Consciousness arises when conditions are correct. More correctly, consciousness can express itself as an individual when a brain exists that allows for its expression. The AI people are going ahead, against all objections, philosophical or otherwise, and I applaud their efforts.

  9. deike 12:52 pm 05/12/2011

    If computing speed were the main obstacle to solving the mysteries of the human mind, I might be encouraged by Dr. Bettencourt's optimistic predictions.

    Unfortunately, simple processing speed – whether measured in peta, zetta, or yotta-flops – is not the problem. If it were, we would be able to simulate an interesting set of cognitive functions, but on an expanded time scale. This is not the case. Nothing that is currently happening in the field of AI, including Watson and "his" ilk, is remotely related to what actually happens in the brain.

    When I try to remember the name of my first grade teacher, I do not sort through a catalog of everyone I have ever known. Nor do I construct, in any practical sense, an SQL statement consisting of some convoluted set of nested ANDs, ORs and NOTs.

    I just "know" it.

    I realize claiming "I just ‘know’ it" sounds stunningly naive; as well it should. For naive is exactly what we are when it comes to understanding how a brain works. Further, this naivete is not limited to the human brain.

    Let’s consider a creature with a decidedly less complex set of neurons: the "lowly" Monarch butterfly, whose annual migration may take it across the Atlantic Ocean. Dr. Bettencourt could probably cobble together some computational wizardry that could do a reasonable job of emulating this feat; but could he do it with fewer than a million switches and then scale it down to the size of a grain of sand? Probably not. And don’t forget that that little million-switch marvel also has to handle all of the rest of the butterfly’s "thinking" and doing.

    The fallacy derives, at least in part, from the fact that neurons are not switches: they are not binary. In fact, they do not always respond in the same manner to the same stimulus. Some neurons are excitatory, others are inhibitory. Within these categories there is a staggering array of variations, many of which have yet to be studied. By the same token, neuroscientists are only now beginning to realize that glial cells, which account for almost half of the cells in the human brain, play more than a supporting role in cognition.

    It would be uncharitable to accuse the good doctor of being disingenuous; however, to imply that we are approaching the summit, when we have yet to establish a credible base camp, strikes me as a bit of a stretch.

    I applaud Dr. Bettencourt’s efforts to make computers faster, cheaper and smaller, but let’s not pretend that such progress, no matter how technologically spectacular, adds significantly to our understanding of how brains work.

  10. seanacoy 6:10 pm 05/13/2011

    The brain and the computer are fundamentally different. The computer runs on compartmentalized, discrete, digitized data and functions. The brain is not even an analog machine: it operates on redundancy, inexactness, multiple-path reinforcement, and the filtration or magnification of small events. While work with computers has begun to open windows into digital processes that may provide useful "algorithms" for the brain as a black box, these remain simulations removed by many googol degrees from the brain. Decades ago I thought of throwing electrical circuit components into a non-conductive "bath" and seeing how those that randomly connected handled inputs. While I wasn't able to try it out, hasn't someone been working from this end?

