
Two More Reasons Why Big Brain Projects Are Premature

This article was published in Scientific American’s former blog network and reflects the views of the author, not necessarily those of Scientific American.


In a recent post I raised doubts about two big brain-mapping projects, one in the U.S. (to which Obama just committed $100 million) and the other in Europe. I suggested that these projects might be premature, given our basic ignorance of how brains make minds. I'd like to provide two addendums to my post, which provoked some blowback, including a rant from Henry Markram, conceiver of Europe's Human Brain Project. (I responded to Markram in a Post-postscript.)

First addendum: Some critics of my criticism pointed out that my arguments against the brain initiatives could be arguments for them. In other words, big, coordinated programs could help neuroscience advance not only by boosting funding but also by encouraging sharing of data and theories, development of common methods and terminology and so on. I asked for a response to this point from a critic of the U.S. brain initiative, Donald Stein, a neuroscientist at Emory University. He replied:

"We won WWII with a big organized (more or less) collaborative project. So, some of them do work. It's really about the concepts and paradigms that underlie this particular project. This notion of mapping the circuitry goes back to the middle of the 19th century, and the localizationist paradigm these folks are applying is basically the same albeit with some better equipment. They completely ignore the multiple levels of organization, signaling and functions that are ever changing---not to mention no mention of the tremendously important role of all the trillions of glial cells also hanging around in the brain with no specificity of connections but with huge effects on brain dynamics. So, it's not about big science, it's about good (or bad) science. As Americans we love to think we can just throw technology at all the world's problems and all will be well. But at its best, the technology should follow the concept(s) and not the other way around. Hope this helps, Don."


Second addendum: My Scientific American colleague Gary Stix just blogged on a new study, in Nature Reviews Neuroscience, that casts doubt on the reliability of published neuroscience findings. The study's seven authors include epidemiologist John Ioannidis, who over the past decade or so has uncovered profound flaws in peer-reviewed reports in biomedicine and other fields. (See Ioannidis's 2011 Scientific American article "An Epidemic of False Claims.") The report by Ioannidis and other researchers (the lead author is Katherine Button of the University of Bristol) claims that many neuroscience studies are statistically underpowered, and hence their results may be false or unreplicable.

The report states that "the average statistical power of studies in the neurosciences is very low. The consequences of this include overestimates of effect size and low reproducibility of results. There are also ethical dimensions to this problem, as unreliable research is inefficient and wasteful. Improving reproducibility in neuroscience is a key priority and requires attention to well-established but often ignored methodological principles."
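The link between low power and inflated effect sizes can be illustrated with a toy simulation. The sketch below is purely illustrative and is not drawn from the Button et al. paper: it assumes a modest true effect, runs many small two-group "studies," and shows that the subset of studies reaching significance reports, on average, an exaggerated effect (the so-called winner's curse).

```python
import random
import statistics

# Illustrative assumptions (not from the paper): a modest true effect,
# a small per-group sample, and a crude z-test at p < 0.05.
random.seed(42)

TRUE_EFFECT = 0.3   # true standardized effect size
N = 15              # small sample per group -> low statistical power
TRIALS = 2000       # number of simulated studies

def one_study(n, effect):
    """Simulate one two-group study; return (observed effect, significant?)."""
    control = [random.gauss(0, 1) for _ in range(n)]
    treated = [random.gauss(effect, 1) for _ in range(n)]
    diff = statistics.mean(treated) - statistics.mean(control)
    # standard error of the difference in means
    se = ((statistics.variance(control) + statistics.variance(treated)) / n) ** 0.5
    return diff, abs(diff / se) > 1.96

significant = [d for d, sig in (one_study(N, TRUE_EFFECT) for _ in range(TRIALS)) if sig]
power = len(significant) / TRIALS
mean_sig_effect = statistics.mean(significant)

print(f"power = {power:.2f}")                      # far below the conventional 0.8
print(f"mean significant effect = {mean_sig_effect:.2f}")  # inflated relative to 0.3
```

With these numbers the simulated power comes out well under 50 percent, and the average effect among the "significant" studies is roughly double the true value, which is the overestimation the report warns about.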

This finding bolsters the argument that the Big Brain Projects--by funneling precious resources toward paradigms supported by flimsy findings--are premature.

Image: https://www.rsc.org/chemistryworld/