
Statistical Flaw Punctuates Brain Research in Elite Journals


This article was published in Scientific American’s former blog network and reflects the views of the author, not necessarily those of Scientific American.


Neuroscientists need a statistics refresher.

That is the message of a new analysis in Nature Neuroscience that shows that more than half of 314 articles on neuroscience in elite journals during an 18-month period failed to take adequate measures to ensure that statistically significant study results were not, in fact, erroneous. Consequently, at least some of the results from papers in journals like Nature, Science, Nature Neuroscience and Cell were likely to be false positives, even after going through the arduous peer-review gauntlet.




The problem of false positives appears to be rooted in the growing sophistication of neuroscientists' tools and of the observations they make. That complexity challenges a fundamental assumption of statistical testing: that each observation, perhaps an electrical signal recorded from a particular neuron, is independent of the next, such as another signal from that same neuron.

In fact, though, it is common in neuroscience experiments—and in studies in other areas of biology—to produce readings that are not independent of one another. Signals from the same neuron are often more similar than signals from different neurons, so statisticians say the data points are clustered, or "nested." To account for that similarity, the authors, from VU University Medical Center and other Dutch institutions, recommend a technique called multilevel analysis, which explicitly models the clustering of data points.
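To make the distinction concrete, here is a minimal sketch in Python using the statsmodels library; the simulated neurons, signal counts and variable names are illustrative assumptions, not data or code from the study. It contrasts a naive regression that pools every signal as if the observations were independent with a multilevel (mixed-effects) model that treats each neuron as its own cluster.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulate two experimental conditions, 20 neurons per condition,
# 30 signals per neuron. Signals from the same neuron share a
# neuron-specific offset, so the observations are clustered ("nested")
# rather than independent.
rows = []
for condition in (0, 1):
    for neuron in range(20):
        offset = rng.normal(0, 1.0)                      # between-neuron variation
        signals = offset + rng.normal(0, 0.5, size=30)   # within-neuron noise
        for s in signals:
            rows.append({"condition": condition,
                         "neuron": f"{condition}-{neuron}",
                         "signal": s})
df = pd.DataFrame(rows)

# Naive analysis: ordinary least squares that treats all 1,200 signals
# as independent observations, ignoring the clustering.
naive = smf.ols("signal ~ condition", data=df).fit()

# Multilevel analysis: a mixed-effects model with a random intercept per
# neuron, which accounts for the fact that signals are nested in neurons.
multilevel = smf.mixedlm("signal ~ condition", data=df, groups=df["neuron"]).fit()

print("naive p-value:     ", naive.pvalues["condition"])
print("multilevel p-value:", multilevel.pvalues["condition"])
```

Because no true difference between conditions is built into this simulation, the naive p-value will often be spuriously small, while the multilevel model's wider standard errors reflect that there are really only 20 independent neurons per condition.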

Of the 314 papers surveyed, published in 2012 and the first half of 2013, 53 percent contained clustered data, and none of them applied the appropriate correction. "We didn't see any of the studies use the correct multi-level analysis," says Sophie van der Sluis, the lead researcher. Seven percent of the studies did take steps to account for clustering, but those methods were much less sensitive than multilevel analysis in detecting actual biological effects. The researchers note that some of the studies surveyed probably report false-positive results, although they couldn't extract enough information to quantify precisely how many. Failing to correct for clustering in the data can push the probability of a false-positive finding as high as 80 percent; a risk of no more than 5 percent is normally deemed acceptable.
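That inflation is easy to reproduce in a small simulation (again in Python; the numbers of neurons and signals and the variance settings are arbitrary choices for illustration only). With no true group difference, a t-test that pools all individual signals rejects far more often than the nominal 5 percent, whereas a test on per-neuron averages, one of the simpler but less sensitive corrections mentioned above, stays near the nominal rate.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def one_experiment(n_neurons=10, n_signals=50, between_sd=1.0, within_sd=0.5):
    """Two groups of neurons with NO true difference between the groups."""
    def group():
        offsets = rng.normal(0, between_sd, size=n_neurons)   # neuron-level noise
        return offsets[:, None] + rng.normal(0, within_sd,
                                             size=(n_neurons, n_signals))
    a, b = group(), group()

    # Naive test: every individual signal treated as independent.
    _, p_naive = stats.ttest_ind(a.ravel(), b.ravel())
    # Cluster-aware test: one summary value (the mean) per neuron.
    _, p_cluster = stats.ttest_ind(a.mean(axis=1), b.mean(axis=1))
    return p_naive < 0.05, p_cluster < 0.05

results = np.array([one_experiment() for _ in range(2000)])
print(f"false-positive rate ignoring clustering: {results[:, 0].mean():.2f}")  # far above 0.05
print(f"false-positive rate on per-neuron means: {results[:, 1].mean():.2f}")  # near 0.05
```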

Jonathan D. Victor, a professor of neuroscience at Weill Cornell Medical College, praised the study, saying it "raises consciousness about the pitfalls specific to a nested design and then counsels you as to how to create a good nested design given limited resources."

Emery N. Brown, a professor of computational neuroscience in MIT's Department of Brain and Cognitive Sciences and the Harvard-MIT Division of Health Sciences and Technology, points to a dire need to bolster the statistical sophistication brought to bear in neuroscience studies. "There's a fundamental flaw in the system and the fundamental flaw is basically that neuroscientists don't know enough statistics to do the right things and there's not enough statisticians working in neuroscience to help that."

The issue of reproducibility of research results has preoccupied the editors of many top journals in recent years. The Nature journals have instituted a checklist to help authors report the methods used in their research, including whether the statistical objectives for a particular study were met. (Scientific American is part of the Nature Publishing Group.) One clear message from studies like that of van der Sluis and her colleagues is that the statistician will take on an increasingly pivotal role as the field moves ahead in deciphering ever denser networks of neural signaling.

Image Source: Zache

Gary Stix, the neuroscience and psychology editor for Scientific American, edits and reports on emerging advances that have propelled brain science to the forefront of the biological sciences. Stix has edited or written cover stories, feature articles and news on diverse topics, ranging from what happens in the brain when a person is immersed in thought to the impact of brain implant technology that alleviates mood disorders like depression. Before taking over the neuroscience beat, Stix, as Scientific American's special projects editor, oversaw the magazine's annual single-topic special issues, conceiving of and producing issues on Einstein, Darwin, climate change and nanotechnology. One special issue he edited on the topic of time in all of its manifestations won a National Magazine Award. Stix is the author with his wife Miriam Lacob of a technology primer called Who Gives a Gigabyte: A Survival Guide to the Technologically Perplexed.
