In praise of scientific error



My wife has first dibs on our New Yorker each week, so I only just got around to reading Jonah Lehrer’s piece on the scientific method in last week's issue, which has been getting so much attention from my fellow science writers. John Horgan calls it a "bombshell" and Charlie Petit a "must-read."

Lehrer describes how many, or even most, published scientific papers prove to be wrong. In a range of examples from biomedicine and psychology, he tells of a "decline effect." The discovery paper does all the right statistical tests and infers a significant result. Follow-up studies reproduce the result, but with weaker statistical significance. A few rounds later, scientists conclude the discovery was a fluke.

It's certainly a thought-provoking essay, but I'm not sure what to take away from it. As Horgan points out, it has a certain bait-and-switch quality to it. At first, the anecdotes intimate that the decline effect is an objective phenomenon, as though nature is changing its mind; only as the story unfolds does Lehrer attribute the effect to scientists' own biases.


Lehrer finds this "disturbing," and his (or his editors') subhead asks, "Is there something wrong with the scientific method?" Few who are familiar with science would deny that the process has its flaws (on which more later), but the fallibility of published papers is hardly one of them. Almost by definition, a discovery is at the limits of our ability to perceive it, so it is easily confounded with statistical flukes. The only way to tell is to publish the discovery, invite others to replicate it, and let it play out. The difficulties Lehrer describes do not signal a failing of the scientific method, but a triumph: our knowledge is so good that new discoveries are increasingly hard to make, indicating that scientists really are converging on some objective truth.
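
To see how easily a genuine discovery gets confounded with flukes, consider a toy simulation (my own illustration, written in Python; nothing like it appears in Lehrer's piece). Suppose many labs measure the same modest effect, but only the statistically significant measurements get published as discoveries. Those published estimates are inflated by luck, and unselected replications regress back toward the true value, producing an apparent decline even though nature never changed:

```python
import numpy as np

rng = np.random.default_rng(42)

true_effect = 0.2   # a modest, real effect (in standard-deviation units)
n_per_study = 50    # sample size of each study
n_labs = 1000       # many labs run the same experiment

# Each lab's estimate carries sampling noise (standard error ~ 1/sqrt(n)).
se = 1 / np.sqrt(n_per_study)
estimates = rng.normal(true_effect, se, n_labs)

# Only "significant" results (z > 1.96) get published as discoveries.
discoveries = estimates[estimates / se > 1.96]

# Each published discovery is replicated once, with no selection applied.
replications = rng.normal(true_effect, se, discoveries.size)

print(f"true effect:              {true_effect:.2f}")
print(f"mean published discovery: {discoveries.mean():.2f}")
print(f"mean of replications:     {replications.mean():.2f}")
```

Run it and the published average lands well above the true effect, while the replications settle back onto it. No bias or fraud is required; selection on statistical significance produces the decline all by itself.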

As Petit points out, Lehrer doesn't talk much about the physical sciences, apart from alluding to the excitement in the 1980s over a possible fifth force of nature. Measurements of gravity in mines, boreholes, and tall buildings suggested that the gravitational constant, G, was about 1 percent larger over distances of hundreds of meters than in benchtop experiments. Moreover, objects of different composition seemed to respond in different ways to the force of gravity, which they shouldn't. Theorists suggested that an additional force of nature was operating. Over time, the evidence faded. The anomalous borehole measurements did not go away, exactly; they were explained by an uneven distribution of mass within Earth’s interior.

When I read Lehrer’s passing mention of this incident, I couldn't quite tell what he was getting at. He is arguing either that the gravitational anomalies were an example of the decline effect or that physicists are clinging to the law of gravity despite evidence to the contrary. If he intends the latter, whoa. Even when the anomalies were making the news, most theorists saw the law of gravity as so rock-solid—representing the distillation of countless measurements of falling apples and orbiting planets—that any deviations would have to come from some additional force. It seems to me that the process worked exactly as it should. Physicists were alive to the possibility that our current theories might be wrong, looked for anomalies, studied those they found, and ended up confirming our current theories. What Lehrer takes as an example of the "slipperiness of empiricism" is the exact opposite.

If anything, I’m surprised that such incidents are not more common. Physicists would like nothing more than to find new physics. That way lies immortality. Evidently, their eagerness is balanced by caution. In fact, it may well be overbalanced—a fear of being wrong may sometimes strangle good ideas in the cradle.

I just read Kathryn Schulz’s book Being Wrong: Adventures in the Margins of Error, which I would recommend to all my fellow perfectionists. It persuasively argues that error is the flip side of creativity. The fear of making mistakes paralyzes us—we shy away from taking risks and deny errors when we do make them. Science, one might hope, is the one human endeavor that has come to terms with our mortal fallibility. The very word "experiment" connotes a risk; papers are filled with "mays" and "mights"; error bars quantify the potential for wrongness.

And yet scientists still worry about overcaution. Young scientists, especially, can be afraid to ask questions or delve into foundational problems for fear they’ll be thought stupid or loco—with good reason. Granting agencies such as the National Science Foundation and National Institutes of Health are notoriously conservative, and as an article in this week's Economist lays out, academic jobs are hard to come by.

The history of science suggests that mistakes are not to be ashamed of, but to be embraced. Even wrong ideas contribute to progress. Einstein probably erred when he thought quantum mechanics was incomplete, but was the first to appreciate the phenomenon of quantum entanglement. In 2004, physicist Edward Witten published a paper that sought to develop an exotic version of string theory. It didn't make much headway, but opened the door to new ideas about how space and time might emerge from deeper physics. More broadly, string theory might be wrong, but oh boy is it an amazing theory—a rich vein of insights and spinoffs that has yet to exhaust itself. Theories that try to explain the universe without dark matter may well prove wrong, but identify patterns in galactic structure that dark-matter models will need to explain.

I'm sensitized to this issue at the moment because I've taken flak from some particle physicists for publishing Garrett Lisi and Jim Weatherall's article in our December issue. Lisi's ideas for unifying physics are certainly out there; even he admits it. But what persuaded us to publish the article was that, even if those ideas are wrong, they are illuminating—if not for physicists themselves, then at least for aficionados who keep hearing about geometric concepts such as Lie groups and crave an explanation. Whatever theory ultimately unifies physics, it will need to explain the intricate regularities in particle properties that Lisi's article describes.

So how do scientists encourage productive risk-taking? Several conferences I've been to have had brainstorming sessions where scientists were encouraged to spill whatever was on their minds and not worry about being judged. Blogs and online preprints tap into the substratum of hunches, musings, and null results that lies just below the public surface of peer-reviewed papers. Last week I talked with Ijad Madisch, co-founder of ResearchGATE, a science networking website. He says he set up the site to allow scientists to share their unpublished or unpublishable ideas and learn from one another’s mistakes.

Science is not received wisdom, but informed guesswork. It may well be wrong. That's life. Besides, what's the alternative? To substitute our own gut feelings for scientific analysis, flawed though it may be? We should always be willing to question the outcomes of science, but we should be even more willing to question ourselves.

Image credit: iStockphoto