As the fall semester ends, I’m brooding once again over the contradictions of teaching “critical thinking,” especially as applied to science. Below is an edited version of an essay I wrote for The Chronicle of Higher Education when I was in a similar mood. –John Horgan
Don't always believe what scientists and other authorities tell you! Be skeptical! Think critically! That's what I tell my students, ad nauseam. And some learn the lesson too well.
I want to give my students the benefit of my hard-won knowledge of science's fallibility. Early in my career, I was a conventional science writer, easily impressed by scientists' claims. Fields such as physics, neuroscience, genetics and artificial intelligence seemed to be bearing us toward a future in which bionic superhumans would zoom around the cosmos in warp-drive spaceships. Science was an "endless frontier," as engineer Vannevar Bush, whose 1945 report led to the creation of the National Science Foundation, put it.
Doubt gradually undermined my faith. Scientists and journalists, I realized, often presented the public with an overly optimistic picture of science. By relentlessly touting scientific "advances"—from theories of cosmic creation and the origin of life to the latest treatments for depression and cancer—and by overlooking all the areas in which scientists were spinning their wheels, we made science seem more potent and fast-moving than it really is.
Now, I urge my students to doubt the claims of physicists that they are on the verge of explaining the origin and structure of the cosmos. Some of these optimists favor string and multiverse theories, which cannot be confirmed by any conceivable experiment. This isn't physics any more, I declare in class, it's science fiction with equations!
I give the same treatment to theories of consciousness, which attempt to explain how a three-pound lump of tissue—the brain—generates perceptions, thoughts, memories, emotions and self-awareness. Some enthusiasts assert that scientists will soon reverse-engineer the brain so thoroughly that they will be able to build artificial brains much more powerful than our own.
Balderdash! I tell my classes (or words to that effect). Scientists have proposed countless theories about how the brain absorbs, stores and processes information, but researchers really have no idea how the brain works. And artificial-intelligence advocates have been promising for decades that robots will soon be as smart as HAL or R2-D2. Why should we believe them now?
Maybe, just maybe, I suggest, fields such as particle physics, cosmology and neuroscience are bumping up against insurmountable limits. The big discoveries that can be made have been made. Who says science has to solve every problem?
Lest my students conclude that I'm some solitary crank, I assign them articles by other skeptics, including a dissection of epidemiology and clinical trials by journalist Gary Taubes in The New York Times. He advises readers to doubt dramatic claims about the benefits of some new drug or diet, especially if the claim is new. "Assume that the first report of an association is incorrect or meaningless," Taubes writes, because it probably is. "So be skeptical."
To drive this point home, I assign articles by John Ioannidis, an epidemiologist who has exposed the flimsiness of most peer-reviewed research. In a 2005 study, he concluded that "most published research findings are false." He and his colleagues contend that "the more extreme, spectacular results (the largest treatment effects, the strongest associations, or the most unusually novel and exciting biological stories) may be preferentially published." These sorts of dramatic claims are also more likely to be wrong.
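Ioannidis's conclusion rests on base-rate arithmetic: when most tested hypotheses are false, even well-run studies produce many false positives. The sketch below illustrates the logic with hypothetical numbers (the priors, power, and threshold here are illustrative choices, not figures from his paper):

```python
# Base-rate arithmetic behind the claim that "most published
# research findings are false." Hypothetical inputs: only 1 in 10
# tested hypotheses is true, studies have 50% power, and results
# are called "significant" at the 0.05 threshold.

def positive_predictive_value(prior, power, alpha):
    """Probability that a 'significant' finding reflects a true effect."""
    true_positives = prior * power          # true hypotheses that test positive
    false_positives = (1 - prior) * alpha   # false hypotheses that test positive
    return true_positives / (true_positives + false_positives)

ppv = positive_predictive_value(prior=0.1, power=0.5, alpha=0.05)
print(f"{ppv:.0%} of 'significant' findings would be true")  # → 53%
```

Under these assumptions, nearly half the "significant" findings are false, and any bias toward publishing the most spectacular results makes the picture worse.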
The cherry on this ice-cream sundae of doubt is a critique by psychologist Philip Tetlock of expertise in soft sciences, such as politics, history, and economics. In his 2005 book Expert Political Judgment, Tetlock presents the results of his 20-year study of the ability of 284 "experts" in politics and economics to make predictions about current affairs. The experts did worse than random guessing, or "dart-throwing monkeys," as Tetlock puts it.
Like Ioannidis, Tetlock found a correlation between the prominence of experts and their fallibility: the more wrong the experts were, the more visible they were in the media. The reason, he conjectures, is that experts who make dramatic claims are more likely to get air time on CNN or column inches in The Washington Post, and dramatic claims are likelier to be wrong.
For comic relief, I tell my students about a maze study, cited by Tetlock, that pitted rats against Yale undergraduates. Sixty percent of the time, researchers placed food on the left side of a fork in the maze; otherwise the food was placed randomly. After figuring out that the food was more often on the left side of the fork, the rats turned left every time and so were right 60 percent of the time. Yale students, discerning illusory patterns of left-right placement, guessed right only 52 percent of the time. Yes, the rats beat the Yalies! The smarter you are, the more likely you are to "discover" patterns in the world that aren't actually there.
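The students' 52 percent is no accident: psychologists call their strategy "probability matching." Always picking the more common side wins 60 percent of the time, while matching your guesses to the 60/40 frequencies wins only 0.6 × 0.6 + 0.4 × 0.4 = 52 percent. A quick simulation (illustrative only, not the original experiment's protocol) reproduces both numbers:

```python
import random

random.seed(42)
TRIALS = 100_000

# Food is on the left 60% of the time.
food_left = [random.random() < 0.6 for _ in range(TRIALS)]

# "Rat" strategy: always choose the more common side (left).
rat_wins = sum(food_left) / TRIALS

# "Student" strategy: probability matching — guess left 60% of the time.
student_guesses = (random.random() < 0.6 for _ in range(TRIALS))
student_wins = sum(
    guess == actual for actual, guess in zip(food_left, student_guesses)
) / TRIALS

print(f"always-left: {rat_wins:.1%}, probability matching: {student_wins:.1%}")
# ≈ 60% vs ≈ 52%
```

The simulation converges on the arithmetic above: maximizing beats matching whenever the two outcomes are not equally likely.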
So how do my students respond to my skeptical teaching? Some react with healthy pushback, especially to my suggestion that the era of really big scientific discoveries might be over. "On a scale from toddler knowledge to ultimate enlightenment, man's understanding of the universe could be anywhere," wrote a student named Matt. "How can a person say with certainty that everything is known or close to being known if it is incomparable to anything?"
Other students embrace skepticism to a degree that dismays me. Cecelia, a biomedical-engineering major, wrote: "I am skeptical of the methods used to collect data on climate change, the analysis of this data, and the predictions made based on this data." Pondering the lesson that correlation does not equal causation, Steve questioned the foundations of scientific reasoning. "How do we know there is a cause for anything?" he asked.
In a similar vein, some students echoed the claim of radical postmodernists that we can never really know anything for certain, and hence that almost all our current theories will probably be overturned. Just as Aristotle's physics gave way to Newton's, which in turn yielded to Einstein's, so our current theories of physics will surely be replaced by radically different ones.
After one especially doubt-riddled crop of papers, I responded, "Whoa!" (or words to that effect). Science, I lectured sternly, has established many facts about reality beyond a reasonable doubt, embodied by quantum mechanics, general relativity, the theory of evolution, the genetic code. This knowledge has yielded applications—from vaccines to computer chips—that have transformed our world in countless ways. It is precisely because science is such a powerful mode of knowledge, I said, that you must treat new pronouncements skeptically, carefully distinguishing the genuine from the spurious. But you shouldn't be so skeptical that you deny the possibility of achieving any knowledge at all.
My students listened politely, but I could see the doubt in their eyes. We professors have a duty to teach our students to be skeptical. But we also have to accept that, if we do our jobs well, their skepticism may turn on us.