The perils of hindsight judgment

This article was published in Scientific American’s former blog network and reflects the views of the author, not necessarily those of Scientific American.


Paul Meehl was renowned for many things: his insistence on statistical and research rigor; his prescient views on schizophrenia; his advances in psychotherapy; his creation of one of the scales of the Minnesota Multiphasic Personality Inventory (MMPI), one of the most widely used personality tests in clinical research and practice. He is equally famous for his aversion to case conferences. “We never see Dr. Meehl at a case conference,” whines one of Meehl’s hypothetical students. “Why is this?” That imagined lament is the starting point for one of Meehl’s most widely cited papers, “Why I do not attend case conferences.”

While Meehl’s response is multifold, it mostly boils down to one overarching point: he hates the countless biases and fallacies that he encounters at such meetings. (As a side note, Nobel laureate Daniel Kahneman cites Meehl as a major inspiration for his own work on heuristics and biases.) These include such gems as the buddy-buddy syndrome (“intelligent, educated, sane, rational persons seem to undergo a kind of intellectual deterioration when they gather around a table in one room”); the Multiple Napoleons fallacy (“Well, it may not be ‘real’ to us, but it’s ‘real’ to him.” So what if he thinks he’s Napoleon?); the Uncle George’s Pancakes fallacy (“A patient does not like to throw away leftover pancakes and he stores them in the attic. A mitigating clinician says, ‘Why, there is nothing so terrible about that—I remember good ole Uncle George from my childhood, he used to store uneaten pancakes in the attic’”); and the aptly named crummy criterion fallacy (“Many clinical psychology trainees (and some full professors) persist in a naive undergraduate view of psychometric validity”). They also include something that I’ve been reminded of all too frequently in the last few weeks: a tendency for clinicians to argue that they had known the outcome of a case all along. The way it turned out? A foregone conclusion.

Maybe clinicians really are prescient in their areas of expertise? Not so, says Meehl. They claim to have known only after the fact, as if their prior beliefs had been erased by a selective retrograde amnesia.


Meehl didn’t have a catchy name for the phenomenon; he was merely irritated by it. But a few years after he published his thoughts, a graduate student who chanced upon his paper in a Hebrew University seminar became intrigued. Could it be, wondered Baruch Fischhoff, that this was a pervasive bias in our judgment and decision making—and one that Daniel Kahneman and Amos Tversky, who were then developing the entire field of biases and heuristics, had yet to unearth? It could, and it was. Fischhoff would devote much of the next thirty years to developing the idea that has since become known as hindsight bias, our perfect, 20/20 vision for the past—and the related belief that what has become obvious in retrospect was obvious all along.

In his original work on the effect, Fischhoff called it creeping determinism, as a nod to historian Georges Florovsky’s argument that “the tendency toward determinism is somehow implied in the method of retrospection itself. In retrospect, we seem to perceive the logic of the events which unfold themselves in a regular or linear fashion according to a recognizable pattern with an alleged inner necessity.” In a series of three studies, Fischhoff had one group of participants read 150-word descriptions of either historical or clinical events, followed by four possible outcomes. A second group received the same description—along with one additional sentence that called out one of the four outcomes as the “true” one. All subjects then had to estimate the likelihood of each outcome’s occurrence, assuming the conclusion was not yet known, as well as evaluate the relevance of each data point in the description.

In each case, participants who had been told that one of the outcomes had actually happened rated it as more likely: over 70% of informed participants judged that outcome more probable than their uninformed counterparts did. In other words, merely learning that something had happened made it seem to have been more probable all along.

What’s more, these subjects also rated the parts of the event description that supported the known conclusion as more relevant than the other details. Even people who were explicitly instructed to ignore any outcome knowledge (that is, to pretend they didn’t know how things would turn out) and to judge as rationally as possible were unable to do so. Their knowledge prevented them from forming unbiased opinions, yet they remained oblivious to its effect, convinced that they had performed admirably.
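To see the logic of the comparison in concrete terms, here is a minimal sketch (not Fischhoff’s actual data or analysis; all numbers, group sizes, and names are hypothetical) of how one might quantify such an effect: simulate probability ratings for the reported “true” outcome from an uninformed (foresight) group and an informed (hindsight) group, then compare the group means.

```python
import random

random.seed(0)  # reproducible illustration

def simulate_rating(informed: bool) -> float:
    """Return a simulated probability assigned to the reported 'true' outcome.

    Hypothetical numbers: with four possible outcomes, foresight raters hover
    around 0.25, while hindsight raters drift upward, mimicking creeping
    determinism. These parameters are assumptions, not Fischhoff's results.
    """
    base = 0.45 if informed else 0.25
    return min(max(random.gauss(base, 0.10), 0.0), 1.0)

# Two groups of 100 simulated participants rate the same event description.
foresight = [simulate_rating(informed=False) for _ in range(100)]
hindsight = [simulate_rating(informed=True) for _ in range(100)]

mean_f = sum(foresight) / len(foresight)
mean_h = sum(hindsight) / len(hindsight)

# The hindsight effect here is simply the gap between the group means.
print(f"foresight mean: {mean_f:.2f}")
print(f"hindsight mean: {mean_h:.2f}")
print(f"estimated hindsight effect: {mean_h - mean_f:+.2f}")
```

The design’s key feature is that the two groups differ only in whether the outcome sentence was included, so any gap in the assigned probabilities can be attributed to outcome knowledge rather than to the event descriptions themselves.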

In a fourth study, Fischhoff tested his insights on real-world events: Nixon’s visits to China and the USSR. Before the trips took place, he asked participants to estimate the likelihood of various events that might occur, such as Nixon meeting Mao or visiting Lenin’s tomb. He then contacted everyone a second time, two weeks to six months after the visits, and asked two questions: what, to the best of their recollection, had they originally predicted for each event, and did they believe it had actually taken place?

Fischhoff found that people had an awfully hard time with accurate recall. Instead, they gave retroactively higher probabilities to events they thought had occurred, and retroactively lower probabilities to those events that they thought had not in fact taken place—all the while remaining convinced that they were accurately reporting on their own prior judgments.

The same results have since been replicated many times, in many contexts. All point to the same conclusions: we are very bad at remembering our own past judgments, and equally terrible at maintaining any semblance of objectivity when faced with known outcomes.

Those conclusions, it seems to me, are especially relevant as we seek to assign blame in the wake of the Boston Marathon bombings. Watching the media pick apart the role of the FBI and Homeland Security in anticipating (or rather, failing to anticipate) the events of that Monday, I can’t help but feel an intense sense of déjà vu: after 9/11, we heard the exact same stories about what the intelligence community should and should not have known, who failed to inform whom and when, and what conclusions should have been drawn from the available intelligence.

It happens time and time again: when something terrible occurs, be it a bombing or a school shooting or a co-worker or mother or father who suddenly snaps, we pick apart the “telltale” warning signs and ask, should we have known? Should someone have known? Actually, we tend to phrase that sentiment as more of a statement: we should have known. Someone should have known. Why didn’t anyone do anything when they still could? It doesn’t matter if the anyone is a psychologist who should have seen evidence of dangerous instability, or an intelligence agency that should have picked out just the right piece of data at the right moment, or a friend who should have realized something was off. They should have known. It all seemed so obvious. And somehow, they missed it.

What we fail to realize in these moments of blame-giving is just how biased our state of knowledge really is. Everything seems clear in retrospect, but in the moment, what can you really tell amid all the noise? Intelligence analysis is exceedingly difficult. Psychoanalysis and cognitive therapy are exceedingly difficult. Any time a continuous stream of information has to be picked apart and analyzed in the moment, the task is exceedingly difficult. For each true signal, there is endless static. And before the fact? The signals are not nearly as clear as they seem in retrospect.

Still, we rush to condemn, to talk about communication failures and analysis failures and whatever-else failures. And through it all, we assign greater blame to experts than we do to anyone else—they had to have known; they’re experts after all!—and judge them all the more harshly. As Kahneman points out, when experts with the best information turn out to be wrong, the things they failed to do “often appear almost inevitable after they occur.” What was a “reasonable gamble” now looks like a “foolish mistake.”

And yes, failures certainly existed: in the case of the marathon, during 9/11, during every intelligence “failure” that has been analyzed and re-analyzed to death. But the story is never as clear-cut as we would like it to be. I’m not arguing that we stop asking questions or stop pushing government agencies and private professionals alike to do their utmost to improve. I’d just like us to wipe the smugness off our faces, the certainty from our headlines, the recriminations from our thinking. Everything is always far muddier than it appears with the benefit of hindsight. And no matter how fair we think we’re being in our judgments, it’s safe to assume that we’re not, really. Not in the least. It’s easy enough to predict the future once it has become the past, and awfully hard to discern it in the present.

Sources

Fischhoff, B. (1975). Hindsight is not equal to foresight: The effect of outcome knowledge on judgment under uncertainty. Journal of Experimental Psychology: Human Perception and Performance, 1(3), 288-299. DOI: 10.1037//0096-1523.1.3.288

Fischhoff, B. (2007). An early history of hindsight research. Social Cognition, 25(1), 10-13. DOI: 10.1521/soco.2007.25.1.10

Guilbault, R., Bryant, F., Brockway, J., & Posavac, E. (2004). A meta-analysis of research on hindsight bias. Basic and Applied Social Psychology, 26(2-3), 103-117. DOI: 10.1080/01973533.2004.9646399

Kahneman, D., & Riepe, M. (1998). Aspects of investor psychology. The Journal of Portfolio Management, 24(4), 52-65. DOI: 10.3905/jpm.1998.409643

 

Image credits: Pancake pile: rcstanley, Flickr, Creative Commons license. Nixon and Mao: public domain, via Wikimedia Commons. Rear mirror: Tim J. Keegan, Flickr, Creative Commons license.

Maria Konnikova is a science journalist and professional poker player. She is author of the best-selling books The Biggest Bluff (Penguin Press, 2020), The Confidence Game (Viking Press, 2016) and Mastermind: How to Think Like Sherlock Holmes (Viking Press, 2013).
