
Chasing Laplace's Demon



Warming up - Here's your task: Come up with a stock market portfolio in the next 30 seconds (if you already have a portfolio, pretend you don't for the sake of this exercise). It's OK. I'll wait.

Now I'm sure that wasn't exactly easy, but I'm going to venture a guess (assuming you're not a market analyst or any other kind of investment expert) that your strategy consisted mainly of thinking of the biggest, best-known corporations you could. Google? Apple? Facebook? Dow Chemical? And there's a good chance you allocated your funds evenly across the companies you picked. What else can a person without expertise really do, after all?

Nothing surprising so far, I'm sure. But things get interesting when we find that this kind of uninformed decision-making can actually outperform the sophisticated portfolios of investment experts, mutual funds, and the like. Somehow, people with less information relevant to the decision at hand often (but not always) do better than the pros.




To understand how this happens, we can look to the work of groups like the Center for Adaptive Behavior and Cognition (ABC), which aims to understand how and why simple strategies using limited information, time, and processing power actually turn out to be highly effective in many contexts. In our stock market example, it's the recognition and 1/N heuristics we have to thank, but I'll come back to that topic later.

The "rational" stance - "Why smart people are stupid" is the paradoxical question Jonah Lehrer aims to answer in his article by the same name. The piece summarizes and comments on this paper from the Journal of Personality and Social Psychology, which presents two principal findings: greater intelligence or "cognitive sophistication" does not seem to free people from classic decision-making biases (like base rate neglect and the anchoring bias), and greater intelligence actually seems to increase susceptibility to the bias blind spot, the tendency to not detect biases in one's own behavior.

This is an interesting, if controversial, finding worthy of further investigation (though one might question why greater intelligence should necessarily bring a greater capacity for critically evaluating one's own behavior... I leave that for you to mull over). But what's important for the present discussion is the framework in which Lehrer interprets the study. Consider this passage from his article:

When people face an uncertain situation, they don't carefully evaluate the information or look up relevant statistics. Instead, their decisions depend on a long list of mental shortcuts, which often lead them to make foolish decisions.

Now look again at the article's title - "Why smart people are stupid" - and consider its implications. Could there be a more explicit condemnation of "biased" behavior, of these "mental shortcuts"? It would seem not. But is that reputation really deserved?

Why smart people are biased, not stupid - There are no doubt countless cases where being "biased", utilizing mental shortcuts or heuristics, can get you into trouble (read the example here, for instance), but heuristics can also do us plenty of good. Let's return to the stock market example now. Here is a case where a "biased" behavior - simply picking stocks based on the big-name companies one easily recognizes - is actually beneficial. So what's going on?

What non-experts often rely on in this case is a combination of the recognition and 1/N heuristics. The first is a mental shortcut that effectively says: "If you need to decide which of two things is better on some criterion, and you've heard of one but not the other, go with the one you've heard of." It works in cases like the stock market scenario (and many others) when the simple recognition process dovetails with the attribute being judged (here, the future profitability of the investment). This can happen for stocks (at least in a bull market) because the more recognizable a company is, the more likely it is to be big and profitable. The 1/N heuristic is another simple rule: distribute your resources evenly across the alternatives. For stocks, this means splitting your money evenly across investments, but the rule shows up in other domains too, such as how parents divide their time among children. Despite its simplicity, the 1/N rule has been demonstrated to outperform many far more sophisticated prediction models.
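To make the two rules concrete, here is a minimal sketch in Python. It is purely illustrative (and certainly not investment advice): the company names and the recognition test are hypothetical placeholders, not data from any of the studies cited below.

```python
# Sketch of the recognition and 1/N heuristics. The candidate list and
# the `recognizes` predicate are hypothetical stand-ins for a survey of
# which firms a layperson has actually heard of.

def recognition_filter(candidates, recognizes):
    """Recognition heuristic: keep only the options you've heard of."""
    return [c for c in candidates if recognizes(c)]

def one_over_n_portfolio(stocks, budget):
    """1/N heuristic: split the budget evenly across the alternatives."""
    return {s: budget / len(stocks) for s in stocks}

candidates = ["Google", "Apple", "Facebook", "Dow Chemical", "Obscure Corp"]
picked = recognition_filter(candidates, lambda c: c != "Obscure Corp")
print(one_over_n_portfolio(picked, 10_000.0))
# {'Google': 2500.0, 'Apple': 2500.0, 'Facebook': 2500.0, 'Dow Chemical': 2500.0}
```

Note that neither rule requires any market data at all; everything hinges on whether recognition happens to correlate with the attribute being judged.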

Now recognition, the 1/N rule, and other heuristics clearly won't work in just any situation, and an important task for researchers studying them is to understand what defines the decision environments in which a given heuristic will prove useful. But this domain-specificity is precisely in line with the "ecological rationality" stance taken by the ABC research center and other proponents of heuristics research. The idea is that evolution equipped humans with an adaptive toolbox - a set of problem-solving rules of thumb - tuned to the various challenges posed by our ancestral environments. We can be ecologically rational by deploying the right tool in the appropriate situation. The advantage of these rules is their simplicity and robustness: when appropriately applied, they can perform more quickly and accurately than general-purpose mechanisms across a wide variety of environments.

Cool it, Laplace - My discussion of heuristics research here is necessarily brief, but the point I hope to convey is that showing a "bias" by using a mental shortcut like the recognition heuristic shouldn't be taken as the mark of irrationality, and shouldn't be assumed to be a path to "foolish decisions". Yes, heuristics will typically fail us when deployed in the wrong environment, but most of the time humans are pretty good at using the right mental tool at the right time. We are, to our cores, boundedly rational. Evolution may have equipped us with some impressive machinery in the brain department, but even with billions of neurons we cannot weigh more than a tiny fraction of the myriad possible combinations of options and consequences in the decisions we face every day. Instead, we are built to operate with limited time, processing power, and available information. To imagine a truly rational human being, one unbounded by these constraints, is to imagine nothing short of a Laplacean demon. For those unfamiliar with the term, it refers to the hypothetical super-intelligence proposed by the French mathematician Pierre-Simon Laplace: an intellect that, knowing the state of every atom in the universe, could compute the entire past and future. The thought experiment is associated with the first published description of scientific determinism.
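To put a rough number on that combinatorial point, consider a toy calculation (the numbers are purely illustrative, not from any study): exhaustively evaluating n interdependent yes-or-no choices means considering 2^n outcome combinations, so even thirty such choices generate over a billion possibilities.

```python
# Illustrative arithmetic only: exhaustive "rational" evaluation of n
# interdependent binary choices requires weighing 2**n combinations.
for n in (5, 10, 20, 30):
    print(f"{n:>2} binary choices -> {2 ** n:>13,} outcome combinations")
# 30 binary choices -> 1,073,741,824 outcome combinations
```

Real decisions rarely involve binary options, so the blow-up is, if anything, understated; this is the wall any unbounded notion of rationality runs into.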

Would Lehrer or anyone else really argue that humans are truly and completely rational agents, untouched by the finiteness of time or of their own intellects? Of course not; to claim so would be ludicrous. But here's the rub: when we equate "irrational" heuristic decision-making with stupidity, when we treat the deployment of our adaptive toolboxes as failure, we deeply misunderstand what cognition is for. We imagine an infinitely rational Laplacean demon as the ideal of what a human ought to be, forgetting that evolution doesn't push organisms toward some picture of rational perfection, but toward solving the real problems posed by their environments. Our brains evolved to find food and mates, avoid predators, protect and raise offspring, and meet other adaptive challenges. To think we are meant to be perfectly rational, and then to cite biased decision-making as a failure to meet that ideal, is to strike down a straw man. It's akin to calling optical illusions failures of the visual system to build an accurate model of the world. Sure, our eyes and minds may not get things perfectly right, but to expect that is to miss the point. Vision exists to guide our movement through and interaction with the world, not to build a copy of the world in our heads. Similarly, our decision-making capacities are built not for perfect, exhaustive analyses, but for making quick, accurate-enough decisions that capitalize on the structure of the environment.

Lehrer's article makes it clear that the myth of humans as unboundedly rational agents lives on. And it's not just in folk psychology and popular science; a long tradition of this kind of thinking still permeates academic research in psychology, economics, and other fields. It's time we were honest with ourselves about what it is to be human. The sooner we can stop chasing Laplace's demon, the sooner we can start understanding how human rationality really works.

If you want to learn more about heuristics research...

I suggest checking out a good review paper (Gigerenzer & Gaissmaier, 2011), this book for a more formal treatment (Gigerenzer & Todd, 1999), or this one for something with a PopSci flavor (Gigerenzer, 2007).

References cited:

Boyd, M. (2001). On ignorance, intuition and investing: A bear market test of the recognition heuristic. Journal of Psychology and Financial Markets, 2, 150-156.

DeMiguel, V., Garlappi, L., & Uppal, R. (2009). Optimal versus naive diversification: How inefficient is the 1/N portfolio strategy? Review of Financial Studies, 22(5), 1915-1953.

Gigerenzer, G. (2007). Gut Feelings: The Intelligence of the Unconscious. Penguin Books, USA.

Gigerenzer, G., & Gaissmaier, W. (2011). Heuristic decision making. Annual Review of Psychology, 62, 451-482.

Gigerenzer, G., & Todd, P. M. (1999). Simple Heuristics That Make Us Smart. Oxford University Press, USA.

Lehrer, J. (2012, June 12). Why smart people are stupid. The New Yorker. Retrieved from http://www.newyorker.com/online/blogs/frontal-cortex/2012/06/daniel-kahneman-bias-studies.html

Ortmann, A., Gigerenzer, G., Borges, B., & Goldstein, D. G. (2008). The recognition heuristic: A fast and frugal way to investment choice? In C. R. Plott & V. L. Smith (Eds.), Handbook of Experimental Economics Results: Vol. 1 (Handbooks in Economics No. 28, pp. 993-1003). Amsterdam: North-Holland.

Pachur, T., Todd, P. M., Gigerenzer, G., Schooler, L. J., & Goldstein, D. G. (2011). The recognition heuristic: A review of theory and tests. Frontiers in Psychology, 2, 147.

West, R. F., Meserve, R. J., & Stanovich, K. E. (2012). Cognitive sophistication does not attenuate the bias blind spot. Journal of Personality and Social Psychology, 103(3), 506-519.

Author's note: My choice to write on this topic was made independently of Lehrer's recent media attention. I selected his article only because it's a current and accessible manifestation of a broader issue I wanted to discuss.

Jared Lorince is a third-year PhD student at Indiana University. His research focuses on how people search for information in Web environments, and he's especially interested in collaborative tagging systems and the decision-making strategies people use when deciding what and how to tag. He is also co-founder of the blog Motivate.Play, which explores issues at the intersection of the social sciences and games.
