Intuitions, scientific methodology, and the challenge of not getting fooled.


At Context and Variation, Kate Clancy has posted some advice for researchers in evolutionary psychology who want to build reliable knowledge about the phenomena they're trying to study. This advice, of course, is prompted in part by methodology that is not so good for scientific knowledge-building. Kate writes:

The biggest problem, to my mind, is that so often the conclusions of the bad sort of evolutionary psychology match the stereotypes and cultural expectations we already hold about the world: more feminine women are more beautiful, more masculine men more handsome; appearance is important to men while wealth is important to women; women are prone to flighty changes in political and partner preference depending on the phase of their menstrual cycles. Rather than clue people in to problems with research design or interpretation, this alignment with stereotype further confirms the study. Variation gets erased: in bad evolutionary psychology, there are only straight people, and everyone wants the same things in life. …

No one should ever love their idea so much that it becomes detached from reality.

It's a lovely post about the challenges of good scientific methodology when studying human behavior (and why it matters to more than just scientists), so you should read the whole thing.


Kate's post also puts me in mind of some broader issues that scientists should revisit from time to time to keep themselves honest. I'm putting some of those on the table here.

Let's start with a quotable quote from Richard Feynman:

The first principle is that you must not fool yourself, and you are the easiest person to fool.

Scientists are trying to build reliable knowledge about the world from information that they know is necessarily incomplete. There are many ways to interpret the collections of empirical data we have on hand -- indeed, many contradictory ways to interpret them. This means that lots of the possible interpretations will be wrong.

You don't want to draw the wrong conclusion from the available data, not if you can possibly avoid it. Feynman's "first principle" is a reminder that we need to be on guard against letting ourselves be fooled by wrong conclusions -- and on guard against the peculiar ways in which we are especially vulnerable to being fooled.

This means we have to talk about our attachment to intuitions. All scientists have intuitions. They surely help in motivating questions to ask about the world and strategies for finding good answers to them. But intuitions, no matter how strong, are not the same as empirical evidence.

Making things more challenging, our strong intuitions can shape what we take to be the empirical evidence. They can play a role in which results we set aside because they "couldn't be right," in which features of a system we pay attention to and which we ignore, in which questions we bother to ask in the first place. If we don't notice the operation of our intuitions, and the way they impact our view of the empirical evidence, we're making it easier to get fooled. Indeed, if our intuitions are very strong, we're essentially fooling ourselves.

As if this weren't enough, we humans (and, by extension, human scientists) are not always great at recognizing when we are in the grips of our intuitions. It can feel like we're examining a phenomenon to answer a question while refraining from making any assumptions to guide our inquiry, but chances are it's not a feeling we should trust.

This is not to say that our intuitions are guaranteed a safe haven from our noticing them. We can become aware of them and try to neutralize the extent to which they, rather than the empirical evidence, are driving the scientific story -- but to do this, we tend to need help from people who have conflicting intuitions about the same bit of the world. This is a good methodological reason to take account of the assumptions and intuitions of others, especially when they conflict with our own.

What happens if there are intuitions about which we all agree -- assumptions we are making (and may well be unaware that we're making, because they seem so bleeding obvious) with which no one disagrees? I don't know that there are any such universal human intuitions. It seems unlikely to me, but I can't rule out the possibility. What would they mean for our efforts at scientific knowledge-building?

First, we would probably want to recognize that the universality of an intuition still wouldn't turn it into independent empirical evidence. Even if, prior to Galileo, or Copernicus, or Aristarchus of Samos, every human had taken it as utterly obvious that Earth is stationary, that intuition could still have been wrong. As it happened, it was an intuition that was eventually questioned, though not without serious resistance.

Developing a capacity to question the obvious, and also to recognize and articulate what it is we're taking to be obvious in order that we might question it, seems like a crucial skill for scientists to cultivate.

But, as I think comes out quite clearly in Kate's post, there are some intuitions that, even once we've recognized them, may be extremely difficult to subject to empirical test. This doesn't mean that the questions connected in our heads to these intuitions are outside the realm of scientific inquiry, but it would be foolish not to notice how hard it is likely to be to find good scientific answers to them. We need to be wary of the way our intuitions try to stack the evidential deck. We need to acknowledge that the very fact of our having strong intuitions doesn't count as empirical evidence in their favor. We need to come to grips with the possibility that our intuitions could be wrong -- perhaps to the extent of recognizing that empirical results that seem to support our intuitions require extra scrutiny, just to be sure.

To do any less is to ask to be fooled, and that's the outcome scientific knowledge-building is trying to avoid.