
Humanities aren't a science. Stop treating them like one.



There’s a certain allure to the elegance of mathematics, the precision of the hard sciences. That much is undeniable. But does the appeal mean that quantitative approaches are always germane? Hardly—and I doubt anyone would argue the contrary. Yet time and again, researchers and scholars have felt the need to take clear-cut, scientific-seeming approaches to disciplines that have, until recent memory, been far from any notion of precise quantifiability. And the trend is an alarming one.

Take, for instance, a recent paper that draws conclusions about the relative likelihood that certain stories are originally based in real-world events by looking at the (very complicated) mathematics of social networks. The researchers first model what the properties of real social networks look like. They then apply that model to certain texts (Beowulf, the Iliad, and Táin Bó Cuailnge, on the mythological end, and Les Misérables, Richard III, the Fellowship of the Ring, and Harry Potter on the fictional end) to see how much the internal social networks of the characters resemble those that exist in real life. And then, based on that resemblance, they conclude which narratives are more likely to have originated in actual history: to wit, Beowulf and the Iliad are more likely reality-based than Shakespeare or Tolkien or—gasp—even that most real-life-like of narratives, Harry Potter. (Táin, on the other hand, isn’t very lifelike at all—but if you remove the six central characters, which you can totally do since they are likely amalgams of real ones, it, too, starts looking historical.)
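(For the curious, the flavor of the method is easy to sketch. Below is a minimal, hypothetical illustration in Python using the networkx library; the character names and interactions are placeholders of my own, not the study’s data, and the actual paper computes a much wider battery of network statistics.)

```python
# A minimal, hypothetical sketch of the kind of comparison described above,
# not the study's actual pipeline. Names and edges are illustrative only.
import networkx as nx

# A hand-compiled list of which characters interact with which.
edges = [("Beowulf", "Hrothgar"), ("Beowulf", "Wiglaf"),
         ("Hrothgar", "Unferth"), ("Unferth", "Wiglaf"),
         ("Beowulf", "Unferth")]
story = nx.Graph(edges)

# Real social networks tend to be assortative (well-connected people know
# other well-connected people) and highly clustered. The question is whether
# a narrative's character network shares those signatures.
print("assortativity:", nx.degree_assortativity_coefficient(story))
print("clustering:   ", nx.average_clustering(story))

# A random graph with the same number of nodes and edges serves as a baseline
# for how much clustering chance alone would produce.
baseline = nx.gnm_random_graph(story.number_of_nodes(),
                               story.number_of_edges(), seed=1)
print("baseline clustering:", nx.average_clustering(baseline))
```

Even in this toy form, the logic is plain: compute a few signatures, compare them to real-world benchmarks, and declare the closer match the more “historical.”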

But what is the analysis really doing? And more pressingly: what is the point? Is such work really a good use of scholarly resources (and of British taxpayers’ money, as the university that’s home to the study is publicly funded)?


I’m skeptical of this kind of approach—and not at all sure that it adds anything to our understanding of, well, anything. What is it really capturing, for one? A social network isn’t some immutable thing. Consider the external factors that help determine what an actual social network—and a literary one especially—might look like at any given point: the culture within which each work was created, the writing and storytelling conventions of the time, whether the work was single- or multi-authored, whether it was part of oral lore or written on the spot. The list goes on and on. You can’t compare the networks of War and Peace and The Corrections, though both are weighty works of literary fiction, to see if one is more “real” than the other. Literary conventions change. Genre conventions change. Societal conventions change. And is today’s real-world social network really comparable, on any number of levels, to one from, say, a thousand, five hundred, or even one hundred years ago?

I don’t mean to pick on this single paper. It’s simply a timely illustration of a far deeper trend, a tendency that runs through almost all of the humanities and social sciences, from literature to psychology, history to political science. Every softer discipline these days seems to feel inadequate unless it becomes harder, more quantifiable, more scientific, more precise. That, it seems, would confer some sort of missing legitimacy in our computerized, digitized, number-happy world. But does it really? Or is it actually undermining the very heart of each discipline that falls into the trap of data, numbers, statistics, and charts? Because here’s the truth: most of these disciplines aren’t quantifiable, scientific, or precise. They are messy and complicated. And when you try to straighten out the tangle, you may find that you lose far more than you gain.

It’s one of the things that irked me about political science and that irks me about psychology: the reliance on, and even insistence on, increasingly fancy statistics and data sets to prove any given point, whether or not it lends itself to that kind of proof. I’m not alone in thinking that such a blanket approach ruins the basic nature of the inquiry. Just consider this review of Jerome Kagan’s new book, Psychology’s Ghosts, by the social psychologist Carol Tavris. “Many researchers fail to consider that their measurements of brains, behavior and self-reported experience are profoundly influenced by their subjects' culture, class and experience, as well as by the situation in which the research is conducted,” Tavris writes. “This is not a new concern, but it takes on a special urgency in this era of high-tech inspired biological reductionism.” The tools of hard science have a part to play, but they are far from the whole story. Forget the qualitative, unquantifiable and irreducible elements, and you are left with so much junk.

Kagan himself analyzes the problem in the context of developmental psychology:

An adolescent's feeling of shame because a parent is uneducated, unemployed, and alcoholic cannot be translated into words or phrases that name only the properties of genes, proteins, neurons, neurotransmitters, hormones, receptors, and circuits without losing a substantial amount of meaning.

Sometimes, there is no easy approach to studying the intricate vagaries that are the human mind and human behavior. Sometimes, we have to be okay with qualitative questions and approaches that, while reliable and valid and experimentally sound, do not lend themselves to an easy linear narrative—or a narrative that has a base in hard science or concrete math and statistics. Psychology is not a natural science. It’s a social science. And it shouldn’t try to be what it’s not.

Literature, psychology, and the list of culprits goes on. In a recent column for the New York Times, Richard Polt expresses the same skepticism with respect to human morality. “Any understanding of human good and evil,” he writes, “has to deal with phenomena that biology ignores or tries to explain away — such as decency, self-respect, integrity, honor, loyalty or justice.” And yet how often do researchers focus on biology, the “real” stuff, at the expense of all those other intangible, difficult-to-parse phenomena? How do you even begin to quantify or science-ify those, try as you may?

Even linguistic analysis, a less contentious area, is fraught with difficulties. Witness the debate in a recent New Yorker article on linguistics in forensics: for every expert who tells you that models and statistical analyses can establish something specific, there is another who makes a persuasive counter-case, and both have facts and historical examples aplenty to back up their claims. It’s hard to quantify and to draw precise conclusions when you deal with qualitative phenomena, but the temptation to do so remains.

Nowhere is that temptation more evident than in history, where quantification and precise explanation are so incredibly enticing—and so politically useful. Witness the rise of Cliodynamics (no apologies to Clio, from whom it takes its name; I don’t think the muse would be overly thrilled): the use of scientific methodology (nonlinear mathematics, computer simulations, large-N statistical analyses, information technologies) to illuminate historical events—and, presumably, to predict when future “cycles” will occur.

Sure, there might be some insights gained. Economist Herbert Gintis calls the benefit analogous to an airplane’s black box: you can’t predict future plane crashes, but at least you can analyze what went wrong in the past. But when it comes to historical events—not nearly as defined or tangible or precise as a plane crash—so many things can easily prevent even that benefit from being realized.

To be of even that much use, each quantitative analysis must rely on comparable data—but historical records are spotty, and the available proxies differ from event to event, problems that don’t plague something like a plane crash. What’s more, each conclusion, each analysis, each input and output must be justified and qualified (same root as qualitative; coincidence?) by a historian who knows—really knows—what he’s doing. But can’t you just see the models taking on a life of their own, being used to make political statements and flashy headlines? It’s happened before. Time and time again. And what does history do, according to the cliodynamists, if not repeat itself?

It’s tempting to want things to be nice and neat. To rely on an important-seeming analysis instead of drowning in the quagmire of nuance and incomplete information. To think in black and white instead of grey. But in the end, no matter how meticulous you’ve been, history is not a hard science. Nor is literature. Or political science. Or ethics. Or linguistics. Or psychology. Or any number of other disciplines. They don’t care about your highly involved quantitative analysis. They behave by their own rules. And you know what? Whether you agree with me or not, what you think—and what I think—matters not a jot to them.

It’s tempting to think linearly and in easily graspable chunks. It would make things a whole lot easier and more manageable if everything came down to hard facts. Yes, we could say, we can predict this and avert that, explain this and understand that. But you know what? The cliodynamists, just like everyone else, will only know which cyclical predictions were accurate after the fact. Forgotten will be all of those that were totally wrong. And the analysts of myths wait only for the hits to make their point—but how many narratives that are obviously not based in reality show similar patterns? And whose reality are we dealing with, anyway? We’re not living in Isaac Asimov’s Foundation, with its psychohistorical trends and aspirations—however much easier it would be if we were.

We’re held back by the biases that plague almost all attempts to quantify the qualitative: selection on the dependent variable, and post hoc hypotheses and explanations. We look at instances where the effect exists and posit a cause—and forget all the times the exact same cause led to no visible effect, or to an effect that was altogether different. It’s so easy to tell stories based on models. It’s so hard to remember that they are nothing more than stories. (It’s not just history or literature. Much fMRI research is criticized for precisely this reason: if you don’t have an a priori hypothesis but then see something interesting, it’s all too tempting to explain its involvement after the fact and pretend that that’s what you’d meant to do all along. But the two approaches are not one and the same.)
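(To see how seductive that trap is, here is a toy simulation of my own devising, not data from any study mentioned above: an entirely irrelevant “cause” looks suggestive the moment you restrict your attention to the cases where the effect occurred.)

```python
# A toy illustration of selecting on the dependent variable. Everything here
# is hypothetical; no data from any study discussed in the text.
import random

random.seed(42)

# Each trial: a "cause" that is present half the time, and an "effect" that
# occurs 30% of the time, completely independently of the cause.
trials = [(random.random() < 0.5, random.random() < 0.3)
          for _ in range(100_000)]

# Select on the dependent variable: study only the cases where the effect
# occurred, and ask how often the supposed cause was present.
hits = [cause for cause, effect in trials if effect]
print("P(cause | effect):", sum(hits) / len(hits))  # ~0.5, looks suggestive

# The comparison we forgot to make: does the cause change the effect's rate?
with_cause = [effect for cause, effect in trials if cause]
without_cause = [effect for cause, effect in trials if not cause]
print("P(effect | cause):   ", sum(with_cause) / len(with_cause))        # ~0.3
print("P(effect | no cause):", sum(without_cause) / len(without_cause))  # ~0.3
```

Half of the “hits” carry the cause, which sounds like a finding; the full comparison shows the cause predicts nothing at all.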

I’m all for cross-disciplinary work. But this is something else.

When we relegate the humanities to a bunch of trends and statistics and frequencies, we get exactly the disconcerting and incongruous dystopia of Italo Calvino’s If on a Winter’s Night a Traveler: books reduced to nothing but word frequencies and trends, which tell you all you need to know about the work without your ever having to read it—and machines that then churn out future fake (or are they real?) books that have nothing to do with their supposed author. It’s a chilling thought.

The tools of mathematical and statistical and scientific analysis are invaluable. But their quantifiable certainty is all too easy to see as the only “real” way of doing things when really, it is but one tool and one approach—and not one that translates or applies to all manner of qualitative phenomena. That’s one basic fact we’d do well not to forget.

 

Mac Carron, P., & Kenna, R. (2012). Universal properties of mythological networks. EPL, 99, 28002. arXiv:1205.4324v2

 

Spinney, L. (2012). Human cycles: History as science. Nature, 488(7409), 24–26. PMID: 22859185

Maria Konnikova is a science journalist and professional poker player. She is author of the best-selling books The Biggest Bluff (Penguin Press, 2020), The Confidence Game (Viking Press, 2016) and Mastermind: How to Think Like Sherlock Holmes (Viking Press, 2013).
