Literally Psyched

Conceived in literature, tested in psychology

Humanities aren’t a science. Stop treating them like one.

The views expressed are those of the author and are not necessarily those of Scientific American.





Will math help determine the Iliad's historical accuracy? Image credit: G. V. Tischbein, public domain, Wikimedia Commons.

There’s a certain allure to the elegance of mathematics, the precision of the hard sciences. That much is undeniable. But does the appeal mean that quantitative approaches are always germane? Hardly—and I doubt anyone would argue the contrary. Yet, over and over, researchers and scholars have felt the need to take clear-cut, scientific-seeming approaches to disciplines that have, until recently, been far from any notion of precise quantifiability. And the trend is an alarming one.

Take, for instance, a recent paper that draws conclusions about the relative likelihood that certain stories are originally based in real-world events by looking at the (very complicated) mathematics of social networks. The researchers first model what the properties of real social networks look like. They then apply that model to certain texts (Beowulf, the Iliad, and Táin Bó Cuailnge, on the mythological end, and Les Misérables, Richard III, the Fellowship of the Ring, and Harry Potter on the fictional end) to see how much the internal social networks of the characters resemble those that exist in real life. And then, based on that resemblance, they conclude which narratives are more likely to have originated in actual history: to wit, Beowulf and the Iliad are more likely reality-based than Shakespeare or Tolkien or—gasp—even that most real-life-like of narratives, Harry Potter. (Táin, on the other hand, isn’t very lifelike at all—but if you remove the six central characters, which you can totally do since they are likely amalgams of real ones, it, too, starts looking historical.)
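The paper’s test, stripped to its mechanics, amounts to computing structural statistics of a character network and asking whether they fall in the range typical of real social networks, which tend to be highly clustered and assortative (well-connected people tend to know other well-connected people). Here is a minimal sketch of that idea in plain Python; the character graph and its edges are invented for illustration and have nothing to do with the authors’ actual data or code:

```python
# Toy illustration only: compute the kinds of network statistics the
# paper compares. Character names and edges are invented for this sketch.
from itertools import combinations

def degrees(adj):
    return {v: len(nbrs) for v, nbrs in adj.items()}

def mean_degree(adj):
    d = degrees(adj)
    return sum(d.values()) / len(d)

def clustering(adj):
    """Average local clustering coefficient (how cliquish the graph is)."""
    total = 0.0
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue  # nodes with fewer than 2 neighbours contribute 0
        links = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
        total += 2 * links / (k * (k - 1))
    return total / len(adj)

def assortativity(adj):
    """Pearson correlation between the degrees at either end of each edge.
    Real social networks are typically positive (assortative)."""
    d = degrees(adj)
    xs, ys = [], []
    for v, nbrs in adj.items():
        for u in nbrs:
            xs.append(d[v])
            ys.append(d[u])
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = (sum((x - mx) ** 2 for x in xs) / n) ** 0.5
    sy = (sum((y - my) ** 2 for y in ys) / n) ** 0.5
    return cov / (sx * sy)

# Invented co-appearance edges between characters (undirected).
edges = [("Achilles", "Patroclus"), ("Achilles", "Hector"),
         ("Hector", "Priam"), ("Achilles", "Priam"),
         ("Patroclus", "Hector"), ("Odysseus", "Achilles")]
adj = {}
for a, b in edges:
    adj.setdefault(a, set()).add(b)
    adj.setdefault(b, set()).add(a)

print(round(mean_degree(adj), 2),
      round(clustering(adj), 2),
      round(assortativity(adj), 2))  # → 2.4 0.6 -0.71
```

On this toy graph the numbers come out clustered but strongly disassortative, which is roughly the pattern the authors associate with fiction rather than reality. The sketch also shows how sensitive such statistics are to a handful of edges, which is part of the worry raised below.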

But what is the analysis really doing? And more pressingly: what is the point? Is such work really a good use of scholarly resources (and British tax dollars, as the university that’s home to the study is publicly funded)?

I’m skeptical of this kind of approach—and not at all sure that it adds anything to our understanding of, well, anything. What is it really capturing, for one? A social network isn’t just an immutable thing. Consider what external factors might be involved in determining what an actual social network—and a literary one especially—might look like at any given point: the culture within which each work was created, the writing and storytelling conventions of the time, whether the work is single or multi-authored in reality, a part of oral lore or written on the spot. The list goes on and on. You can’t compare the networks of War and Peace and The Corrections, though both are weighty works of literary fiction, to see if one is more “real” than the other. Literary conventions change. Genre conventions change. Societal conventions change. And is today’s real-world social network really comparable on any number of levels to one, say, a thousand, or even five or one hundred years ago?

Can we apply today's standards to analyzing "War and Peace"? Image: 1869 first edition, public domain, Wikimedia Commons.

I don’t mean to pick on this single paper. It’s simply a timely illustration of a far deeper trend, a tendency that is strong in almost all humanities and social sciences, from literature to psychology, history to political science. Every softer discipline these days seems to feel inadequate unless it becomes harder, more quantifiable, more scientific, more precise. That, it seems, would confer some sort of missing legitimacy in our computerized, digitized, number-happy world. But does it really? Or is it actually undermining the very heart of each discipline that falls into the trap of data, numbers, statistics, and charts? Because here’s the truth: most of these disciplines aren’t quantifiable, scientific, or precise. They are messy and complicated. And when you try to straighten out the tangle, you may find that you lose far more than you gain.

It’s one of the things that irked me about political science and that irks me about psychology—the reliance, insistence, even, on increasingly fancy statistics and data sets to prove any given point, whether it lends itself to that kind of proof or not. I’m not alone in thinking that such a blanket approach ruins the basic nature of the inquiry. Just consider this review of Jerome Kagan’s new book, Psychology’s Ghosts, by the social psychologist Carol Tavris. “Many researchers fail to consider that their measurements of brains, behavior and self-reported experience are profoundly influenced by their subjects’ culture, class and experience, as well as by the situation in which the research is conducted,” Tavris writes. “This is not a new concern, but it takes on a special urgency in this era of high-tech inspired biological reductionism.” The tools of hard science have a part to play, but they are far from the whole story. Forget the qualitative, unquantifiable and irreducible elements, and you are left with so much junk.

Kagan himself analyzes the problem in the context of developmental psychology:

An adolescent’s feeling of shame because a parent is uneducated, unemployed, and alcoholic cannot be translated into words or phrases that name only the properties of genes, proteins, neurons, neurotransmitters, hormones, receptors, and circuits without losing a substantial amount of meaning.

Sometimes, there is no easy approach to studying the intricate vagaries that are the human mind and human behavior. Sometimes, we have to be okay with qualitative questions and approaches that, while reliable and valid and experimentally sound, do not lend themselves to an easy linear narrative—or a narrative that has a base in hard science or concrete math and statistics. Psychology is not a natural science. It’s a social science. And it shouldn’t try to be what it’s not.

Literature, psychology: the list of culprits keeps growing. In a recent column for the New York Times, Richard Polt expresses the same cynicism with respect to human morality. “Any understanding of human good and evil,” he writes, “has to deal with phenomena that biology ignores or tries to explain away — such as decency, self-respect, integrity, honor, loyalty or justice.” And yet how often do researchers try to focus on biology, the “real” stuff, at the expense of all those other, intangible and difficult to parse phenomena? How do you even begin to quantify or science-ify those, try as you may?

Even linguistic analysis, an area that is less contentious, is fraught with difficulties. Witness the debate in a recent New Yorker article on linguistics in forensics: for every expert who tells you that models and statistical analyses can tell you something specific, there is one who makes a persuasive counter case—and both have facts and examples from history aplenty to back up their claims. It’s hard to quantify and to have precise conclusions when you deal with qualitative phenomena—but the temptation to do so remains.

Clio, the muse of history. Image credit: Artemisia Gentileschi, 1632, public domain, Wikimedia Commons.

Nowhere is that temptation more evident than in history, where quantification and precise explanation is so incredibly enticing—and so politically useful. Witness the rise of Cliodynamics (no apologies to Clio, from whom it takes its name; I don’t think the muse would be overly thrilled): the use of scientific methodology (nonlinear mathematics, computer simulations, large-N statistical analyses, information technologies) to illuminate historical events – and, presumably, be able to predict when future “cycles” will occur.

Sure, there might be some insights gained. Economist Herbert Gintis calls the benefit analogous to an airplane’s black box: you can’t predict future plane crashes, but at least you can analyze what went wrong in the past. But when it comes to historical events—not nearly as defined or tangible or precise as a plane crash—so many things can easily prevent even that benefit from being realized.

To be of equal use, each quantitative analysis must rely on comparable data – but historical records are spotty and available proxies differ from event to event, issues that don’t plague something like a plane crash. What’s more, each conclusion, each analysis, each input and output must be justified and qualified (same root as qualitative; coincidence?) by a historian who knows—really knows—what he’s doing. But can’t you just see the models taking on a life of their own, being used to make political statements and flashy headlines? It’s happened before. Time and time again. And what does history do, according to the cliodynamists, if not repeat itself?

What happens when a non-historian takes the reins on a historical model? Cartoon copyright: xkcd, http://xkcd.com/1063/

It’s tempting to want things to be nice and neat. To rely on an important-seeming analysis instead of drowning in the quagmire of nuance and incomplete information. To think black and white instead of grey. But in the end, no matter how meticulous you’ve been, history is not a hard science. Nor is literature. Or political science. Or ethics. Or linguistics. Or psychology. Or any number of other disciplines. They don’t care about your highly involved quantitative analysis. They behave by their own rules. And you know what? Whether you agree with me or not, what you think—and what I think—matters not a jot to them.

It’s tempting to think linearly and in easily graspable chunks. It would make things a whole lot easier and more manageable if everything came down to hard facts. Yes, we could say, we can predict this and avert that and explain this and understand that. But you know what? The cliodynamists, just like everyone else, will only know which cyclical predictions were accurate after the fact. Forgotten will be all of those that were totally wrong. And the analysts of myths only wait for the hits to make their point—but how many narratives that are obviously not based in reality have similar patterns? And whose reality are we dealing with, anyway? We’re not living in Isaac Asimov’s Foundation, with its psychohistorical trends and aspirations—as much as it would be easier if we were.

We’re held back by those biases that plague almost all attempts to quantify the qualitative, selection on the dependent variable and post hoc hypotheses and explanations. We look at instances where the effect exists and posit a cause—and forget all the times the exact same cause led to no visible effect, or to an effect that was altogether different. It’s so easy to tell stories based on models. It’s so hard to remember that they are nothing more than stories. (It’s not just history or literature. Much of fMRI research is criticized for precisely that reason: if you don’t have an a priori hypothesis but then see something interesting, it’s all too tempting to explain its involvement after the fact and pretend that that’s what you’d meant to do all along. But the two approaches are not one and the same.)
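That post hoc trap is easy to demonstrate with a toy simulation (a generic statistical illustration, not a re-analysis of anything discussed above): draw pure noise, run many comparisons, and some of them will look “significant” anyway. An analyst who scans the results first and hypothesizes afterward will always find something to explain.

```python
# Toy demonstration of why post hoc "discoveries" mislead: with no true
# effect anywhere, roughly 5% of comparisons still pass p < 0.05.
import random

random.seed(42)  # fixed seed so the run is reproducible

def fake_study(n=30):
    """Two groups drawn from the SAME distribution: no real effect."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    diff = sum(a) / n - sum(b) / n
    se = (2 / n) ** 0.5           # known sd = 1, so a z-test applies
    return abs(diff / se) > 1.96  # "significant" at p < 0.05

trials = 2000
false_hits = sum(fake_study() for _ in range(trials))
print(f"{false_hits} of {trials} noise-only comparisons look significant")
# Expect roughly 5% of trials (about 100) to "light up" by chance alone.
```

Report only the hits and invent a story for each, and the noise reads like a result: that is selection on the dependent variable in miniature.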

I’m all for cross-disciplinary work. But this is something else.

When we relegate the humanities to a bunch of trends and statistics and frequencies, we get exactly that disconcerting and incongruous dystopia of Italo Calvino’s If on a Winter’s Night a Traveler: books that have been reduced to nothing but word frequencies and trends, that tell you all you need to know about the work without your ever having to read it—and machines that then churn out future fake (or are they real?) books that have nothing to do with their supposed author. It’s a chilling thought.

The tools of mathematical and statistical and scientific analysis are invaluable. But their quantifiable certainty is all too easy to see as the only “real” way of doing things when really, it is but one tool and one approach—and not one that is translatable or applicable to all manner of qualitative phenomena. That’s one basic fact we’d do well not to forget.

 

Mac Carron, P., & Kenna, R. (2012). Universal properties of mythological networks. EPL, 99, 28002. arXiv: 1205.4324v2

Spinney, L. (2012). Human cycles: History as science. Nature, 488(7409), 24–26. PMID: 22859185

Maria Konnikova About the Author: Maria Konnikova is a writer living in New York City. She is the author of the New York Times best-seller MASTERMIND (Viking, 2013) and received her PhD in Psychology from Columbia University. Follow on Twitter @mkonnikova.







Comments (35)

  1. byronraum 12:37 pm 08/10/2012

    You are making exactly the argument against the process that turned, eventually, physics, chemistry and biology into sciences.

  2. Mark Martino 12:46 pm 08/10/2012

    Thank you for discussing this issue. When studying for my BFA from a state university, a few of my professors complained that if one is doing master’s or doctoral work in the fine arts, that work is treated as research. This was bad for them because, as artists, we’re not researching anything. We’re exploring ways to express ourselves. The practical, or impractical, effect of this is to emphasize being unique and original for its own sake at the expense of aesthetics and expression. This, for example, helped spawn the conceptual art movement. After I got a BSEE, I realized that most of these pieces weren’t art; they were poorly done science experiments.

  3. RSchmidt 1:00 pm 08/10/2012

    I can see your point but we should be careful to avoid the fallacy of the perfect solution. Certainly the humanities are more difficult to reduce than physics but I don’t agree that there is no benefit in doing so. Certainly it is questionable if we can ever answer the question, why is a certain image beautiful and another ugly, but we can quantify how people in general react to a certain image. The advertising industry is based on quantifying subjective human experience. One must always be aware of the limitations of one’s tools and processes but taking a black and white position by saying, “Humanities are not science”, seems to be just going to the other extreme. I don’t believe that the concept of science is defined by the ability to understand a subject in absolute terms. It is a process. If we don’t follow the scientific approach then which approach do we take? Are linguistics, psychology, polysci, etc, really just opinions? Certainly when we talk about humans and complex phenomena it is difficult to frame anything in terms of absolutes but that doesn’t mean a scientific approach is not warranted. Physics has to cope with chaos and Heisenberg all the time yet we wouldn’t say it isn’t a science just because it isn’t able to provide us with absolutes.

  4. Polednice 1:02 pm 08/10/2012

    I think this article is a prime example of throwing the baby out with the bath water. “Bad science is being done in the humanities, so let’s have NO science at all”. Does that really seem reasonable to you?

    The humanities are just getting to grips with the value of scientific approaches to their subjects, so it’s no wonder that there are going to be all sorts of ridiculous articles popping up. It also doesn’t help that you can get through an arts degree without a course in scientific literacy and critical thinking. You end up with people trying to use the scientific method (or criticising it) without knowing how it works and what it’s really for.

    What we should be doing is engaging with these articles skeptically, pointing out how they are wrong, and then pushing people in these fields to do better. The most striking thing about your take on this is that you don’t offer an explicit alternative – you say that things aren’t quantifiable, they’re not easily defined; but what is this but hand-waving? You’re draping things in unnecessary mystery.

    If we abandon scientific approaches to the arts and humanities, we are left with nothing better – just hordes of interpretative nonsense about things authors never wrote about and artists never painted. At least with science, academics are trying to reach objective truths (though stumbling). Without it, we’re left with more of the same useless, subjective waffle that is made redundant when the next academic fashion takes hold.

  5. saveafew 1:42 pm 08/10/2012

    Consider these two quotes:

    “Psychology is not a natural science. It’s a social science”

    “But at the end, no matter how meticulous you’ve been, history is not a science. Nor is literature. Or political science. Or ethics. Or linguistics. Or psychology. Or any other number of disciplines”

    I believe you should be more clear about when you say history (or any other example you used) is not a science, because you contradict yourself in those two quotes.

    If I understood correctly, you mean history and psychology aren’t hard (or natural) sciences, but soft (or social) sciences, and not that they aren’t sciences at all. I don’t know if you read or answer any comments, but I think a clarification about this point would be important.

  6. Mark Martino 2:17 pm 08/10/2012

    The classification of sciences as being hard or soft may not be as useful as focusing on how well the scientific process is executed for particular scientific studies. Certain subatomic physics studies seem soft to me, and that’s fine. You have to start a new branch of exploration somewhere.

    If that involves crossing over into the humanities, that’s fine too. I’d be careful about deriving conclusions about aesthetics and self expression from scientific studies, but I wouldn’t cast them aside. As an artist and an engineer, I find studies on perception useful.

  7. Bionate 3:43 pm 08/10/2012

    I don’t mean to speak for the author but I think a lot of people are misinterpreting what they actually meant. Scientific methodologies are a very powerful tool and in fact so powerful that there is a great temptation to suggest they’re the only ones necessary. The simple truth is philosophers and historians have been debating the same basic issues for hundreds if not thousands of years. Meanwhile scientific reductionism has genuinely expanded our understanding of the universe and the natural world. It is so much easier to say this gene causes alcoholism or makes you more susceptible to alcoholism. However, they tell you nothing about the disabled person or the alcoholic and that’s the real problem. When dealing with the humanities and social sciences there is simply too much beyond the purview of the reductionist answer to call it the answer.

    I trained as a biologist, which is still considered a science, but because biology interacts in complicated ways which are not always apparent, I’m comfortable with fuzzy. I also enjoy anthropology and archaeology; I mention it because archaeology is particularly relevant. There is a great deal to archaeology which lends itself to scientific methodologies and inquiry. Science can help to reconstruct an extinct culture but it will never give us the complete answer. It cannot guarantee who their gods were, what language they spoke or what they found beautiful. We can learn a lot but not everything.

    So when it comes to the humanities and social sciences my answer is yes. Yes, it has a place and a value within other disciplines and yes, it needs to be looked at within the context of the research. It is a tool, nothing more and nothing less, and just like any other tool it has its uses and it has its failings. It is therefore incumbent upon the researcher to decide where those lines are.

  8. priddseren 5:13 pm 08/10/2012

    It is nice to read an article that describes perfectly the problem with calling something like political science or sociology a science, they are not and never will be science. Even better is these so called sciences use of statistics to make it seem like their “evidence” is hard fact and using those statistics to in effect prove anything. To make matters worse, real science is being infiltrated by this ridiculous use of statistical math to bypass doing real science when they should be, such as the so called “facts” of global warming. Especially, the predicted results of global warming being treated as hard fact, when it is all based on guessing with statistical models to back it up as so called “proof” So not only is calling humanities a science and using scientific methods to prove anything in humanities a problem, we are losing real science as well as more and more hard science is bypassed because it is easier to make guesses and assumptions, then find a clever equation to “prove” it. In effect, junk science started out with ridiculous “sciences” like political science but those same methods are now causing most of real science to be junk science as well.

  9. Heidi Harley 5:38 pm 08/10/2012

    Science is about hypothesis-testing against ‘hard facts’, i.e. observations about phenomena in the real world. Those facts may or may not be numerical in character. If my model of standard American English grammar predicts that “Went you to the store?” is a well-formed sentence of American English, and I check with a native speaker of the grammar that I am trying to model and discover that they do not accept or produce that sentence, then my hypothesis (my prediction-making model) is falsified, and must be revised.

    That’s science.

  10. Mythusmage 6:53 pm 08/10/2012

    Sounds to me like somebody’s jealous.

  11. theumier 6:55 pm 08/10/2012

    I have long told anyone who would listen about my complaint about people trying to quantify the inherently unquantifiable. I see this in academia where a faculty evaluation system reduces the professor’s activities and efforts to a single number, or where somehow the citation index is supposed to be a true measure of something. And I’ve worried that people think that just because they can somehow gather some numbers, they can whup statistics on them and deduce some sort of truth. Maybe, maybe not. Do they really know whether their populations fit the statistics they are applying?

    The truth is, Ms. Konnikova’s assertion, that the so-called soft sciences get off track when they rely on numbers to feel scientific (my interpretation of what she said), is right on track. Let them be what they are, and let them apply statistics where it really belongs. If they want to be called sciences, then they should apply scientific methods, which includes NOT using mathematical tools where they really do not belong.

  12. geekphilosopher 9:12 pm 08/10/2012

    I’m rather disappointed in her confusion of humanities and social sciences at various points throughout her article. This says to me that she doesn’t really know the difference. I’m pretty sure analytical philosophers would have a bone or two to pick with her over her claim that humanities (which contains philosophy, which is the root of math and science) doesn’t need mathematical tools.

    Granted, some of the study examples she points to are a bit silly, but I disagree wholeheartedly that the humanities and social science shouldn’t try to re-evaluate themselves through the eyes of other disciplines. If anything I would say the various branches of knowledge don’t speak to each other enough, so much so that they occasionally reinvent the wheel… and worse, think they were the first to invent it.

    For example, I don’t mind evolutionary biology having a discussion about the meaning of life as long as they include the long and enduring philosophical thought on the topic in the discussion. We have a lot to teach each other if we are willing to listen.

  13. benbradley 11:34 pm 08/10/2012

    Scientific investigation has been successfully applied to “soft” fields for decades. Circa 1970 SA had an article on using word frequency to help determine if lost works, texts whose authorship was unknown, were written by Shakespeare or other great writers.

    The problem isn’t whether science should be applied to these fields, but HOW it is applied. These ‘studies’ are being done by non-scientists (though educated in their field of study), and worse, they see themselves getting a post-graduate degree in their own field and may not see themselves as “doing science.” Those experienced with both hard science AND the field of study in question are in a much better position to come up with hypotheses that are likely to lead to fruitful results. Cross-fertilization of fields can be very fruitful, but communications have to go both ways across fields.

  14. scientific earthling 11:41 pm 08/10/2012

    How about the most stupid subject that uses science as a kind of suffix to give it some standing. Religious science!

    However this is very similar to mineral oil and other mining or quarrying companies referring to themselves as Environmental companies. Pretend to be the opposite of what you are. The most vocal homophobes are probably homosexual.

  15. Chryses 7:48 am 08/11/2012

    A well articulated and balanced argument for usefully distinguishing Sciences and Humanities. Thank you.

  16. eve-lyn 9:25 am 08/11/2012

    I have both an arts degree and a science degree. I think there are subjects that are not suitable to scientific research but are researched thoroughly nonetheless; English literature, sociology, or philosophy. But then you have the shady subjects such as archeology and psychology that can have their feet in both camps. For instance carbon dating and geophysical investigation are definitely scientific fields within archeology whilst neuropsychology is surely a science. It is the method of research perhaps that defines the validity of the label science. Qualitative research is not scientific but can be worthwhile and reproducible research whilst quantitative research into emotions and the brain using MRIs, fMRIs or EEGs surely is science. Time perhaps to be flexible and remember tempora mutantur, nos et mutamur in illis…

  17. Ann V 7:17 pm 08/11/2012

    One thing I’ve been thinking about lately is the parallels between science and history – and the way that history gives us the opportunity to look back on “experiments” that have already happened. Of course, there are no controls and too many variables to consider, in a strictly scientific sense, but in all honesty, most all experiments seem to suffer from limits. Scientists do an experiment, look at what happened, and then tell a story about what happened, often aiming to identify cause and effect. Historians don’t DO the experiment, but with reflection and research, they pluck out the situation, and then tell a story about what happened, aiming to identify cause and effect. Jared Diamond’s book Collapse is an interesting melding of these two approaches.

  18. belarius 12:41 am 08/12/2012

    Writing as a scientist, I can’t emphasize enough how thoroughly the author has mischaracterized the use of statistical techniques in the social and medical sciences. It’s bad enough that the general public doesn’t understand the difference between statistical rigor and razzle dazzle, but for science journalists to fail to make the distinction is genuinely depressing.

    The author writes, “…most of these disciplines aren’t quantifiable, scientific, or precise. They are messy and complicated.” What she evidently fails to appreciate is that all sciences are messy and complicated. The vision of “science” as a world of sterile labs and perfect models is a children’s cartoon that does disservice to the genuinely difficult work that real scientists do.

    The author evidently also fails to appreciate the degree to which psychology is also a medical science, and as such, in which the stakes are high and the human cost of backing a spurious theory can be substantial. Mental health is no less a medical priority than physical health, and we have come a long way since the days when clinicians spun new theories as if they were poetry, disregarding the evidence.

    But at the heart of this profoundly misguided article is a misunderstanding of how analytic tools are used to ask empirical questions. No statistically competent person would use the word “prove” in a statistical context unless justifying a theorem. Statistics is humanity’s most sophisticated means of weighing evidence, and far more harm arises from misinterpreting statistical results than comes from misapplying statistical tests. Crucially, statistical tests never “prove” a hypothesis.

    Certainly, some areas of psychology suffer from considerable measurement problems, both in the literal sense that some measures are imprecise, and in the theoretical sense that some measures don’t measure the thing which they are intended to. Other measures in psychology are just as precise as the natural sciences, because they overtly overlap with those disciplines (biology and neuroscience, typically). Indeed, whether psychology is a natural science or not depends on which variety of psychology you are engaged in. My colleagues working in genetics and neurophysiology would be surprised to learn that theirs is a “social science.”

    I agree that qualitative analyses will always have an important place; it’s such a banal and obvious statement that I would be a fool to disagree with it. But the author makes an absurd false dichotomy, as if more quantification means less qualification. Further, her position that the proliferation of statistical methods is a step in the wrong direction is at once laughable and alarming. It seems like a dismissal arising from ignorance rather than from any familiarity with statistics.

  19. gesimsek 11:24 am 08/12/2012

    Predicting what would happen next is part of human cognition. Everybody learns from their experiences and tries to develop a response to life’s problems in order to survive and develop. Words and numbers are two different languages for understanding life and ourselves. However, there are certain matters beyond the capacities of words and numbers. Accepting this does not reduce the importance of science or language, but it is a necessary point for the big “leap.”

  20. hanmeng 8:55 pm 08/12/2012

    If Ms. Konnikova believes that mathematics applied to literary texts is a poor use of scholarly resources, she should do a little reading in what passes for scholarship in most English departments (even at Columbia).

    By the way, it’s nice to have an article that’s actually about science, even if it’s “only” by a psychologist. (Joking!)

  21. David Marjanović 9:36 am 08/13/2012

    But what is the analysis really doing? And more pressingly: what is the point? Is such work really a good use of scholarly resources (and British tax dollars, as the university that’s home to the study is publicly funded)?

    Come on. How much can it possibly have cost?

    Besides, isn’t it flat-out evil to even ask such questions when military expenses pretty much around the world are way higher than any realistic threat can justify?

    I’m skeptical of this kind of approach—and not at all sure that it adds anything to our understanding of, well, anything. What is it really capturing, for one? A social network isn’t just an immutable thing. Consider what external factors might be involved in determining what an actual social network—and a literary one especially—might look like at any given point: the culture within which each work was created, the writing and storytelling conventions of the time, whether the work is single or multi-authored in reality, a part of oral lore or written on the spot. The list goes on and on. You can’t compare the networks of War and Peace and The Corrections, though both are weighty works of literary fiction, to see if one is more “real” than the other. Literary conventions changes. Genre conventions change. Societal conventions change. And is today’s real-world social network really comparable on any number of levels to one, say, a thousand, or even five or one hundred years ago?

    All of this sounds as if the answers might be hidden in the math of the paper – or might not be. But you didn’t look; you’re just speculating about it.

    Every softer discipline these days seems to feel inadequate unless it becomes harder, more quantifiable, more scientific, more precise. That, it seems, would confer some sort of missing legitimacy in our computerized, digitized, number-happy world.

    No, you’re misunderstanding. Quantitative hypotheses are easier to falsify, easier to assess, than purely qualitative ones. They make much more useful predictions.

    most of these disciplines aren’t quantifiable, scientific, or precise. They are messy and complicated.

    This is by no means a contradiction. Biology is a glorious mess, and entire branches of statistics have been invented specifically to deal with it.

    “Many researchers fail to consider that their measurements of brains, behavior and self-reported experience are profoundly influenced by their subjects’ culture, class and experience, as well as by the situation in which the research is conducted,” Tavris writes.

    That’s true, but it’s a completely different problem. :-| Like everything else, statistics obeys the law of “garbage in, garbage out”. I’ve been spending years detecting the garbage in just one data matrix for phylogenetic analysis, and this work is still not publishable, let alone finished.

    Psychology is not a natural science. It’s a social science. And it shouldn’t try to be what it’s not.

    It should, however, try to be science.

    But you know what? The cliodynamists, just like everyone else, will only know which cyclical predictions were accurate after the fact. Forgotten will be all of those that were totally wrong.

    …And that’s suddenly a bad thing now? People like Popper and Medawar have praised this as the most important feature of science, to the point that they neglected parsimony too much! You make a testable hypothesis, and then you wait for the test. Science.

    What is really bad is when the falsified hypotheses are not ditched (for any reason, usually ignorance). That happens a lot in politics and economics; it’s why historical mistakes tend to be repeated.

    But can’t you just see the models taking on a life of their own, being used to make political statements and flashy headlines? It’s happened before.

    That can happen to scientific as well as to unscientific hypotheses. I’m not sure what your point is.

    We look at instances where the effect exists and posit a cause—and forget all the times the exact same cause led to no visible effect, or to an effect that was altogether different.

    This bias is exactly what the scientific method is designed to counter.

    It also doesn’t help that you can get through an arts degree without a course in scientific literacy and critical thinking.

    It also doesn’t help that you can graduate from highschool without having ever been taught how science works!!!

    At least with science, academics are trying to reach objective truths

    Well, technically, we’re trying to reach objective falsehoods. Truth is found, if at all, only by elimination.

    Suppose we found the truth. How could we find out that what we’ve found is indeed the truth? By comparing it to the truth, which we don’t already have?

    The classification of sciences as being hard or soft may not be as useful as focusing on how well the scientific process is executed for particular scientific studies.

    QFT!

    It is so much easier to say that this gene causes alcoholism or makes you more susceptible to alcoholism. However, such statements tell you nothing about the disabled person or the alcoholic, and that’s the real problem.

    Well, they do tell you something.

    When dealing with the humanities and social sciences there is simply too much beyond the purview of the reductionist answer to call it the answer.

    The whole is more than the sum of its parts.
    Agreed?
    Fine.
    The whole is the sum of its parts plus the sum of the interactions between those parts plus the sum of the interactions between these interactions plus the sum of the interactions between those interactions plus …
    Agreed?
    If so, how about we try to figure out the parts first, then the interactions between the parts, then the interactions between the interactions…

    That’s reductionism.

    Science can help to reconstruct an extinct culture but it will never give us the complete answer. It cannot guarantee who their gods were, what language they spoke or what they found beautiful.

    It can, however, tell us quite a lot about what they expected each other to find beautiful.

    I don’t mind evolutionary biology having a discussion about the meaning of life

    (Not going to happen, because there’s no evidence of a meaning. But I digress.)

    Scientists do an experiment [...] Historians don’t DO the experiment

    Plenty of scientists don’t do experiments. There aren’t many that can be done in paleontology, geology, astrophysics…

    The important thing for testing hypotheses is observation. Experiments are a convenient way to arrange for opportunities for observation; that helps, but it’s not necessary.

    No statistically competent person would use the word “prove” in a statistical context unless justifying a theorem.

    Similarly, most scientists go to great lengths to avoid that word when they’re not talking about math.

    Crucially, statistical tests never “prove” a hypothesis.

    They disprove the null hypothesis at a certain level of probability, and they give you that probability.

  22. 22. David Marjanović 9:38 am 08/13/2012

    Besides, isn’t it flat-out evil to even ask such questions when military expenses pretty much around the world are way higher than any realistic threat can justify?

    By which I mean… there’s apparently money to burn; how much to funnel into (any kind of) science is purely a question of political will, not of the availability of money.

  23. 23. M.Lake 2:44 pm 08/14/2012

    The semi-Manichaean fracture between the hard sciences and the humanities has been considered a solved problem since the analysis given by Wittgenstein in his “Resolutions”.

    Hard Science is based over analytical, vertical processes, meanwhile all of Humanities stand by their “serendipical”, horizontal properties.
    We can count from 0 to infinite, but we cannot create any kind of intrinsic evaluations between numbers; at the same time, we can digress on colors over their aesthetic properties, without being able to organize them in a linear, mathematical fashion.

    Beware that those aren’t just philosophical considerations, but the very foundation stones that defined the Information Theory that makes the very electronic tools we’re using physically possible in the first place (can we say “Turing Machine”, or even “Jakobson Communication Model”?).

    Hard sciences are based on sensible experience. Diverging from it makes all of our considerations about nature null and void.

    The humanities are based on dialectic experiences. Whether we want to call them Plato’s “Perfect Forms” or the frontal lobe makes little difference. Using an analytical process here makes as much sense as using religion as a means to study physics. Useless, if not dangerous.

    What those analytical approaches create is just a limitation of the frame of investigation. Yes, of course we cannot ignore the sensible experience while analyzing the dialectic one. The former influences the latter, but not vice versa. But it is not the whole picture.
    What can we do, then? The best we can do is not to forfeit the idea of a “deterministic” investigation of the humanities, of course, but to find a common ground.

    My philosophy professor had a habit of saying, “Logic is what’s left of the Scientific Method when sensible experience goes on holiday.” That, and the revolutions that information technology offers today, are probably the best path to follow.

    (By the way, sorry about any possible bad form. I’m not a native english speaker. Hope that the concept is comprensible enough.)

  24. 24. David Marjanović 9:57 am 08/15/2012

    I wrote:

    They disprove the null hypothesis at a certain level of probability, and they give you that probability.

    Actually, even this is not quite correct. Statistical tests tell you how probable it was to get your data assuming that the null hypothesis is correct. If that probability is tiny, and yet you did get your data, you’ll conclude that the null hypothesis is actually wrong…
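    This interpretation of a p-value can be sketched in a few lines of code. The example below is my own illustration, not from the thread, and the numbers (60 heads in 100 flips of a possibly biased coin) are hypothetical; it computes an exact two-sided binomial p-value using only the standard library.

    ```python
    import math

    def binomial_pmf(k, n, p):
        """Probability of exactly k successes in n trials under Binomial(n, p)."""
        return math.comb(n, k) * p**k * (1 - p)**(n - k)

    def two_sided_p_value(k, n, p=0.5):
        """P(data at least as extreme as observed | null hypothesis is true).

        'At least as extreme' here means every outcome whose probability under
        the null is no greater than that of the observed outcome.
        """
        observed = binomial_pmf(k, n, p)
        return sum(binomial_pmf(i, n, p) for i in range(n + 1)
                   if binomial_pmf(i, n, p) <= observed + 1e-12)

    # Null hypothesis: the coin is fair (p = 0.5). Observed: 60 heads in 100 flips.
    p_val = two_sided_p_value(60, 100)
    print(f"P(data this extreme | fair coin) = {p_val:.4f}")  # ~0.057, so not "proof" of bias
    ```

    Note what the number is and isn’t: it is the probability of the data given the null hypothesis, not the probability that the null hypothesis is true given the data — conflating the two is exactly the misreading being corrected above.
    
    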

    Hard Science is based over analytical, vertical processes, meanwhile all of Humanities stand by their “serendipical”, horizontal properties.

    Please explain what that means. And what do you mean by “dialectic”?

    We can count from 0 to infinite, but we cannot create any kind of intrinsic evaluations between numbers; at the same time, we can digress on colors over their aesthetic properties, without being able to organize them in a linear, mathematical fashion.

    We’re not able to do that? Then how do computer screens work?

    I’m not a native english speaker.

    Neither am I.

    Hope that the concept is comprensible enough.

    By “sensible experience”, you mean “experience of the senses”, “perception”, “observation”, right?

  25. 25. Apxeo 8:42 pm 08/15/2012

    Thanks for this post. I can’t really speak to psychology, but I definitely agree with you on the current efforts to science-up the historical disciplines. I have no objection to quantitative methods when they are applied by someone who understands the data (which, I hope, would be a minimal requirement for good science as well as good history). I just want to emphasise that none of the researchers in the historical examples you gave are trained in the field. Carron and Kenna are physicists, while Turchin (the “cliodynamicist”) is an evolutionary biologist. So it is not so much a case of a “soft” field trying to harden up, but of blissfully unaware scientists thinking that all it takes to make history a science is the correct application of method. They usually fall into the trap of just mining sources for numbers, without realising the numbers are often being generated by very different processes. The historical disciplines actually had their big science/quantitative thrill back in the 70s and 80s, leaving most scholars much more cautious about the limitations of their data and the complexities of their subjects. If there is an area where you really do need to worry about data being socially or culturally constructed, it is history.

  26. 26. renebekkers 6:11 am 08/17/2012

    “It is better to be vaguely right than exactly wrong” – Carveth Read

  27. 27. rspilman 2:18 pm 08/17/2012

    I found myself thinking of The Daughter of Time, a humble detective novel that asserted the obvious: that history is fabricated by the winners. A scientific basis for history to stand on would require a foundation, but all foundations would be speculative. So would applying current social models to characters in books or plays written years ago. But most interesting to me is Thomas Kuhn’s suggestion that this process often applies to the sciences, where the paradigm creates the result. If that is true, one might want to use the humanities to understand science, not vice versa.

  28. 28. Brian Donohue 10:29 am 08/18/2012

    Is there not some irony in the fact that the softer side of academia is attempting to cordon itself off from scientific approaches when so many in that sphere, to this day, are in the thrall to Marx, who had the hubris to attempt a scientific explanation for everything (in a ridiculously crude way with 19th century tools, of course)?

  29. 29. David Marjanović 4:43 am 08/19/2012

    “It is better to be vaguely right than exactly wrong”

    It is usually better to be wrong for the right reasons than right for the wrong reasons. The former will lead you to be right most of the time, and it’ll make you able to correct your mistakes; the latter won’t make you right ever again, except – again – by chance.

    But most interesting to me is Thomas Kuhn’s suggestion that this process often applies to the sciences, where the paradigm creates the result.

    Kuhn exaggerated a lot. Exhibit A: how quickly and thoroughly plate tectonics was accepted – much faster than the old generation died out.

    so many in that sphere, to this day, are in the thrall to Marx

    [citation needed]

  30. 30. curtott 8:56 pm 08/20/2012

    If a boat floats, it doesn’t matter how deep the water is. Similarly, you can have all the numbers you want, but it takes a humanistic understanding to show why they matter. Why would I not welcome a quantitative study of the ocean floor? It’s not impossible that up here at the surface, where the living people are, the info might tell us something about tides or currents that we can use to make life better for those we care about. Anyone who thinks that numbers can substitute for qualitative substance is drowning anyway.

  31. 31. pardnerh 6:10 pm 08/21/2012

    The scientific process should be cautious of starting with “philosophical givens.” This strategy can undermine a better explanation. Historically, science begins with “philosophical givens” often pre-determined by those in charge. On occasion, science can begin with an idea and from there pound the circle into the square and say it fits while it is in the observation of splinters that an ineffectual outcome can be determined. Hypothesis testing and the scientific process have their own historical roots to examine. Thank you for reminding us that in science “everything is related” and that statistical analysis must entertain alternatives that encompass the big picture. Novel ideas are for science what Houdini was for straitjackets. Science needs to search for better alternatives when it meets a road block, not a bigger hammer.

    “Don’t worry about someone stealing your ideas. If your ideas are any good, you’ll have to ram them down people’s throats.” – Howard H. Aiken, American computer pioneer (1900-1973)

  32. 32. johngthomas 6:58 am 09/3/2012

    This piece shows that statistical analysis can be misused in the Humanities, but drawing a line between the Humanities and the Sciences isn’t easy. The humanities are informed by the sciences and the humanities do science too. The reverse is also true.

  33. 33. iyashable 10:36 am 09/7/2012

    A considerable chunk of the gist of this article appears to me as an attempt to repeat what Daisaku Ikeda goes on and on about in his ‘The Human Revolution’ on youth culture, and perhaps in his every book. Moreover, I like the research done by Maria in penning this; I will look forward to reading more on her blog. Thanks n cheerios. :B

  34. 34. Roberto1211 5:20 am 04/1/2013

    This is a good article that clearly shows why psychology is problematic as a ‘science’. But it was this sentence that struck me most:

    ‘Every softer discipline these days seems to feel inadequate unless it becomes harder, more quantifiable, more scientific, more precise’

    This is very true indeed. But the same can be said the other way round; namely, that the harder disciplines feel they have a monopoly on what is inherently scientific and therefore true.

    Either way, there seems to be a perceptual wish being played out here. Personally, I think this distinction between ‘soft’ and ‘hard’ is rather foolish. Psychology will always have problems, whether it is deemed scientific or not, because of its complex subject matter.

    The only thing psychology can do – and is actually doing – is to try to implement method in madness. What other alternative is there?

  35. 35. jstewart57 12:15 pm 04/22/2013

    Why do we want scientific answers to questions in the humanities? Is there really just one answer to a question about Shakespeare, other than his biographical details?
    How about democracy? Is there really only one kind? Is there only one desirable kind?

