PsySociety: Blogging At The Intersection Of Psych and Pop Culture

Psychology’s brilliant, beautiful, scientific messiness.

The views expressed are those of the author and are not necessarily those of Scientific American.


Today, sitting down to my Twitter feed, I saw a new link to Dr. Alex Berezow’s old piece on why psychology cannot call itself a science. The piece itself is over a year old, but seeing it linked again today brought up old, angry feelings that I never had the chance to publicly address when the editorial was first published. Others, like Dave Nussbaum, have already done a good job of dismantling the critiques in this article, but the fact that people are still linking to this piece (and that other pieces, even elsewhere on the SciAm Network, are still echoing these same criticisms) means that one thing apparently cannot be said enough:

Psychology is a science.

Shut up about how it’s not, already.

I clearly cannot just say that without explaining why psychology is a science, although sometimes I wish I could just join the biologists, chemists, and physicists who are never faced with having to answer such questions. So I will start by quoting the main thrust of Dr. Berezow’s argument, and then explaining why the 20-year-olds who take my Intro Social Psych class each semester could have told Berezow why he’s wrong by the end of our first week of class.

From Berezow’s piece:

Psychology isn’t science.

Why can we definitively say that? Because psychology often does not meet the five basic requirements for a field to be considered scientifically rigorous: clearly defined terminology, quantifiability, highly controlled experimental conditions, reproducibility and, finally, predictability and testability.

[To claim that psychology] is “science” is inaccurate. Actually, it’s worse than that. It’s an attempt to redefine science. Science, redefined, is no longer the empirical analysis of the natural world; instead, it is any topic that sprinkles a few numbers around.

First of all, if anyone is attempting to “redefine” science here, it is Berezow himself, with his claim that science has ever been limited to an empirical analysis of the natural world. I am an expert on psychology, not physics or chemistry, so there are many things about these fields that I do not have the expertise to fully address, and as a result my examples are sadly limited. But even with my limited perspective into the “hard sciences,” I know that there is no way anyone can claim that they all revolve around empirical analyses of observable facets of the natural world. From what I can gather, there are plenty of phenomena in the “hard sciences” — most notably, in physics — that are not observable. String theory? Quantum mechanics? I mean, for goodness’ sake — how long were physicists searching for the Higgs boson without even knowing if it actually existed?! The technology required to search for it didn’t even exist for over 30 years. And “natural”? What does natural mean? Produced by nature, so we can’t count anything chemical or man-made as scientific? A tangible substance, so we can’t count anything theoretical? Selectively using these qualifications as an excuse to exclude psychology and other “soft sciences” (excuse me while I roll my eyes so hard that I risk sending them permanently into the back of my head) from the scientific discipline without questioning the fact that “hard sciences” routinely address topics that are both “unnatural” and “unobservable” is simply lazy.

Now, let’s address the charge that psychological concepts are “unquantifiable.” Admittedly, yes — this is often the most valid critique that can be leveled at psychology (although I’m sure the many cognitive psychologists I know who deal in fMRIs, ERPs, EEGs, reaction times, and eye tracking would cringe at this blatant misrepresentation of how they operate). But when it comes to us “social psychologists” who typically study fuzzier concepts (like feeeeeeelings), we already know that we must address these potential critiques of our field in a responsible, rigorous way. And we do. With something called operationalization.

During the very first week of my Intro to Social Psychology class, I send my students home with one simple assignment — come back next class with an answer to the question, “What is Love?” I play the Haddaway song as they’re walking out the door and tell them to come back in 2 days with a way that they would define and measure “love” if they were creating their own experiments.

A silly exercise? Sure. But after they’ve handed in their responses and they return to class the following Tuesday, I put up a pie chart that shows them exactly how much their answers truly varied.

About 25% of the students tend to suggest closed-ended self-report measurements. “Ask Person A how much she loves Person B on a scale of 1 to 5.”

About 25% of the students tend to suggest more open-ended self-report measurements. “Ask Person A to write about how much she loves Person B; ask several independent coders to read these responses and then provide judgments of how much Person A really seems to love Person B.”

About 25% of the students lean towards physiological measurements. “Hook Person A up to a bunch of psychophysiological equipment. When Person B comes in the room, measure how much Person A’s heart rate goes up and how much Person A’s palms sweat.”

And, finally, about 25% of the students find a way to surprise me. I can’t remember many of the specific responses that fell into this category off the top of my head, but my all-time favorite one had something to do with making Person A imagine that Person B was about to be pushed off a cliff and then finding out how many other people Person A would be willing to sacrifice in order to save Person B’s life. I’m not sure if that experiment would make it past the IRB, but I did really appreciate the creativity.

I put some of these examples up on the board, and the students all laugh. When they first got the assignment, they all inevitably thought it was just a fun, easy way to get quiz points during their first week of class — and that it was an excuse for me to play “What Is Love?” a few too many times. Almost all of them initially think that operationalizing love is an easy question with an obvious answer. They are almost universally surprised when they see the sheer diversity of their classmates’ responses, and come to realize that the answer they thought was “obvious” was not quite so obvious to their classmates after all.

Each and every semester, my class of 100 sophomore-aged undergraduates immediately comes to understand what Dr. Berezow has apparently yet to learn: Measurement is complicated. No matter what you study. Period. It’s complicated if you’re trying to explain what exactly Schrödinger was trying to say with his infamous cat problem. It’s complicated if you’re trying to figure out why there are EIGHT DIFFERENT SCALES that measure the exact same physical concept. It’s complicated if you’re trying to create the technology that might let you detect a hypothetical particle that, if found, would validate an entire model of particle physics. So why exactly are we supposed to be surprised that it’s still complicated when you’re trying to measure something abstract? Like feeeeeeelings?

There is nothing that makes “palm sweatiness” a more valid operationalization of love than “hypothetical willingness to sacrifice other people for your partner,” even if the former is the only indicator that is “natural” or “directly observable.” Yet we can see how each and every indicator might make sense, and might tell us something unique about that person and his/her relationship. Maybe palm sweatiness is a great indicator of the amount of sex a couple will have during the next few months, but willingness to sacrifice is a better indicator of how long that couple will ultimately stay married. Does that mean that we have to throw the entire concept of “love” out the window because there are many different ways to measure it and these operationalizations have different correlations with our outcomes of interest? Does that mean that love is no longer worth empirical examination? Does that mean that it’s no longer worth trying to approximate how we can study and improve human relationships? Or does it just mean that we might have to suck it up and give some thought to the theoretical basis of our operationalizations so we can confidently justify our operational choices, recognize their weaknesses, and understand their strengths?
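To make that last point concrete: one standard piece of "giving thought to our operationalizations" is checking whether the items of a self-report scale actually cohere before averaging them into a single score. The sketch below is illustrative only: the four-item "love scale" and its data are invented, not any published instrument. Cronbach's alpha is the usual internal-consistency statistic psychologists compute for such scales.

```python
import random
import statistics

# Illustrative sketch: one validation step for a self-report
# operationalization of "love" -- internal consistency via Cronbach's
# alpha. The scale and data are simulated, not from any real study.

random.seed(7)

def cronbach_alpha(items):
    """Cronbach's alpha for a list of k item-score lists, all measured
    on the same respondents:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(summed scale)).
    """
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    sum_item_var = sum(statistics.variance(item) for item in items)
    return k / (k - 1) * (1 - sum_item_var / statistics.variance(totals))

# 200 simulated respondents; 4 items that all tap one latent "love"
# factor plus item-specific noise.
latent = [random.gauss(0, 1) for _ in range(200)]
items = [[x + random.gauss(0, 0.5) for x in latent] for _ in range(4)]

alpha = cronbach_alpha(items)
print(f"alpha = {alpha:.2f}")  # high alpha: the items cohere
```

If alpha came out low, a careful researcher would revisit the items rather than average them anyway; that is exactly the kind of justifying, weakness-spotting work described above.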

Now, psychology can — and does — run into problems when operationalization is inconsistent or abused. For example, this can be how you end up with too many “researcher degrees of freedom.” Make no mistake: If a psychologist tries ten different operationalizations of love and only one results in a significant finding, so that operationalization is the one that is chosen for publication, that is wrong. If a psychologist can’t get the finding that he/she wants with the typically accepted operationalization for a concept and goes with something totally untested for no good theoretical reason just because it happens to provide a p-value less than .05, that is wrong.
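Just how badly those researcher degrees of freedom distort things can be seen with a quick, purely hypothetical simulation: with no true effect at all, a researcher who tries ten independent operationalizations and reports whichever one clears p < .05 will "find" an effect in roughly 40% of studies (1 - 0.95^10), not 5%. Everything below is invented for illustration.

```python
import math
import random
import statistics

# Hypothetical simulation of researcher degrees of freedom: two groups
# with NO true difference, measured with several independent
# operationalizations. Publishing whichever measure reaches p < .05
# inflates the false-positive rate far beyond the nominal 5%.

random.seed(1)

def welch_p(a, b):
    """Two-sided p-value for a difference in group means (normal
    approximation to Welch's t; reasonable at n = 50 per group)."""
    se = math.sqrt(statistics.variance(a) / len(a)
                   + statistics.variance(b) / len(b))
    t = (statistics.mean(a) - statistics.mean(b)) / se
    return math.erfc(abs(t) / math.sqrt(2))

def false_positive_rate(n_measures, n_trials=2000, n=50):
    """Fraction of null studies in which AT LEAST ONE of n_measures
    operationalizations yields p < .05."""
    hits = 0
    for _ in range(n_trials):
        if any(welch_p([random.gauss(0, 1) for _ in range(n)],
                       [random.gauss(0, 1) for _ in range(n)]) < 0.05
               for _ in range(n_measures)):
            hits += 1
    return hits / n_trials

rate_one = false_positive_rate(1)    # close to the nominal 0.05
rate_ten = false_positive_rate(10)   # close to 1 - 0.95**10, about 0.40
print(rate_one, rate_ten)
```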

But operationalization itself? The creation, validation, and testing of an operational definition that will serve as the proxy for an unobservable or abstract concept?

That’s science, baby. Take it or leave it.

In the end, wouldn’t it make more sense to just appreciate the nuances involved in operationalization instead of dismissing operational definitions themselves as an inherent weakness of an entire discipline? Rather than throwing out psychology, why don’t we throw out sensationalist headlines that undermine how much hard work goes into psychological operationalization in the first place? If only we science writers — myself included — would pay more attention to how we treat operationalization when we write about psychological research, I think we could all be amazed at how much more information we would glean from these discussions. The thought, the creativity, the pure brilliance that goes into finding measurable, testable proxies for “fuzzy concepts” so we can experimentally control those indicators and find ways to step closer, every day, towards scientifically studying these abstractions that we once thought we would never be able to study — that’s beautiful. Quite frankly, it’s not just science — it’s an art. And oftentimes, the means that scientists devise to help them step closer and closer towards approximating these abstract concepts, finding different facets to measure or different ways to conceptualize our thoughts, feelings, and behaviors? That process alone is so brilliant, so tricky, and so critical that it’s often worth receiving just as much press time as the findings themselves.

At the end of the day, what a sad, simplistic view of science we must have if we want to throw the baby out with the bathwater every time this complicated, beautiful world we live in gets just a little bit messy.

Featured image available via Creative Commons from antsandneedles at DeviantArt.

About the Author: Melanie Tannenbaum is a doctoral candidate in social psychology at the University of Illinois at Urbana-Champaign, where she received an M.A. in social psychology in 2011. Her research focuses on the science of persuasion & motivation regarding political, health-related, and environmental behavior. You can follow her on Twitter @melanietbaum or visit her personal webpage.



18 Comments

  1. ultimobo 5:54 pm 08/13/2013

    as social animals we will always be interested to compare ourselves with others, and this requires numbers so we can reassure ourselves that 80% of us are smarter than average …

  2. Patrick A Scannell 5:57 pm 08/13/2013

    I agree that the present author’s approach to psychology is certainly scientific. Behaviour is certainly measurable and quantifiable. The type of psychology I think is not scientific is psycho-analysis, Jungian/Freudian psychology. This in no way discounts the validity of their approaches; in fact, I am an avid fan of their system of psycho-analysis. I do not think that a system requires scientific “validity” to have real-world validity. But both authors are in a sense right: depending on the approach, some approaches can be considered scientific, others not.

  3. zstansfi 10:55 am 08/14/2013

    Dr. Tannenbaum, I’m glad to see that you’ve finally stumbled upon at least a portion of what ticks so many people off about psychology.

    But instead of throwing around “operationalization” like it’s some magical and mystical word that washes away all sins, let’s get the facts straight.

    Psychologists and popular psychology writers have been abusing operational variables for years. Virtually all popular psychology writing and numerous high profile psychology papers make the error of effectively voiding the distinction between operational variables and the “fuzzy unquantifiable subject of interest”. These writers explicitly talk about “emotion x” instead of “emotion scale scores” or extraversion and neuroticism instead of “personality scale scores” or intelligence instead of “IQ scores” or attention instead of “attention performance measure x”, etc, etc and frequently fail to appropriately distinguish between these concepts.

    The problem is less what you study (“feeeelings”) than it is what people like yourself are saying about what is being studied, which often amounts to readily apparent nonsense. This problem is (sometimes) more egregious in psychology than in the “hard” sciences, because in these sciences the operational variables usually represent most of what we know about a concept (we don’t know what “gene expression” is like apart from our measurements). In psychology, I already have a pretty explicit understanding of what love is, and it probably conflicts with the simple operationalization of love as something that can be meaningfully approximated by Likert scale scores. (And even if this score has been shown to predict all of your other operational variables, this correlation still doesn’t necessarily imply that any of these are valid measures.)

    Thinking lay people are perfectly aware of these issues and are tired of reading obviously spurious tripe about how people think based on what some small sample of undergraduates reported on a Likert scale.

    Bring the conclusions about human psychology in line with the weaknesses of psychological measurements and stop highlighting the lowest common denominator in popular psychology writing and the problem will be solved.

    Oh, and your comparison to theoretical physics is far from apt. Physicists have been known to develop detailed mathematical theories which make accurate predictions about subsequent and robust experimental findings. Social psychologists are better known for stealing the mathematics of physics to buttress unrelated psychological theories.

    Conclusion: don’t cite junk, get skeptics off the backs of psychologists.

  4. Bashir 10:57 am 08/14/2013

    There is often a bit of idealizing of other science areas, physics in particular. I highly recommend reading a bit of history on physics, especially early 20th century with regard to atomic structure & electrons. It was messy as hell.

  5. Melanie Tannenbaum in reply to zstansfi 12:05 pm 08/14/2013


    Your comment is precisely why I wrote the following:

    “Rather than throwing out psychology, why don’t we throw out sensationalist headlines that undermine how much hard work goes into psychological operationalization in the first place? If only we science writers — myself included — would pay more attention to how we treat operationalization when we write about psychological research, I think we could all be amazed at how much more information we would glean from these discussions.”

    You seem to interpret this as an idealization of operationalization. This was not my intention. I was actually suggesting that we do the very thing that you are recommending — being more careful to make note of the operational definition being used as part of the science writing/reporting process. For example, as you note, saying “IQ scores” instead of “intelligence.”

    I do believe that we are actually on the same page here. And this is not the first time that I’ve become aware of what people dislike about psychological research, it is just the first time that I have publicly commented on it.

  6. rkipling 1:13 pm 08/14/2013

    Wo-o-o feelings
    Wo-o-o feelings ……

    But let me back up to the beginning of this story. And before I start, I don’t claim any of this as original material in case someone references one of the authors who told it before me and were better in the telling. Further, most of everyone here already knows about the evolution of the universe. The next three paragraphs are to catch up those who don’t. There is a point to it later.

    In a time before time, there was void and darkness. Then there was time, space, and other stuff. Some of the stuff cooled and formed stars. Then there was light. Stars turned hydrogen into all the heavier elements in various ways at different speeds depending on how massive they were. Larger stars blew up. This happened for a while distributing carbon, oxygen, iron, etc. into space.

    After a while longer, leftovers from exploded stars formed into next generation stars, planets, moons, and all that other stuff floating around in space. Various planetoids accreted to form the Earth. When the earth cooled enough for water to condense, chemicals dissolved in water. Not very long after that in geologic time, some of the chemicals combined into living organisms. Those organisms evolved into all sorts including some that chased each other around chomping on one another.

    These critters, made out of star stuff, got more and more complex until one group of them woke up. On that day, the universe looked up and wondered? We aren’t just the World. We are the star children. We are the universe looking back at itself. And we wonder has this happened before?

    Curious beasts that we are, we want to know who, what, when, where, why and how. The scientific method has helped with all that. As I understand it, psychology is trying to understand the why part. Why we exhibit certain behaviors and have particular feelings seems to be among the most complex of questions.

    Psychologists appear to be objectively applying the scientific method to these questions as fast as they can. Faulting them for not having already invented experimental methods on a par with chemistry displays a lack of appreciation for the level of difficulty. After all, on Carl Sagan’s 24-hour Cosmos clock, we have only been awake for a little over one second. Give them a minute. Disrespecting psychologists just ticks them off. It doesn’t hurry them along.

    What’s Love Got to Do With It? Probably something. I have a feeling they will figure that out too.

  7. mhwolf 5:32 pm 08/14/2013

    The problem, IMO is not that psychology is not a science, but that many, if not most, psychologists are not trained to be scientists. Psychologists need to have a stronger background in the physical sciences, especially biology. Moreover, psychology, as a discipline, should accept that psychology is an emergent property of human biology and neurobiology. Psychologists should also have a much stronger background in statistics and especially in experimental design. Psychologists should also accept that their subjects can, and will, cheat, consciously or unconsciously.
    On the other hand, medical doctors should also get that same training. Overall, it seems that the major entry of pseudoscience into “respectability” has been through poorly constructed and inadequately controlled experiments by psychologists or people with MD’s.

  8. theirmind 3:20 am 08/15/2013

    I don’t know whether psychology is a science or not; it depends on how you define psychology and science, or spirituality, or theology.

  9. zstansfi 7:02 pm 08/16/2013

    @Melanie Tannenbaum:

    In fact, I didn’t mean to argue that you were idealizing operationalization–rather that you are using a patronizing explanation of how psychologists measure difficult-to-quantify variables in order to straw man legitimate criticisms of psychological claims. Most criticisms of psychology do not arise from people who fail to understand that psychology uses operational variables–they come from people who are fed up with reading papers written by psychologists and pop psych writers who explicitly equate these variables with actual human psychology.

    If you want to talk about naive and blatant reductionism: this is it.

    Moreover, I haven’t read enough of your writing to know whether you also fall into this trap consistently or only infrequently (I can only recall reading one of your articles that clearly over-extends the implications of psychological research), but your network, Scientific American, is an egregious violator. I’m not saying that every piece that shows up on Sci Am’s doorstep is nonsense, but there’s plenty of junk over here.

    And, yes, I did read (and re-read) that paragraph which you quoted to me, and I do agree with it. So why don’t we start by promoting good psychology reporting on Sci Am?

    Here is a great example from this week about how not to write sensibly about psychology–it presents some reasonable research (and some not) with a clearly ridiculous spin:

  10. Melanie Tannenbaum in reply to zstansfi 2:52 pm 08/18/2013


    Well, now you’re just being rude. I’m good at what I do, which is why I’m here. And my explanation wasn’t patronizing, nor was it a straw man. The argument I was specifically rebutting was a straw man in itself, as Berezow pretended that all we do is say “I want to measure happiness. Sooo…how happy are you, scale of 1 to 5? 3.7? Cool!” This piece was not a manifesto intended to rebut every single criticism, legitimate or otherwise, of psychological science. And, as many on Twitter have noted, if any critics *actually* want to critique specific lines of research or specific psychological researchers, they would be in good company among psychologists ourselves, as we (as a group) have been spending ENDLESS amounts of time (particularly over the past 2 years) doing the same. But, again. That’s not what happened in the piece I was arguing against. What happened in the piece was, roughly, “This is one example (that I’ve made up) and I think it sounds stupid. So, psychology is stupid and not a science.” That’s patronizing. And problematic. And a straw man in itself. So don’t criticize me for arguing against this one specific point because I didn’t also decide of my own volition to spend an additional 10,000 words bringing up other criticisms that were not mentioned in the original article and rebutting those. That would be poor, rambling, unfocused writing, now, wouldn’t it?

    Finally, if you don’t like my writing or the reporting on SciAm, I suggest you stop reading. That’s an easy solution to your problem.

  11. rkipling 1:17 pm 08/20/2013

    @Melanie Tannenbaum:

    I lack qualification to evaluate your proficiency in psychology. The clarity of your writing and thinking is unmatched on this site, in my opinion.

    I can’t tell if some of these (let me call them foolish rather than the harsher description of fools) commenters actually get to you or if you are using them as foils. You need no defenders. But if they do elicit any emotional response from you, I argue they don’t merit it.

  12. thiagodaluz7 1:39 pm 08/20/2013

    Great post. I’m always a little sorry to find these things online, and wish that psychologists in Edmonton were a little more entertaining this way. Maybe entertaining is the wrong word, but it’s as close as I can manage.

  13. rkipling 5:52 pm 08/20/2013

    I’m unsure? Is @12 psychologist trash talk?

  14. Melanie Tannenbaum in reply to rkipling 8:37 pm 08/20/2013

    Don’t believe so, no. I can’t tell if it’s legit or if it’s spam given the link? But I don’t think it’s trash talk, at any rate.

  15. rkipling 11:46 pm 08/20/2013

    One of their patients perhaps? A bit incoherent. No matter.

  16. DavidPeterzellPhDPhD 7:00 pm 05/28/2014

    After 35 years of research and teaching in psychological science, psychophysics, behavioral science, and cognitive neuroscience, I am always astonished by people who should know better claiming that psychology isn’t a science, or (worse) can’t be a science. I’m glad you have the energy and patience to tackle this, and I wish you the best in your efforts to present what you know in a way that is convincing to deniers.

    I might add that it has been 180 years since the Weber brothers and Fechner began to present psychophysical research, and 154 years since the publication of Fechner’s classic Elements of Psychophysics. The work of those days still has scientific merit to those who actually take the time to examine it, and psychophysics has been part of the research programs of more than a few Nobel laureates. Psychophysical methods hardly need to be defended as scientific, and I’m guessing that the deniers are ignorant of them.

    Dave Peterzell

  17. ryaron 1:54 pm 10/9/2014

    You may go through the motions.
    This still does not make it a science.

  18. ryaron 10:02 am 11/15/2014

    Eight scales to measure temperature?
    Schroedinger’s cat experiment?
    I am always amazed by scientific illiterates
    Bringing examples from physics of which they have no clue.
    There is only one scale to measure temperature
    The thermodynamic Kelvin scale in SI units and the Rankine scale in imperial units. Both scales, however, measure the same thing.
    The laws of physics remain unchanged when we switch scales.
    Schroedinger’s cat experiment is a gedanken experiment.
    It is meant to test the Copenhagen school of quantum mechanics.
    It is not a feasible actual experiment.


