Doing Good Science

Building knowledge, training new scientists, sharing a world.

Gender bias: ethical implications of an empirical finding.

The views expressed are those of the author and are not necessarily those of Scientific American.





By now, you may have seen the recently published study by Moss-Racusin et al. in the Proceedings of the National Academy of Sciences entitled “Science faculty’s subtle gender biases favor male students”, or the nice discussion by Ilana Yurkiewicz of why these findings matter.

Briefly, the study involved having science faculty from research-focused universities rate materials from potential student candidates for a lab manager position. The researchers attached names to the application materials — some of them male names, some of them female names — at random, and examined how the ratings of the materials correlated with the names that were attached to them. What they found was that the same application materials got a higher ranking (i.e., a judgment that the applicant would be more qualified for the job) when the attached name was male than when it was female. Moreover, both male and female faculty ranked the same application more highly when attached to a male name.
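The logic of this design — identical materials, a randomly attached male or female name, then a comparison of the ratings — can be sketched as a toy simulation. To be clear, this is illustrative only: the rating model, noise level, and bias magnitude below are invented for the sketch, not taken from the study's data.

```python
import random
import statistics

def rate_application(materials_quality, applicant_name, bias=0.5):
    """Toy model of a hypothetical evaluator's rating.

    The evaluator believes they are scoring only `materials_quality`,
    but an unconscious nudge tied to the attached name shifts the score.
    All numbers here are invented for illustration.
    """
    score = materials_quality + random.gauss(0, 0.3)  # noisy honest judgment
    if applicant_name == "John":
        score += bias  # unconscious boost for the male name
    return score

random.seed(42)
quality = 5.0  # the *same* application materials for every evaluator

# Randomized design: each copy of the materials gets a name at random.
john_scores = [rate_application(quality, "John") for _ in range(1000)]
jennifer_scores = [rate_application(quality, "Jennifer") for _ in range(1000)]

print("Mean rating with male name:  ", round(statistics.mean(john_scores), 2))
print("Mean rating with female name:", round(statistics.mean(jennifer_scores), 2))
```

Because the materials are identical and the names are assigned at random, any systematic gap between the two mean ratings can only come from the name — which is exactly the inference the study's design licenses.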

It strikes me that there are some ethical implications that flow from this study to which scientists (among others) should attend:

  1. Confidence that your judgments are objective is not a guarantee that your judgments are objective, and your intent to be unbiased may not be enough. The results of this study show a pattern of difference in ratings for which the only plausible explanation is the presence of a male name or a female name for the applicant. The faculty members treated the task they were doing as an objective evaluation of candidates based on prior research experience, faculty recommendations, the applicant’s statement, GRE scores, and so forth — that they were sorting out the well-qualified from the less-well-qualified — but they didn’t do that sorting solely on the basis of the actual experience and qualifications described in the application materials. If they had, the rankings wouldn’t have displayed the gendered split they did. The faculty in the study undoubtedly did not mean to bring gender bias to the evaluative task, but the results show that they did, whether they intended to or not.
  2. If you want to build reliable knowledge about the world, it’s helpful to identify your biases so they don’t end up getting mistaken for objective findings. As I’ve mentioned before, objectivity is hard. One of the hardest things about being objective is the fact that so many of our biases are unconscious — we don’t realize that we have them. If you don’t realize that you have a bias, it’s much harder to keep that bias from creeping into your knowledge-building, from the way you frame the question you’re exploring to how you interpret data and draw conclusions from them. The biases you know about are easier to keep on a short leash.
  3. If a methodologically sound study finds that science faculty have a particular kind of bias, and if you are science faculty, you probably should assume that you might also have that bias. If you happen to have good independent evidence that you do not display the particular bias in question, that’s great — one less unconscious bias that might be messing with your objectivity. However, in the absence of such good independent evidence, the safest assumption to make is that you’re vulnerable to the bias too — even if you don’t feel like you are.
  4. If you doubt the methodological soundness of a study finding that science faculty have a particular kind of bias, it is your responsibility to identify the methodological flaws. Ideally, you’d also want to communicate with the authors of the study, and with other researchers in the field, about the flaws you’ve identified in the study methodology. This is how scientific communities work together to build a reliable body of knowledge we all can use. And a responsible scientist doesn’t reject the conclusions of a study just because they don’t match your hunches about how things are. The evidence is how scientists know anything.
  5. If there’s reason to believe you have a particular kind of bias, there’s reason to examine what kinds of judgments of yours it might influence beyond the narrow scope of the experimental study. Could gender bias influence whose data in your lab you trust the most? Which researchers in your field you take most seriously? Which theories or discoveries are taken to be important, and which others are taken to be not-so-important? If so, you have to be honest with yourself and recognize the potential for this bias to interfere with your interaction with the phenomena, and with your interaction with other scientists to tackle scientific questions and build knowledge. If you’re committed to building reliable knowledge, you need to find ways to expose the operation of this bias, or to counteract its effects. (Also, to the extent that this bias might play a role in the distribution of rewards like jobs or grants in scientific careers, being honest with yourself probably means acknowledging that the scientific community does not operate as a perfect meritocracy.)

Each of these acknowledgments looks small on its own, but I will not pretend that that makes them easy. I trust that this won’t be a deal-breaker. Scientists do lots of hard things, and people committed to building reliable knowledge about the world should be ready to take on pieces of self-knowledge relevant to that knowledge-building. Even when they hurt.

Janet D. Stemwedel About the Author: Janet D. Stemwedel is an Associate Professor of Philosophy at San José State University. Her explorations of ethics, scientific knowledge-building, and how they are intertwined are informed by her misspent scientific youth as a physical chemist. Follow on Twitter @docfreeride.






Comments (10)
  1. leggedfish 1:18 am 09/28/2012

    The one thing I wondered about this experiment is what names were used. Were the male and female names comparable equivalents (with a few others mixed in so the manipulation wouldn’t be obvious)? People may just be biased against certain names, not necessarily against the gender per se. It would be interesting to repeat the experiment with that in mind.

  2. Janet D. Stemwedel 10:31 am 09/28/2012

    The supplemental information indicates that the male name used was John and the female name used was Jennifer. Both are common enough names in the US that someone with a strong bias against one of those names might have difficulties.

  3. TTLG 1:57 pm 09/28/2012

    The list of biases to acknowledge is certainly good, especially #1. But I think an important question is why these biases are coming into play in the first place. My guess is that the resumes do not completely address all the necessary qualifications for the job (some of which may not have actually been explicitly stated, like ability to get along with the boss). So the reviewers may be “filling in the blanks” using average behavior based on their past experience with people who have that person’s race/gender/etc. This may actually give the best chance of getting a good person for the job, but hardly seems fair.

    One possible way to get around this is to have the reviewers give ratings for each specific required skill instead of one general rating. This should get more cognitive rather than intuitive thinking which would hopefully tend to disengage emotional biases. If nothing else, it might at least identify the areas where the biases are getting applied, like ability to get along with coworkers. Since the reviewers are scientists, each rating could also include a confidence factor, which could also encourage them to think about just how much they really know.

  4. lpwagner 4:57 pm 09/28/2012

    What if the names were Lauren and Loren? Same-sounding name, but one usually goes with women, the other with men. This is along the lines of leggedfish’s methodological suggestion.

  5. jaia 3:56 pm 09/29/2012

    Interesting comment, leggedfish! According to the Social Security Administration, the names Jennifer and John were similar in nationwide popularity in 1990, although there may be regional differences. (I went to high school with three Jennifers and no Johns.) Still, my subjective reaction to the names is that John is a strong, serious, classic name, but Jennifer is somewhat nondescript and a bit trendy and lightweight. (Apologies to all the real Jennifers out there!) I wonder what would have happened if they had used Justin or Ryan for the male name or Sarah or Elizabeth for the female name — or just used a variety of names. Time for a follow-up study! :-)

  6. Janet D. Stemwedel 8:17 pm 09/29/2012

    Given that one’s given name is about as far out of one’s control as is one’s gender, it’s intriguing that science faculty might attach more weight to their (subjective) associations with those names than they would to job-relevant data on a job application. And if people who are supposed to be attentive to, you know, empirical data have this much trouble, how much worse is it likely to be out in the wider world?

  7. zstansfi 3:53 pm 09/30/2012

    “probably means acknowledging that the scientific community does not operate as a perfect meritocracy”

    I imagine one would have to be pretty deluded to believe this, even without a PNAS paper as empirical support. The concept of a “meritocracy” is inherently flawed, as it presumes that merit is somehow a valid construct and that it is possible to select for and reward those who have it. If we look at current research trends (e.g. http://www.nature.com/news/specials/phdfuture/index.html), it is evident that most current and future PhDs will never acquire permanent research jobs, and many competent (possibly, excellent) candidates will be selected against not because they lack “merit” but due to the vagaries of modern science training (most obviously: too much supply, too little demand).

    Gender bias is undoubtedly worth addressing as part of the bigger picture, but please let’s keep this flawed, dystopian concept of meritocracy out of the discussion.

  8. cgschmidt 8:13 am 10/3/2012

    I wonder whether gender bias of this kind isn’t overwhelmed by gender biases from context — which, I presume, the experiment removed. Let’s assume that the selection of a lab manager was intended to be fair and that only unconscious bias was at work. How would your choice be affected if you currently have (or historically have had) an imbalance in gender? I would think (and from personal experience, have seen) context of this kind to have an overpowering influence on gender in hiring decisions. And I’m not even sure it’s wrong. But whether it is or not, I do think it is a much more powerful bias than the one this experiment exposes and would nullify its importance in nearly all situations.

  9. Janet D. Stemwedel 12:49 pm 10/3/2012

    @cgschmidt I’m not clear on which direction your hypothesized larger-magnitude bias would go. Is the idea that historical gender imbalances would bias hiring towards replicating those biased ratios? Or that historical gender imbalances would bias hiring towards achieving less imbalanced ratios?

    If the latter, my guess is that the bias you have in mind cannot be that overwhelming, or we would see much more balanced gender representation among faculty in college and university science departments than we do at present.

  10. cgschmidt 1:50 am 10/4/2012

    Let’s assume a person who is making a hiring decision is consciously aware of the dangers of gender bias and is doing their best to avoid them. The experiments you cite indicate that this person likely still harbors some unconscious gender bias. I find this fully believable, and I would expect it to be evident in an experiment where the resume evaluations are made out of any context.

    However, in virtually any real life situation there will always be factors like the current gender balance of a department or the gender of recent hires. For a person who is consciously trying to avoid gender bias in hiring, I believe these factors are a very powerful influence on the evaluation of resumes. And to be clear, that effect has to be in the direction of achieving gender balance. If it were not, the source of the bias would be conscious, not unconscious.

    So, why do we not have more balanced gender representation among faculty in science departments? I don’t have data to offer, but I suspect the reasons have little to do with the unconscious bias of people trying to do the right thing, and a lot to do with factors such as conscious bias or a gender imbalance in the pool of applicants.

