Gender bias: ethical implications of an empirical finding.



By now, you may have seen the recently published study by Moss-Racusin et al. in the Proceedings of the National Academy of Sciences entitled "Science faculty's subtle gender biases favor male students", or the nice discussion by Ilana Yurkiewicz of why these findings matter.

Briefly, the study had science faculty from research-intensive universities rate the application materials of a student candidate for a lab manager position. The researchers attached names to identical application materials at random -- some male names, some female names -- and examined how the ratings of the materials correlated with the names attached to them. What they found was that the same application materials got a higher rating (i.e., a judgment that the applicant would be more qualified for the job) when the attached name was male than when it was female. Moreover, both male and female faculty rated the same application more highly when it was attached to a male name.

It strikes me that there are some ethical implications that flow from this study to which scientists (among others) should attend:




  1. Confidence that your judgments are objective is not a guarantee that your judgments are objective, and your intent to be unbiased may not be enough. The results of this study show a pattern of difference in ratings for which the only plausible explanation is the presence of a male name or a female name for the applicant. The faculty members treated the task as an objective evaluation of candidates based on prior research experience, faculty recommendations, the applicant's statement, GRE scores, and so forth -- a sorting of the well-qualified from the less-well-qualified -- but they didn't do that sorting solely on the basis of the actual experience and qualifications described in the application materials. If they had, the ratings wouldn't have displayed the gendered split they did. The faculty in the study undoubtedly did not mean to bring gender bias to the evaluative task, but the results show that they did, whether they intended to or not.

  2. If you want to build reliable knowledge about the world, it's helpful to identify your biases so they don't end up getting mistaken for objective findings. As I've mentioned before, objectivity is hard. One of the hardest things about being objective is the fact that so many of our biases are unconscious -- we don't realize that we have them. If you don't realize that you have a bias, it's much harder to keep that bias from creeping into your knowledge-building, from the way you frame the question you're exploring to how you interpret data and draw conclusions from them. The biases you know about are easier to keep on a short leash.

  3. If a methodologically sound study finds that science faculty have a particular kind of bias, and if you are science faculty, you probably should assume that you might also have that bias. If you happen to have good independent evidence that you do not display the particular bias in question, that's great -- one less unconscious bias that might be messing with your objectivity. However, in the absence of such good independent evidence, the safest assumption to make is that you're vulnerable to the bias too -- even if you don't feel like you are.

  4. If you doubt the methodological soundness of a study finding that science faculty have a particular kind of bias, it is your responsibility to identify the methodological flaws. Ideally, you'd also want to communicate with the authors of the study, and with other researchers in the field, about the flaws you've identified in the study methodology. This is how scientific communities work together to build a reliable body of knowledge we all can use. And a responsible scientist doesn't reject the conclusions of a study just because they don't match their hunches about how things are. The evidence is how scientists know anything.

  5. If there's reason to believe you have a particular kind of bias, there's reason to examine what kinds of judgments of yours it might influence beyond the narrow scope of the experimental study. Could gender bias influence whose data in your lab you trust the most? Which researchers in your field you take most seriously? Which theories or discoveries you take to be important, and which not-so-important? If so, you have to be honest with yourself and recognize the potential for this bias to interfere with your interactions with the phenomena, and with the other scientists with whom you tackle scientific questions and build knowledge. If you're committed to building reliable knowledge, you need to find ways to expose the operation of this bias, or to counteract its effects. (Also, to the extent that this bias might play a role in the distribution of rewards like jobs or grants in scientific careers, being honest with yourself probably means acknowledging that the scientific community does not operate as a perfect meritocracy.)

Each of these acknowledgments looks small on its own, but I will not pretend that that makes them easy. I trust that this won't be a deal-breaker. Scientists do lots of hard things, and people committed to building reliable knowledge about the world should be ready to take on pieces of self-knowledge relevant to that knowledge-building. Even when they hurt.