Doing Good Science

Building knowledge, training new scientists, sharing a world.

The ethics of naming and shaming.


Lately I've been pondering the practice of responding to bad behavior by calling public attention to it.

The most recent impetus for my thinking about it was this tech blogger's response to behavior that felt unwelcoming at a conference (behavior that seems, in fact, to have run afoul of that conference's official written policies)*, but there are plenty of other examples one might find of "naming and shaming": the discussion (on blogs and in other media outlets) of University of Chicago neuroscientist Dario Maestripieri's comments about female attendees of the Society for Neuroscience meeting, the Office of Research Integrity's posting of findings of scientific misconduct investigations, the occasional instructor who promises to publicly shame students who cheat in his class, and actually follows through on the promise.

There are many forms "naming-and-shaming" might take, and many types of behavior one might identify as problematic enough that they ought to be pointed out and attended to. But there seems to be a general worry that naming-and-shaming is an unethical tactic. Here, I want to explore that worry.

Presumably, the point of responding to bad behavior is that it's bad -- causing harm to individuals or a community (or both), undermining progress on a project or goal, and so forth. Responding to bad behavior can be useful if it stops bad behavior in progress and/or keeps similarly bad behavior from happening in the future. A response can also be useful in calling attention to the harm the behavior does (i.e., in making clear what's bad about the behavior). And, depending on the response, it can affirm the commitment of individuals or communities to the view that the behavior in question actually is bad, and that the individuals or communities see themselves as having a real stake in reducing it.

Rules, professional codes, conference harassment policies -- these are some ways to specify at the outset what behaviors are not acceptable in the context of the meeting, game, work environment, or disciplinary pursuit. There are plenty of contexts, too, where there is no written-and-posted official enumeration of every type of unacceptable behavior. Sometimes communities make judgments on the fly about particular kinds of behavior. Sometimes, members of communities are not in agreement about these judgments, which might result in a thoughtful conversation within the community to try to come to some agreement, or the emergence of a rift that leads people to realize that the community was not as united as they once thought, or ruling on the "actual" badness or acceptability of the behavior by those within the community who can marshal the power to make such a ruling.

Sharing a world with people who are not you is complicated, after all.

Still, I hope we can agree that there are some behaviors that count as bad behaviors. Assuming we had an unambiguous example of someone engaging in such a behavior, should we respond? How should we respond? Do we have a duty to respond?

I frequently hear people declare that one should respond to bad behavior, but that one should do so privately. The idea here seems to be that letting the bad actor know that the behavior in question was bad, and should be stopped, is enough to ensure that it will be stopped -- and that the bad behavior must be a reflection of a gap in the bad actor's understanding.

If knowing that a behavior is bad (or against the rules) were enough to ensure that those with the relevant knowledge never engage in the behavior, though, it would be difficult to explain the highly educated researchers who get caught fabricating or falsifying data or images, the legions of undergraduates who commit plagiarism despite detailed instructions on proper citation methods, or the politicians who lie. If knowledge that a certain kind of behavior is unacceptable is not sufficient to prevent that behavior, responding effectively to bad behavior must involve more than telling the perpetrator of that behavior, "What you're doing is bad. Stop it."

This is where penalties may be helpful in responding to bad behavior -- get benched for the rest of the game, or fail the class, or get ejected from the conference, or become ineligible for funding for this many years. A penalty can convey that bad behavior is harmful enough to the endeavor or the community that its perpetrator needs a "time-out".

Sometimes the application of penalties needs to be private (e.g., when a law like the Family Educational Rights and Privacy Act makes applying the penalty publicly illegal). But there are dangers in only dealing with bad behavior privately.

When fabrication, falsification, and plagiarism are "dealt with" privately, it can make it hard for a scientific community to identify papers in the scientific literature that they shouldn't trust or researchers who might be prone to slipping back into fabricating, falsifying, or plagiarizing if they think no one is watching. (It is worth noting that large ethical lapses are frequently part of an escalating pattern that started with smaller ethical infractions.)

Worse, if bad behavior is dealt with privately, out of view of members of the community who witnessed the bad behavior in question, those members may lose faith in the community's commitment to calling it out. Keeping penalties (if any) under wraps can convey the message that the bad behavior is actually tolerated, that official policies against it are empty words.

And sometimes, there are instances where the people within an organization or community with the power to impose penalties on bad actors seem disinclined to actually address bad behavior, using the cover of privacy as a way to opt out of penalizing the bad actors or of addressing the bad behavior in any serious way.

What's a member of the community to do in such circumstances? Given that the bad behavior is bad because it has harmful effects on the community and its members, should those aware of the bad behavior call the community's attention to it, in the hopes that the community can respond to it (or that the community's scrutiny will encourage the bad actor to cease the bad behavior)?

Arguably, a community that is harmed by bad behavior has an interest in knowing when that behavior is happening, and who the bad actors are. As well, the community has an interest in stopping the bad behavior, in mitigating the harms it has already caused, and in discouraging further such behavior. Naming-and-shaming bad actors may be an effective way to secure these interests.

I don't think this means naming-and-shaming is the only possible way to secure these interests, nor that it is always the best way to do so. Sometimes, however, it's the available tool that seems likely to do the most good.

There's not a simple algorithm or litmus test that will tell you when shaming bad actors is the best course of action, but there are questions that are worth asking when assessing the options:

  • What are the potential consequences if this piece of bad behavior, which is observable to at least some members of the community, goes unchallenged?
  • What are the potential consequences if this piece of bad behavior, which is observable to at least some members of the community, gets challenged privately? (In particular, what are the potential consequences to the person engaging in the bad behavior? To the person challenging the behavior? To others who have had occasion to observe the behavior, or who might be affected by similar behavior in the future?)
  • What are the potential consequences if this piece of bad behavior, which is observable to at least some members of the community, gets challenged publicly? (In particular, what are the potential consequences to the person engaging in the bad behavior? To the person challenging the behavior? To others who have had occasion to observe the behavior, or who might be affected by similar behavior in the future?)

Challenging bad behavior is not without costs. Depending on your status within the community, challenging a bad actor may harm you more than the bad actor. However, not challenging bad behavior has costs, too. If the community and its members aren't prepared to deal with bad behavior when it happens, the community has to bear those costs.


* Let me be clear that this post is focused on the broader question of publicly calling out bad behavior rather than on the specific details of Adria Richards' response to the people behind her at the tech conference, whether she ought to have found their jokes unwelcoming, whether she ought to have responded to them the way she did, or what have you. Since this post is not about whether Adria Richards did everything right (or everything wrong) in that particular instance, I'm going to be quite ruthless in pruning comments that are focused on her particular circumstances or decisions. Indeed, commenters who make any attempt to use the comments here to issue threats of violence against Richards (of the sort she is receiving via social media as I compose this post), or against anyone else, will have their information (including IP address) forwarded to law enforcement.

If you're looking for my take on the details of the Adria Richards case, I'll have a post up on my other blog within the next 24 hours.

The views expressed are those of the author and are not necessarily those of Scientific American.
