
When focusing on individual responsibility obscures shared responsibility.

Over many years of writing about ethics in the conduct of science, I've had occasion to consider many cases of scientific misconduct and misbehavior, instances of honest mistakes and culpable mistakes. Discussions of these cases in the media and among scientists often make them look aberrant, singular, unconnected -- the Schön case, the Hauser case, Aetogate, the Sezen-Sames case, the Hwang Woo-suk case, the Stapel case, the Van Parijs case.* They make the world of science look binary, a set of unproblematically ethical practitioners with a handful of evil interlopers who need only be identified and rooted out.

I don't think this approach is helpful, either in preventing misconduct, misbehavior, and mistakes, or in mounting a sensible response to the people involved in them.

Indeed, despite the fact that scientific knowledge-building is inherently a cooperative activity, the tendency to focus on individual responsibility can manifest itself in the assignment of individual blame to people who "should have known" that another individual was involved in misconduct or culpable mistakes. It seems that something like this view -- whether imposed from without or from within -- may have been a factor in the recent suicide of Yoshiki Sasai, deputy director of the RIKEN Center for Developmental Biology in Kobe, Japan, and coauthor of retracted papers on STAP cells.


While there seems to be widespread suspicion that the lead author of the STAP cell papers, Haruko Obokata, may have engaged in research misconduct of some sort (something Obokata has denied), Sasai was not himself accused of research misconduct. However, in his role as an advisor to Obokata, Sasai was held responsible by RIKEN's investigation for not confirming Obokata's data. Sasai expressed shame over the problems in the retracted papers and had been hospitalized prior to his suicide in connection with stress over the scandal.

Michael Eisen describes the similarities here to the suicide of his own father, a researcher at NIH who was caught up in the investigation of fraud committed by a member of his lab:

[A]s the senior scientists involved, both Sasai and my father bore the brunt of the institutional criticism, and both seem to have been far more disturbed by it than the people who actually committed the fraud.

It is impossible to know why they both responded to situations where they apparently did nothing wrong by killing themselves. But it is hard for me not to place at least part of the blame on the way the scientific community responds to scientific misconduct.

This response, Eisen notes, goes beyond rooting out the errors in the scientific record and extends to rooting out all the people connected to the misconduct event, on the assumption that fraud is caused by easily identifiable -- and removable -- individuals, something that can be cut out precisely like a tumor, leaving the rest of the scientific community free of the cancer. But Eisen doesn't believe this model of the problem is accurate, and he notes the damage it can do to people like Sasai and like his own father:

Imagine what it must be like to have devoted your life to science, and then to discover that someone in your midst – someone you have some role in supervising – has committed the ultimate scientific sin. That in and of itself must be disturbing enough. Indeed I remember how upset my father was as he was trying to prove that fraud had taken place. But then imagine what it must feel like to all of a sudden become the focal point for scrutiny – to experience your colleagues and your field casting you aside. It must feel like your whole world is collapsing around you, and not everybody has the mental strength to deal with that.

Of course everyone will point out that Sasai was overreacting – just as they did with my father. Neither was accused of anything. But that is bullshit. We DO act like everyone involved in cases of fraud is responsible. We do this because when fraud happens, we want it to be a singularity. We are all so confident this could never happen to us, that it must be that somebody in a position of power was lax – the environment was flawed. It is there in the institutional response. And it is there in the whispers …

Given the horrible incentive structure we have in science today - Haruko Obokata knew that a splashy result would get a Nature paper and make her famous and secure her career if only she got that one result showing that you could create stem cells by dipping normal cells in acid – it is somewhat of a miracle that more people don’t make up results on a routine basis. It is important that we identify, and come down hard, on people who cheat (although I wish this would include the far greater number of people who overhype their results – something that is ultimately more damaging than the small number of people who out and out commit fraud).

But the next time something like this happens, I am begging you to please be careful about how you respond. Recognize that, while invariably fraud involves a failure not just of honesty but of oversight, most of the people involved are honest, decent scientists, and that witch hunts meant to pretend that this kind of thing could not happen to all of us are not just gross and unseemly - they can, and sadly do, often kill.

As I read him, Eisen is doing at least a few things here. He is suggesting that a desire on the part of scientists for fraud to be a singularity -- something that happens "over there" at the hands of someone else who is bad -- means that they will draw a circle around the fraud and hold everyone on the inside of that circle (and no one outside of it) accountable. He's also arguing that the inside/outside boundary inappropriately lumps the falsifiers, fabricators, and plagiarists with those who have committed the lesser sin of not providing sufficient oversight. He is pointing out the irony that those who have erred by not providing sufficient oversight tend to carry more guilt than do those they were working with who have lied outright to their scientific peers. And he is suggesting that needed efforts to correct the scientific record and to protect the scientific community from dishonest researchers can have tragic results for people who are arguably less culpable.

Indeed, if we describe Sasai's failure as a failure of oversight, it suggests that there is some clear benchmark for sufficient oversight in scientific research collaborations. But it can be very hard to recognize that what seemed like a reasonable level of oversight was insufficient until someone you're supervising, or with whom you're collaborating, is caught in misbehavior or a mistake. (That amount of oversight might well have been sufficient had the person you were supervising chosen to behave honestly, for example.) There are limits here. Unless you're shadowing colleagues 24/7, oversight depends on some baseline level of trust, some presumption that your colleagues are behaving honestly rather than dishonestly.

Eisen's framing of the problem, though, is still largely in terms of the individual responsibility of fraudsters (and over-hypers). This prompts arguments in response about individuals bearing responsibility for their actions and their effects (including the effects of public discussion of those actions) and about the individual scientists who are arguably victims of data fabrication and fraud. We are still in the realm of conceiving of fraudsters as "other" rather than recognizing that honest, decent scientists may be only a few bad decisions away from those they cast as monsters.

And we're still describing the problem in terms of individual circumstances, individual choices, and individual failures.

I think Eisen is actually on the road to pointing out that a focus primarily on the individual level is unhelpful when he points to the problems of the scientific incentive structure. But I think it's important to explicitly raise the alternate model, that fraud also flows from a collective failure of the scientific community and of the social structures it has built -- what is valued, what is rewarded, what is tolerated, what is punished.

Arguably, one of the social structures implicated in scientific fraud is the "first across the finish line, first to publish in a high-impact journal" model of scientific achievement. When being second to a discovery counts for exactly nothing (after lots of time, effort, and other resources have been invested), there is a strong incentive for haste and corner-cutting, and sometimes even outright fraud. This provides temptations for researchers -- and dangers for those providing oversight to ambitious colleagues who may fall prey to such temptations. But while misconduct involves individuals making bad decisions, it happens in the context of a reward structure that exists because of collective choices and behaviors. If the structures that result from those collective choices and behaviors make certain individual choices that are pathological to the shared project of building knowledge nonetheless rational for the individual under the circumstances (because they help that individual secure the reward), the community probably has an interest in examining the structures it has built.

Similarly, there are pathological individual choices (like ignoring or covering up someone else's misconduct) that seem rational if the social structures built by the scientific community don't enable a clear path forward within the community for scientists who have erred (whether culpably or honestly). Scientists are human. They get attached to their colleagues and tend to believe them to be capable of learning from their mistakes. Also, they notice that blowing the whistle on misconduct can lead to isolation of the whistleblower, not just the people committing the misconduct. Arguably, these are failures of the community and of the social structures it has built.

We might even go a step further and consider whether insisting on talking about scientific behavior (and misbehavior) solely in terms of individual actions and individual responsibility is part of the problem.

Seeing the scientific enterprise and things that happen in connection with it in terms of heroes and villains and innocent bystanders can seem very natural. Taking this view also makes it look like the most rational choice for scientists is to plot their individual courses within the status quo. The rules, the reward structures, are taken almost as if they were carved in granite. How could one person change them? What would be the point of opting out of publishing in the high-impact journals, since it would surely only hurt the individual opting out while leaving the system intact? In a competition for individual prestige and credit for knowledge built, what could be the point of pausing to try to learn something from the culpable mistakes committed by other individuals rather than simply removing those other individuals from the competition?

But individual scientists are not working in isolation against a fixed backdrop. Treating their social structures as if they were a fixed backdrop not only obscures that these structures result from collective choices but also prevents scientists from thinking together about other ways the institutional practice of science could be.

Whether some of the alternative arrangements they could create might be better than the status quo -- from the point of view of coordinating scientific efforts, improving scientists' quality of life, or improving the quality of the body of knowledge scientists are building -- is surely an empirical question. But just as surely it is an empirical question worth exploring.

______

* It's worth noticing that failures of safety are also frequently characterized as singular events, as in the Sheri Sangji/Patrick Harran case. As I've discussed at length on this blog, there is no reason to imagine the conditions in Harran's lab that led to Sangji's death were unique, and there is plenty of reason for the community of academic researchers to try to cultivate a culture of safety rather than individually hoping their own good luck will hold.