
Who matters (or should) when scientists engage in ethical decision-making?



One of the courses I teach regularly at my university is "Ethics in Science," a course that explores (among other things) what's involved in being a good scientist in one's interactions with the phenomena about which one is building knowledge, in one's interactions with other scientists, and in one's interactions with the rest of the world.

Some bits of this are pretty straightforward (e.g., don't make up data out of whole cloth, don't smash your competitor's lab apparatus, don't use your mad science skillz to engage in a campaign of super-villainy that brings Gotham City to its knees). But, there are other instances where what a scientist should or should not do is less straightforward. This is why we spend significant time and effort talking about -- and practicing -- ethical decision-making (working with a strategy drawn from Muriel J. Bebeau, "Developing a Well-Reasoned Response to a Moral Problem in Scientific Research"). Here's how I described the basic approach in a post of yore:

Ethical decision-making involves more than having the right gut-feeling and acting on it. Rather, when done right, it involves moving past your gut-feeling to see who else has a stake in what you do (or don't do); what consequences, good or bad, might flow from the various courses of action available to you; to whom you have obligations that will be satisfied or ignored by your action; and how the relevant obligations and interests pull you in different directions as you try to make the best decision. Sometimes it's helpful to think of the competing obligations and interests as vectors, since they come with both directions and magnitudes -- which is to say, in some cases where they may be pulling you in opposite directions, it's still obvious which way you should go because the magnitude of one of the obligations is so much bigger than that of the others.
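If it helps to make the vector analogy concrete, here's a toy sketch in Python: each obligation gets a direction (toward acting or toward refraining) and a magnitude reflecting how hard it pulls, and the "net pull" is just their sum. The particular obligations and weights below are invented purely for illustration, not part of the decision-making strategy itself.

```python
# Toy sketch of the "obligations as vectors" analogy.
# Direction: +1 means the obligation pulls toward acting, -1 toward refraining.
# Magnitude: how strongly it pulls (made-up numbers for illustration only).

obligations = {
    "duty to report suspected data fabrication": (+1, 5.0),
    "loyalty to a labmate who may be innocent": (-1, 1.5),
    "personal cost of getting involved": (-1, 1.0),
}

# The resultant is the sum of the directed pulls.
resultant = sum(direction * magnitude for direction, magnitude in obligations.values())

if resultant > 0:
    print(f"Net pull ({resultant:+.1f}) favors acting.")
elif resultant < 0:
    print(f"Net pull ({resultant:+.1f}) favors refraining.")
else:
    print("The pulls balance; more deliberation needed.")
```

Of course, real ethical deliberation is not this tidy -- the point of the analogy is only that obligations differ in both direction and weight, so a conflict between them isn't automatically a stalemate.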




We practice this basic strategy by using it to look at a lot of case studies. Basically, the cases describe a situation where the protagonist is trying to figure out what to do, giving you a bunch of details that seem salient to the protagonist and leaving some interesting gaps where the protagonist maybe doesn't have some crucial information, or hasn't looked for it, or hasn't thought to look for it. Then we look at the interested parties, the potential consequences, the protagonist's obligations, and the big conflicts between obligations and interests to try to work out what we think the protagonist should do.

Recently, one of my students objected to how we approach these cases.

Specifically, the student argued that we should radically restrict our consideration of interested parties -- probably to no more than the actual people identified by name in the case study. Considering the interests of a university department, or of a federal funder, or of the scientific community, the student asserted, made the protagonist responsible to so many entities that the explicit information in the case study was not sufficient to identify the correct course of action.*

And, the student argued, one interested party that it was utterly inappropriate for a scientist to include in thinking through an ethical decision is the public.

Of course, I reminded the student of some reasons you might think the public would have an interest in what scientists decide to do. Members of the public share a world with scientists, and scientific discoveries and scientific activities can have impacts on things like our environment, the safety of our buildings, what our health care providers know and what treatments they are able to offer us, and so forth. Moreover, at least in the U.S., public funds play an essential role in supporting both scientific research and the training of new scientists (even at private universities) -- which means that it's hard to find an ethical decision-making situation in a scientific training environment that is completely isolated from something the public paid for.

My student was not moved by the suggestion that financial involvement should buy the public any special consideration when a scientist is trying to decide the right thing to do.

Indeed, central to the student's argument was the idea that the interests of the public, whether with respect to science or anything else, are just too heterogeneous. Members of the public want lots of different things. Taking these interests into account could only be a distraction.

As well, the student asserted, too small a proportion of the public actually cares about what scientists are up to for the public, even if it were more homogeneous, to warrant consideration by scientists grappling with their own ethical quandaries. Even worse, the student ventured, those who do care what scientists are up to are not necessarily well-informed.

I'm not unsympathetic to the objection to the extreme case here: if a scientist felt required to somehow take into account the actual particular interests of each individual member of the public, that would make it well nigh impossible to actually make an ethical decision without the use of modeling methods and supercomputers (and even then, maybe not). However, it strikes me that it shouldn't be totally impossible to anticipate some reasonable range of interests non-scientists have that might be impacted by the consequences of a scientist's decision in various ways. Which is to say, the lack of total fine-grained information about the public, or of complete predictability of the public's reactions, would surely make it more challenging to make optimal ethical decisions, but these challenges don't seem to warrant ignoring the public altogether just so the problem you're trying to solve becomes more tractable.

In any case, I figure that there's a good chance some members of the public** may be reading this post. To you, I pose the following questions:

  1. Do you feel like you have an interest in what science and scientists are up to? If so, how would you describe that interest? If not, why not?

  2. Do you think scientists should treat "the public" as an interested party when they try to make ethical decisions? Why or why not?

  3. If you think scientists should treat "the public" as an interested party when they try to make ethical decisions, what should scientists be doing to get an accurate read on the public's interests?

  4. And, for the sake of symmetry, do you think members of the public ought to take account of the interests of science or scientists when they try to make ethical decisions? Why or why not?

If, for some reason, you feel like chiming in on these questions in the comments would expose you to unwanted blowback, you can also email me your responses (dr dot freeride at gmail dot com) for me to anonymize and post on your behalf.

Thanks in advance for sharing your view on this!

_____

*Here I should note that I view the ambiguities within the case studies as a feature, not a bug. In real life, we have to make good ethical decisions despite uncertainties about what consequences will actually follow our actions, for example. Those are the breaks.

**Officially, scientists are also members of the public -- even if you're stuck in the lab most of the time!