
Doing Good Science

Building knowledge, training new scientists, sharing a world.

Resistance to ethics instruction: considering the hypothesis that moral character is fixed.

The views expressed are those of the author and are not necessarily those of Scientific American.

This week I’ve been blogging about the resistance to required ethics coursework one sometimes sees in STEM* disciplines. One reason for this resistance is the hunch that you can’t teach a person to be ethical once they’re past a certain (pre-college) age; my previous post noted that a sizable body of research supports ethics instruction as an intervention that helps people behave more ethically.

But, as I mentioned in that post, the intuition that one’s moral character is fixed by one’s twenties can be so strong that folks don’t always believe what the empirical research says about the question.

So, as a thought experiment, let’s entertain the hypothesis that, by your twenties, your moral character is fixed — that you’re either ethical or evil by then and there’s nothing further ethics instruction can do about it. If this were the case, how would we expect scientists to respond to other scientists or scientific trainees who behave unethically?

Presumably, scientists would want the unethical members of the tribe of science identified and removed, permanently. Under the fixed-character hypothesis, the removal would have to be permanent, because there would be every reason to expect the person who behaved unethically to behave unethically again.

If we took this seriously, that would mean every college student who ever cheated on a quiz or made up data for a lab report should be barred from entry to the scientific community, and that every grown-up scientist caught committing scientific misconduct — or any ethical lapse, even those falling well short of fabrication, falsification, or plagiarism — would be excommunicated from the tribe of science forever.

That just doesn’t happen. Even Office of Research Integrity findings of scientific misconduct don’t typically lead to lifetime debarment from federal research funding. Instead, they usually lead to administrative actions imposed for a finite duration, on the order of years, not decades.

And, I don’t think the failure to impose a policy of “one strike, you’re out” for those who behave unethically is because members of the tribe of science are being held back by some naïvely optimistic outside force (like the government, or the taxpaying public, or ethics professors). Nor is it because scientists believe it’s OK to lie, cheat, and steal in one’s scientific practice; there is general agreement that scientific misconduct damages the shared body of knowledge scientists are working to build.

When dealing with members of their community who have behaved unethically, scientists usually behave as if there is a meaningful difference between a first offense and a pattern of repeated offenses. This wouldn’t make sense if scientists were truly committed to the fixed-character hypothesis.

On the other hand, it fits pretty well with the hypothesis that people may be able to learn from their mistakes — to be rehabilitated rather than simply removed from the community.

There are surely some hard cases that the tribe of science views as utterly irredeemable, but graduate students or early career scientists whose unethical behavior is caught early are treated by many as probably redeemable.

How to successfully rehabilitate a scientist who has behaved unethically is a tricky question, and not one scientists seem inclined to speak about much. Actions by universities, funding agencies, or governmental entities like the Office of Research Integrity are part of the punishment landscape, but punishment is not the same thing as rehabilitation. Meanwhile, it’s unclear whether individual actions to address wrongdoing are effective at heading off future unethical behavior.

If it takes a village to raise a scientist, it may take concerted efforts at the level of scientific communities to rehabilitate scientists who have strayed from the path of ethical practice. We’ll discuss some of the challenges with that in the next post.

*STEM stands for science, technology, engineering, and mathematics.

About the Author: Janet D. Stemwedel is an Associate Professor of Philosophy at San José State University. Her explorations of ethics, scientific knowledge-building, and how they are intertwined are informed by her misspent scientific youth as a physical chemist. Follow on Twitter @docfreeride.

Comments (2)

  1. Jerzy v. 3.0, 5:32 am 05/30/2014

    I would be more interested in making the scientific environment not conducive to cheating than in discussing the morality of scientists.

    Anyway, crime is fought with locks and policemen, not moral investigations of possible thieves.

    And I feel that science is so complicated that ethics often involves balancing probabilities: for example, correctly estimating the risk versus the benefit of a drug candidate, or, more generally, the chance that any experiment has unintended effects.

  2. ThomasB, 5:24 pm 05/30/2014

    From what I have read, psychologists have pretty much dropped the idea that a person is good or bad. Instead they talk about behavior and behavior modification.

    I am inclined to agree with this approach. I think that there are very few people who do not behave selfishly at times. There is a freeway near my home where people usually drive well over the speed limit, which increases the danger to other drivers. But unethical behavior also can be controlled: when there is a police car driving at the speed limit everyone else slows down as well.

    This seems to be the primary way to control bad behavior that is motivated by somewhat rational things such as personal gain (as opposed to highly emotional misbehavior such as physical assault): the belief that punishment is highly likely and worse than the benefit makes for ethical behavior.

    I think the issue in science is that some people think that no one will notice small ethical lapses (or, in some cases, large lapses). Be lazy about citing all the references, minimize the uncertainties in the data, etc. Who will notice or really care? Small lapses like this, I think, should be treated like traffic tickets: the embarrassment of knowing that we were caught, and enough punishment to serve as a reminder. Excessive punishment apparently is more likely to cause defensiveness and self-justification rather than correcting behavior.

    Major misbehavior, such as completely fabricating a study, may be another matter. This sort of thing is not just a momentary slip in judgment; it is something that happened over an extended period of time and shows a pretty contemptuous attitude toward the people who will be reading it. The question here is whether the scientific community involved has the time and ability to correct the behavior or whether it should simply protect itself by banning the person responsible.

