"Moral Enhancement" Is Science Fiction, not Science Fact

The idea that a pill could make us more ethical is tantalizing, but very likely wrong

This article was published in Scientific American’s former blog network and reflects the views of the author, not necessarily those of Scientific American


Many people got very excited when the first studies on the hormone oxytocin came out; it is not only easy to administer, it also affects social behavior so profoundly that it was dubbed the “moral molecule.” The reason: a 2005 study in Nature reported that increasing levels of oxytocin in the brain increases trust and social cooperation.

These effects led some people to consider whether bio-medical interventions might actually do what could not be done before—quickly and efficiently make humans more moral. Indeed, with so many troubles caused by immoral behavior, especially with populations such as psychopaths, the appeal of a neuroscientific “fix” was immense. Other drugs, such as stimulants, have also been considered to further this aim and so have brain stimulation devices, including transcranial direct current stimulation and deep brain stimulation.

By itself, the idea of improving morality is based on laudable intentions. However, two major questions need to be addressed before it becomes feasible: are the biomedical means currently at our disposal more effective than traditional methods, and are the aims clear or misguided?



Let’s start with the means. Even though many of the proposed interventions could in fact influence morally relevant behavior, the effects are either not real enhancements or, worse, they are clearly negative in terms of behavior or side effects, such as addiction. I will stick with the example of oxytocin to illustrate this point. The increase in trust and cooperation oxytocin might convey is limited to members of one’s own “tribe.” Modern societies have more than one ethnic, religious and racial group, however, so it is very troubling to realize that oxytocin actually decreases cooperation with outsiders. Even worse, it selectively promotes ethnocentrism, favoritism, parochialism, and even pre-emptive aggression towards those who are different. In a study I led together with my colleague Eric Racine, published recently in the journal Bioethics, we found that negative effects outweigh the positive effects in all of the proposed interventions, whether they were drugs or brain-stimulation devices.

But what about the aims? Even if the means are not yet there, if the aims are good, scientific progress might provide the means in due time. Well, there is a problem there as well. My past and ongoing research shows that moral judgment is based on a balance of specific intuitive evaluations: in any given situation, we quickly evaluate the person involved, the actions they are trying to accomplish and the consequences this would bring to others. The possibility of modulating or enhancing any specific part of the moral judgment process might result in unbalanced moral intuitions.

Consider a couple of examples. We generally agree that some actions, such as lying, are wrong, that some historical figures, such as Jesus, were virtuous, and that some outcomes, such as saving lives, are good. But if any of the component parts of moral judgment is enhanced so as to dominate the others, it might produce unbalanced or even unhinged moral judgments. If we used neuroscientific means to enhance our evaluation of a particular action, we might end up with a judgment that ignores other relevant moral aspects and conclude that lying to a serial killer about the whereabouts of his intended victim would be wrong. If evaluation of consequences is enhanced (which some brain stimulation devices might do), this might produce a judgment that removing the kidneys, lungs, heart and liver from an unsuspecting healthy passerby to save five people would be good. Or if our evaluation of the person is enhanced, it might lead to a “cult of personality” result—for example, denying that anyone who is married is truly virtuous, since Jesus apparently never married.

In fact, there is evidence to suggest that this is where things go wrong with psychopaths—their judgment is dominated by evaluation of consequences, which leads to inferences that are abhorrent to typical people, that is, those who have the balance of intuitions intact. This all leads to the conclusion that “moral enhancement” is science fiction, not science fact—indeed, using biomedical means to modulate morality might be dangerous.

Because drugs and brain stimulation devices are blunt instruments, using them is akin to tuning a clockwork mechanism with a hammer. All in all, traditional interventions seem to have much better chances of success: the dangers of drugs and brain stimulation are considerable, and they are likely to cause more harm than good because they endanger the balance of moral intuitions.