Welcome to

Mind Matters

where top researchers in neuroscience, psychology, and psychiatry discuss the findings and theories driving their fields. Readers can join them. We hope you will.

This week:

When Morality Is Hard to Like

Would it be moral to push this man in front of a train? Depends.
Image copyright iStockPhoto



by David Dobbs
Editor, Mind Matters
One of the juiciest pleasures of recent neuroscience is its exploration of long-mushy questions of ethics and morality. What brain networks generate moral and ethical sentiment and action? Does morality rise more from recently developed social conventions (human invention, in other words) or from brain-based cognitive systems that first developed in our evolutionary ancestors? What are the neural correlates of moral sensitivity? Of revenge? Of charity? Of hostility to moral guidelines? Reviewed below is a toothsome addition to this line of inquiry, "Damage to the prefrontal cortex increases utilitarian moral judgements" (Nature, April 19, 2007), in which an illustrious team led by the University of Iowa's Michael Koenigs and the University of Southern California's Antonio Damasio explored the tendency of certain brain-damaged patients to favor utilitarian moral judgments. Do you think it's okay to kill one person to save two or three others? It seems your answer may depend on the relative health or strength of particular brain areas. As noted by our reviewers -- Jorge Moll and Ricardo de Oliveira-Souza, of the Labs D'Or in Rio de Janeiro, and David Pizarro, of Cornell University -- the study illuminates intriguing dynamics in how we balance principle and emotion to calculate The Right Thing to Do.

Cold-blooded Morality?

Jorge Moll & Ricardo de Oliveira-Souza
Cognitive and Behavioral Neuroscience Unit
LABS-D'Or Hospital Network
Rio de Janeiro, Brazil

"Certain aspects of this situation ... seem to call for watchfulness and, if necessary, quick action on the part of the Administration. I believe therefore that it is my duty to bring to your attention the following facts and recommendations." - Albert Einstein to President Franklin Roosevelt, in a letter about the possibility of developing an atomic bomb, August 2, 1939.

Einstein's letter to Franklin Roosevelt encapsulates key aspects of moral judgment: moral sentiment (Einstein's concern about the destiny of human lives on the verge of World War II), a moral dilemma (his decision about whether to disclose scientific evidence that would ultimately lead to a deadly atomic weapon), and a utilitarian calculus (his judgment about whether more lives would be spared if America eventually built such a weapon). Whether to write this letter must have been a terrible decision to make. Nearly seven decades later, cognitive neuroscience is developing the ability to explain the brain mechanisms that underlie such moral judgments. A crucial issue that remains poorly understood, however, is the relationship between moral reasoning and emotion. The new study by Koenigs and colleagues ("Damage to the prefrontal cortex increases utilitarian moral judgements," Nature, April 19, 2007) provides important new evidence on this question. It shows that damage to the ventromedial prefrontal cortex (VMPFC), the region of the prefrontal cortex (PFC) located above our eye sockets, increases "utilitarian" choices in moral dilemmas -- judgments, that is, that favor the aggregate welfare over the welfare of fewer individuals. The study stirs already hot debates about how we juggle evidence and emotion to produce moral decisions.

Rationalizing morality

Koenigs and his collaborators compared the performance on moral decision-making tasks of six patients with bilateral VMPFC damage to that of neurologically normal controls and patients with lesions in other brain regions. The test subjects confronted decision-making scenarios in four main classes. One class contained "high-conflict" (that is, morally ambiguous), emotionally salient "personal" moral scenarios, such as whether to push a bulky stranger onto the track of a runaway trolley (thus killing the stranger) if doing so would save the lives of five workmen down the line.
A second class contained "low-conflict" (that is, morally unambiguous) but highly personal scenarios, such as whether it was moral for a man to hire someone to rape his wife so he could later comfort her and win her love again. A third category offered morally ambiguous but relatively impersonal scenarios, such as whether it was okay to lie to a guard and "borrow" a speedboat to warn tourists of a deadly impending storm. A fourth category consisted of ambiguous but non-moral scenarios, such as whether to take a train instead of the bus to arrive somewhere punctually. The VMPFC patients and controls performed alike in the low-conflict personal scenarios, unanimously responding "no" to all the clear-cut, low-ambiguity personal scenarios such as those mentioned above. But when pondering the more emotionally charged high-ambiguity situations, the VMPFC patients were much more likely than others to endorse utilitarian decisions that would lead to greater aggregate welfare. They were far more willing than others were, for instance, to push that fellow passenger in front of the train to save the workmen downtrack.

Reason and emotion in moral judgment: friends or foes?

Why should VMPFC patients show an increased preference for utilitarian choices? It's tempting to attribute this preference to a general emotional blunting -- a feature commonly described in patients with prefrontal damage. Reduced emotion would presumably make these patients more prone to make utilitarian choices. However, an earlier study that Koenigs did with VMPFC patients argues otherwise. In that study, VMPFC patients played the "Ultimatum Game": a pair of players is offered a sum of money, and Player A proposes some division of the money with Player B; if Player B rejects the proposed division, neither gets any money. For Player B, the utilitarian decision is to accept any proposal, since rejection means no gain at all.
But the VMPFC players rejected imbalanced offers more often than controls did, apparently because they allowed a sense of insult at the inequitable but profitable proposal to overrule utilitarian reason. A generalized emotional dullness thus seems an unlikely explanation for the VMPFC patients' increased utilitarian choices in this later study. A more parsimonious hypothesis is that the VMPFC is important for a special class of emotions known as prosocial moral sentiments, such as guilt, compassion and empathy. As we describe in a recent paper, these sentiments emerge from a complex integration of emotional states (sadness, fear or fondness, for example) arising from limbic areas with other mechanisms mediated by anterior sectors of the VMPFC, such as prospective thinking and the representation of multiple outcomes. Functional imaging studies support this idea, showing that the VMPFC is engaged not just when we make explicit moral judgments but also when we're passively exposed to unambiguous stimuli evocative of moral sentiments, such as a picture of a hungry child or statements describing morally salient situations. (For example, "You had a few drinks at a bar and decided to drive home. You lost control of the car and hit a mother and her baby.")

What emerges, then, is a sort of division of labor. Prosocial sentiments appear to rely heavily on the ventral (or underside) part of the prefrontal cortex, whereas the PFC's dorsolateral areas appear to be more critical for aversive emotional reactions such as anger or frustration.
These localizations could explain the otherwise puzzling results from the two Koenigs studies: the VMPFC patients playing the Ultimatum Game let emotion overrule reason because their intact dorsolateral PFC areas produced normal aversive responses such as indignation and contempt; but VMPFC patients were more utilitarian when facing difficult moral dilemmas because the damage to the ventral parts of their PFCs reduced their prosocial sentiments, giving a relative advantage to unsentimental reasoning.

Some kinds of juggling are never simple

This explanation returns us to Einstein's dilemma. Einstein's letter to Roosevelt helped prepare the United States to build the first atomic bombs. Those bombs killed tens of thousands of civilians -- but in doing so, ended World War II. Was Einstein's utilitarian choice cold-blooded, resulting from emotions being overpowered by pure cognition? We don't think so. Einstein's reason and sentiments must have been working together all along, fully reflecting the interplay -- of thought, emotion, empathy and foresight, as well as anguish and ambivalence -- that complex moral decisions entail.

Jorge Moll is director and Ricardo de Oliveira-Souza a researcher at the Cognitive and Behavioral Neuroscience Unit at Labs D'Or, a research institute in Rio de Janeiro, Brazil, where they investigate the neural underpinnings of social behavior.


The Virtue in Being Morally Wrong

David Pizarro
Psychology Department, Cornell University
Ithaca, N.Y.

"...a Utilitarian may reasonably desire, on Utilitarian principles, that some of his conclusions should be rejected by mankind generally..." - Henry Sidgwick, The Methods of Ethics

It was once obvious to most scholars that the ability to reason was what made us moral creatures. Unlike the lowly animals, we were capable of reasoning our way to a set of moral principles and (sometimes, even) adhering to them. Nonetheless, a few insightful thinkers (such as the philosophers David Hume and Adam Smith) argued that it was the "warm" feelings of sympathy and compassion, not the cold rules of logic, that seemed most responsible for our moral sense. A century after psychology's move out of the armchair and into the laboratory, the debate is receiving more attention than ever, as exemplified by the recent paper by Koenigs and colleagues, "Damage to the prefrontal cortex increases utilitarian moral judgements," in Nature. Koenigs shows that patients with damage to the ventromedial prefrontal cortex (VMPFC; an area implicated in, among other things, the proper functioning of "prosocial" emotions) are consistently utilitarian in their moral decisions, nearly always choosing acts that maximize overall benefit. The judgments of these patients, who appear unmoved by the prospect of shoving someone to his death so long as the math works out, look less like the frequent non-utilitarian judgments of normal participants and more like the responses of sociopaths.

What about ought?

As Moll and de Oliveira-Souza note above, these findings shed light on the relative contributions of reason and emotion to moral judgment. They also have implications for a more controversial question: What should our moral judgments in these scenarios be? Are the normal people in the Koenigs study making the right call by rejecting utilitarianism when the utilitarian option is emotionally daunting? This line of questioning is often brushed aside with a reminder that empirical findings should have no say over questions of ethics; crossing the line between what is and what ought to be is a no-no.
But if sociopaths and brain patients make judgments that normal people find morally abhorrent, isn't that good evidence that the normal people are right? Shouldn't we be proud of our non-utilitarian tendencies? This conclusion might hold water if it weren't for the fact that some people other than sociopaths and brain patients also stubbornly endorse utilitarianism. A number of very non-sociopathic, healthy-brained philosophers and social scientists are just as serious about their utilitarianism. For them, the emotions that make us sheepish about acting for the greater good shouldn't play a role in moral judgment at all.

Are utilitarians good roommates?

So, unlike, say, choosing a basketball team to root for, it's hard to know where to stand on utilitarianism by taking a look at its fans. Does this mean that psychology can contribute nothing to this debate? Perhaps not. Imagine, now that you know the results of the Koenigs paper, that you're in charge of creating a new species of human-like creatures from scratch. Would you strip them of the brain parts and emotional reactions responsible for our non-utilitarian tendencies, ensuring they would have no problem sacrificing a few for the sake of many? Even for utilitarians this notion can be disturbing (as exemplified by Sidgwick's statement). As one of my economist colleagues put it, if you know someone who's perfectly fine with the notion of tossing someone off a bridge (even if it's for the greater good), it's a pretty good bet that he's not the kind of person who is going to win father of the year, donate to charity, or be loyal to his team. Would you really want to grab dinner with such a person? Utilitarianism may, in the end, be the right moral theory. But we want people who are utilitarians not because they're emotionally blunted (like sociopaths and brain patients), but because they've decided that their warm, tender emotions should be set aside in a few specific cases.
Maybe some people are capable of this subtle emotional regulation. But for most of us, being good utilitarians would require sacrificing emotions that, although they might make us morally superior, would also make us jerks.

David Pizarro is an assistant professor of psychology at Cornell University, where he studies moral judgment and the influence of emotions on judgment. His current work focuses on how the emotion of disgust shapes moral and political beliefs.