
The Ethics of Paternalism

Should policy makers intervene to make people stop doing things that are bad for them?

This article was published in Scientific American’s former blog network and reflects the views of the author, not necessarily those of Scientific American


Imagine that you are a policy maker trying to reduce the consumption of soda. Because it’s unhealthy, you’d like to discourage the people in your community from drinking too much of it.

You could put up posters explaining that it’s unhealthy to drink soda, make shops display soda in hard-to-reach places, introduce a soda tax or make it illegal for shops to sell soda. On the other hand, some of your colleagues might be telling you that if people want to drink soda, it’s not your place to stop them.

This scenario highlights a dilemma between intervening for people’s own good and letting people choose freely at the expense of poor outcomes. How much responsibility do policy makers have in making sure people make healthy choices? Behavioral science brings new insights to this long-standing philosophical debate.


Choosing what to do and which approach to take requires making a decision about paternalism, or influencing someone’s behavior for their own good. Every time someone designs policies, products or services, they make a decision about paternalism, whether they are aware of it or not. They will inevitably influence how people behave; there's no such thing as a neutral choice.

Arguments about paternalism have traditionally focused on the extreme ends of the spectrum; you either let people have complete autonomy, or you completely restrict undesirable behaviors. In reality, however, there are many options in between, and there are few guidelines about how one should navigate the complex moral landscape of influence to decide which approach is justified in a given situation.

Traditional economists may argue for more autonomy on the grounds that people will always behave in line with their own best interest. In their view, people have stable preferences and always weigh the costs and benefits of every option before making decisions. Because people know their own preferences better than anyone else does, they should be free to act autonomously to maximize their own positive outcomes.

But we know that’s not what people actually do. The real world is a complicated place to navigate, and humans use heuristics—mental rules of thumb—to get through their days. Unfortunately, these rules of thumb don’t always work optimally; there are times when people are prone to biases and don’t behave in their best long-term interest. This can result in serious detriment to their health, wealth and happiness.

Recognizing the predictable errors that occur when heuristics fail often inspires calls to restrict individual choice. Sometimes these calls even come directly from people who know they will behave in ways that violate their own long-term interests. For example, patients may tell their doctor that they know they should lose weight and that they intend to make the necessary lifestyle changes. Yet at every appointment, they have done nothing to address the problem. Despite their best intentions, they are failing to reach their goal.

The doctor, knowing about common pitfalls and ways to avoid them, could step in and help the patient change their behavior for their own good. But doing so involves some level of paternalism: it assumes that the doctor knows best and that, without intervention, the patient will fare worse. To give the patient the best chance at a good outcome, the doctor may need to limit the patient's autonomy.

Yet not everyone agrees on how doctors should limit their patients’ autonomy, or even whether they should at all. Historically, a consensus has been hard to find. Behavioral science can shed light on the right path forward in several different ways.

As applications of behavioral science to the design of processes, products and policies become increasingly common, we are discovering how powerful presentation- or effort-based interventions such as nudges and defaults can be.

For example, if soda is displayed in a less visible part of the shop, customers are less likely to buy it. We also know that incentives and penalties do not need to be financial; in the right setting, recognition or praise for performing a desired behavior can be equally or even more effective. With new interventions available, many of which preserve a person's choice, the black-and-white decision about whether to be paternalistic becomes a more nuanced question of "how much paternalism is justified in this situation?"

One of behavioral science’s big contributions is bringing its methodology to domains where evaluation has largely depended on qualitative methods. Using methods like randomized controlled trials, behavioral science provides an experimental approach to understanding the efficacy of interventions. Because each situation is complex, there is no guarantee that a given intervention will work; testing lets us know that an intervention is actually providing a benefit and not merely curtailing autonomy.
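To make that concrete, here is a minimal, illustrative sketch in Python, not drawn from the authors' research, of how one might analyze a toy randomized controlled trial of the soda-placement nudge described above. The shopper counts, purchase rates and effect size are invented for illustration.

    # Illustrative sketch only: a toy randomized controlled trial comparing
    # soda purchases when soda is displayed prominently (control) versus in
    # a less visible spot (treatment). All numbers are hypothetical.
    import random
    from statistics import NormalDist

    random.seed(42)

    def simulate_group(n, purchase_rate):
        # Each of n shoppers buys soda with probability purchase_rate.
        return [1 if random.random() < purchase_rate else 0 for _ in range(n)]

    # Hypothetical purchase rates: 30% when soda is prominent, 24% under the nudge.
    control = simulate_group(1000, 0.30)
    treatment = simulate_group(1000, 0.24)

    p_c = sum(control) / len(control)
    p_t = sum(treatment) / len(treatment)

    # Pooled two-proportion z-test: did the nudge reduce the purchase rate?
    p_pool = (sum(control) + sum(treatment)) / (len(control) + len(treatment))
    se = (p_pool * (1 - p_pool) * (1 / len(control) + 1 / len(treatment))) ** 0.5
    z = (p_c - p_t) / se
    p_one_sided = 1 - NormalDist().cdf(z)  # one-sided: treatment rate < control rate

    print(f"control: {p_c:.3f}  nudge: {p_t:.3f}  z = {z:.2f}  p = {p_one_sided:.4f}")

An effect that looks promising in one shop may not replicate in another, which is why running the trial, rather than assuming the nudge works, is what lets a policy maker weigh a measured benefit against the cost to autonomy.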

In practice, this means that we’ve gotten better at understanding the costs and benefits of policies and interventions. More evidence-driven policymaking is key to deciding how paternalistic we can and should be.

To date, the debate about the justifiability of paternalism has been largely philosophical. The same empirical approaches behavioral scientists use to understand the efficacy of interventions can also be used to uncover the factors that make people see paternalism as more or less justifiable. Beyond the benefits of intervening and the costs of taking away autonomy, are there other factors decision-makers should consider?

Our lab’s newest research, which is supported by the Robert Wood Johnson Foundation, suggests that one of the additional factors decision-makers should consider relates to the characteristics of the behavior the intervention encourages. For example, if the behavior is seen as more “sacred,” personal and essential to a person’s sense of self, it will be considered less acceptable to impinge on autonomy.

Finally, we need to remember that decision-makers have biases of their own. Just like the rest of us, they don't always perform perfectly rational cost-benefit analyses. But because they design policies that affect large numbers of people, it's even more important that their biases are kept in check.

In industries such as health care and aviation, simple tools such as checklists have provided remarkable improvements. When it comes to policy makers, a similar tool may help encourage a thoughtful approach to a difficult yet necessary ethical consideration.

Paternalism may feel like a thorny topic, but it's also an inevitable one for anyone who designs the products, services and environments that people use. The choices that we make when designing the infrastructure of society are always going to come with inherent biases. Choosing to ignore paternalism doesn't translate into upholding free choice.

Instead, we should focus on using the tools and theories of behavioral science to decide when and how paternalism can actually be used for the greater good. With thought and care, we can develop policies that have just the right amount and kind of paternalism to help ourselves achieve our goals.

About Ingrid M. Paulin

Ingrid Melvaer Paulin is a senior behavioral researcher at the Center for Advanced Hindsight, where she works on applying insights from behavioral economics to the design of products and services that improve people's health, wealth and happiness. Follow her on Twitter @ingridmpaulin.


Jenna Clark is a senior behavioral researcher at Duke University's Center for Advanced Hindsight, where she works to help people make healthy decisions in spite of themselves. She's also interested in how technology contributes to our well-being through its effect on our close personal relationships. She holds a PhD in social psychology.


Julie O'Brien is a behavioral scientist and principal at Duke University's Center for Advanced Hindsight and a co-founder of The Behavior Shop. She leads the center's Better Living and Health Initiative, where her team carries out basic and applied research on disease management, medical decision-making and preventive health behaviors. She has a PhD in social psychology and a background in product research. Follow her on Twitter @jdpobrien.
