March 16, 2014
Every now and then, in the course of a broader discussion, some philosopher will make a claim that is rightly disputed by non-philosophers. Generally, this is no big deal — philosophers have just as much capacity to be wrong as other humans. But sometimes, the philosopher’s claim, delivered with an air of authority, is not only a problem in itself but also manages to convey a wrong impression about the relation between the philosophers and non-philosophers sharing a world.
I’m going to examine the general form of one such ethical claim. If you’re interested in the specific claim, you’re invited to follow the links above. We will not be discussing the specific claim here, nor the larger debate of which it is a part.
Claim: To decide to do X is always (or, at least, should always be) a very difficult and emotional step, precisely because it has significant ethical consequences.
Let’s break that down.
“Doing X has significant ethical consequences” suggests a consequentialist view of ethics, in which doing the right thing is a matter of making sure the net good consequences (for everyone affected, whether you describe them in terms of “happiness” or something else) outweigh the net bad consequences.
To say that doing X has significant ethical consequences is then to assert that (at least in the circumstances) doing X will make a significant contribution to the happiness or unhappiness being weighed.
In the original claim, the suggestion is that the contribution of doing X to the balance of good and bad consequences is negative (or perhaps that it is negative in many circumstances), and that on this account it ought to be a “difficult and emotional step”. But does this requirement make sense?
In the circumstances in which doing X shifts the balance of good and bad consequences to a net negative, the consequentialist will say you shouldn’t do X — and this will be true regardless of your emotions. Feeling negative emotions as you are deciding to do X will add more negative consequences, but they are not necessary: a calculation of the consequences of doing X versus not doing X will still rule out doing X as an ethical option even if you have no emotions associated with it at all.
On the other hand, in the circumstances in which doing X shifts the balance of good and bad consequences to a net positive, the consequentialist will say you should do X — again, regardless of your emotions. Here, feeling negative emotions as you are deciding to do X will add more negative consequences. If these negative emotions are strong enough, they run the risk of reducing the net positive consequences — which makes the claim that one should feel negative emotions (pretty clearly implied in the assertion that the decision to do X should be difficult) a weird claim, since these negative emotions would serve only to reduce the net good consequences of doing something that produces net good consequences in the circumstances.
By the way, this also suggests, perhaps perversely, a way that strong emotions could become a problem in circumstances in which doing X would otherwise clearly bring more negative consequences than positive ones: if the person contemplating doing X were to get a lot of happiness from doing X, that happiness, counted among the consequences, could tip the balance toward doing it.
Now, maybe the idea is supposed to be that negative feelings associated with the prospect of doing X are supposed to be a brake if doing X frequently leads to more bad consequences than good ones. But I think we have to recognize feelings as consequences — as something that we need to take into account in the consequentialist calculus with which we evaluate whether doing X here is ethical or not. And that makes the claim that the feelings ought always to be negative, regardless of other features of the situation that make doing X the right thing, puzzling.
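To make the point concrete, here is a toy sketch of the consequentialist calculus with feelings counted among the consequences. The numbers and labels are entirely made up for illustration; nothing in the original claim specifies actual utilities.

```python
# Toy consequentialist calculus: sum the (made-up) utilities of an act's
# consequences; the act is ethical if doing it yields a higher net total
# than refraining from it.

def net_consequences(consequences):
    """Sum the utilities in a list of (description, utility) pairs."""
    return sum(utility for _, utility in consequences)

# Suppose doing X, in these circumstances, brings net good outcomes.
doing_x = [("benefit to others", +5), ("minor cost", -1)]
not_doing_x = [("status quo", 0)]

print(net_consequences(doing_x))  # +4: doing X beats not doing X

# Now require the agent to feel bad about deciding to do X. Those
# feelings are themselves consequences, so they enter the sum and can
# erode -- or even reverse -- the net good.
doing_x_with_required_bad_feelings = doing_x + [("agent's anguish", -5)]
print(net_consequences(doing_x_with_required_bad_feelings))  # -1: verdict flips
```

The point of the sketch is that a blanket requirement of negative feelings is not neutral bookkeeping: on a consequentialist accounting it actively subtracts from (and can flip) the very balance that made doing X the right thing.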
You could avoid worries about weighing feelings as consequences by shifting from a consequentialist ethical framework to something else, but I don’t think that’s going to be much help here.
Kantian ethics, for example, won’t pin the ethics of doing X to the net consequences, but instead it will come down to something like whether it is your duty to do X (where your duty is to respect the rational capacity in yourself and in others, to treat people as ends in themselves rather than as mere means). Your feelings are no part of what a Kantian would consider in judging whether your action is ethical or not. Indeed, Kantians stress that ethical acts are motivated by recognizing your duty precisely because feelings can be a distraction from behaving as we should.
Virtue ethicists, on the other hand, do talk about the agent's feelings as ethically relevant. Virtuous people take pleasure in doing the right things and feel pain at the prospect of doing the wrong thing. However, if doing X is right under the circumstances, a virtuous person will feel good about doing X, not conflicted about it — so the claim that doing X should always be difficult and emotional doesn't make much sense here either. Moreover, virtue ethicists describe the process of becoming virtuous as one in which behaving in virtuous ways usually precedes developing the emotional disposition to take pleasure in acting virtuously.
Long story short, it’s hard to make sense of the claim “To decide to do X is always (or, at least, should always be) a very difficult and emotional step, precisely because it has significant ethical consequences” — unless what is really being claimed is that doing X is always unethical and you should always feel bad for doing X. If that’s the claim, though, the emotions are pretty secondary.
But beyond the incoherence of the claim, here’s what really bugs me about it: It seems to assert that ethicists (and philosophers more generally) are in the business of telling people how to feel. That, my friends, is nonsense. Indeed, I’m on record prioritizing changes in unethical behavior over any interference with what’s in people’s hearts. How we behave, after all, has much more impact on our success in sharing a world with each other than how we feel.
This is not to say that I don’t recognize a likely connection between what’s in people’s hearts and how they behave. For example, I’m willing to bet that improvements in our capacity for empathy would likely lead to more ethical behavior.
But telling people they should generally feel bad for making a choice that is, under the circumstances, an ethical choice is hard to see as empathetic. If anything, requiring such negative emotions is a failure of empathy, and punitive to boot.
Clearly, there exist ethicists and philosophers who operate this way, but many of us try to do better. Indeed, it’s reasonable for you all to expect and demand that we do better.