Roots of Unity
Mathematics: learning it, doing it, celebrating it.

In Praise of Proofs by Contradiction that Aren’t

The views expressed are those of the author and are not necessarily those of Scientific American.

Opposing arrows, a common "punctuation mark" at the end of a proof by contradiction.

If you don’t know what to do, do something. That’s one of my mottos when I teach math (and it’s probably good life advice too). Last year, I taught introductory analysis (basically calculus with the juicy bits left in), one of the first proof-oriented classes students take. Writing proofs is hard, and sometimes the hardest part is starting. After you write down the hypothesis, what comes next? What is the something my students should do?

If they don’t have any better ideas, sometimes the best way to start is by contradiction. Proof by contradiction is one of the major proof techniques in mathematics. To prove the statement “A implies B,” a proof by contradiction assumes that both A and “not B” are true, and then shows that this is impossible.

For example, how do you show that there are infinitely many prime numbers? One way to start, often attributed to Euclid, is to assume you have a finite list of all the prime numbers and work until you have a contradiction. Specifically, you multiply all your primes together, add 1, and show that this resulting number is not divisible by any primes on the list. That means your list was incomplete, so there must be infinitely many primes.
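The argument is concrete enough to run. Here is a minimal Python sketch of it (my own illustration, not from the article; the helper names are hypothetical): given any finite list of primes, the product of the list plus 1 must have a prime factor missing from the list.

```python
# Illustrative sketch (helper names are my own): for any finite list of
# primes, (product of the list) + 1 has a prime factor not on the list.

def smallest_prime_factor(n):
    """Return the smallest prime factor of n >= 2 by trial division."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n itself is prime

def missing_prime(primes):
    """Given a finite list of primes, produce a prime not on the list."""
    product = 1
    for p in primes:
        product *= p
    n = product + 1
    # n leaves remainder 1 when divided by any prime on the list, so its
    # smallest prime factor cannot appear on the list.
    return smallest_prime_factor(n)

print(missing_prime([2, 3, 5]))             # 2*3*5 + 1 = 31, itself prime
print(missing_prime([2, 3, 5, 7, 11, 13]))  # 30031 = 59 * 509, so 59
```

Note that the product plus 1 need not itself be prime: 30031 is composite, but its prime factors 59 and 509 are still primes missing from the original list.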

Another commonly given example of proof by contradiction is Cantor’s diagonalization argument showing that the set of real numbers is “bigger” than the set of counting numbers. See Vi Hart’s video about different sizes of infinity for a good explanation of diagonalization.
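The diagonal construction can also be sketched in code (again my own illustration, with hypothetical names): represent each listed real number by its sequence of decimal digits, and build a new number whose nth digit differs from the nth digit of the nth listed number.

```python
# Illustrative sketch (names are my own): each "real number" is a function
# from digit position to digit; an enumeration maps index n to the n-th
# listed number. The diagonal number differs from every listed number.

def diagonal_digit(enumeration, n):
    """Digit n of the diagonal number: chosen to differ from digit n
    of the n-th listed number."""
    d = enumeration(n)(n)
    # Pick 5 unless the listed digit is 5; avoiding 0 and 9 sidesteps
    # ambiguities like 0.4999... = 0.5000...
    return 5 if d != 5 else 6

# A sample enumeration: listed number n has digit (n + k) % 10 at position k.
sample = lambda n: (lambda k: (n + k) % 10)

# The diagonal number disagrees with listed number n at position n,
# so it cannot appear anywhere in the enumeration.
digits = [diagonal_digit(sample, n) for n in range(8)]
print(digits)  # [5, 5, 5, 5, 5, 5, 5, 5]
```

The same function works for any enumeration handed to it, which is exactly the direct (non-contradiction) reading of the proof: whatever list you supply, the diagonal number is not on it.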

The funny thing is, neither the proof of the infinitude of primes nor the diagonalization proof is really a proof by contradiction. (For an interesting analysis of the widespread myth that Euclid’s proof is a proof by contradiction, see the disappointingly paywalled article Prime Simplicity by Michael Hardy and Catherine Woodgold.) In the case of primes, the proof works perfectly well on any old finite list of prime numbers. You don’t need to assume that the list is complete to run the argument. Similarly, in the case of diagonalization, the proof shows that any function from the counting numbers to the real numbers won’t hit all of the real numbers. We don’t have to assume there is a function that will hit all the real numbers in the first place. The difference is subtle and fun to roll around in your head.

If I didn’t already know how to prove that there are infinitely many prime numbers or that the cardinality of the counting numbers is smaller than the cardinality of the real numbers, I would try contradiction first. It’s a little hard for me to see how to find these proofs without starting from contradiction, even though contradiction is not necessary for either argument.

I often encourage my students to try contradiction if they get stuck on a proof. I think it helps get ideas flowing. Why? I’m not really sure, but maybe it gives us something concrete to hold onto. When we’re first working with a new mathematical idea, it can seem hopelessly abstract. But by trying a proof by contradiction, we have in some sense a tangible “villain” to fight against. Of course, contradiction can’t always help us find a proof, but it’s better to do something, to write down some ideas and whittle away at them until you get to the heart of an argument, than to sit around twiddling your thumbs because you don’t know where to start.

About the Author: Evelyn Lamb is a postdoc at the University of Utah. She writes about mathematics and other cool stuff. Follow on Twitter @evelynjlamb.




1. josephabrahamson 11:05 am 08/29/2014

One reason a proof by contradiction might be fruitful is that in some more refined logics it’s actually weaker. In intuitionistic logic, you can (trivially) write

A -> not (not A)

but you cannot write

not (not A) -> A

This can be seen as arising “because” intuitionistic logic rejects the law of excluded middle: if you assume LEM for A, then you can write

LEM A -> not (not A) -> A

or it can also be seen as a statement of the non-constructibility of proof by contradiction. Ultimately, it means that more things are “true” in the sense that you can construct a proof of not (not A) than are “true” in the sense that you can construct a direct proof of A.

In a logic which doesn’t distinguish between constructibility and non-constructibility, that ought to suggest there are more avenues to a proof if you go via contradiction.

2. josephabrahamson 11:07 am 08/29/2014

Here’s a model of my above statements (implemented in Haskell)

{-# LANGUAGE EmptyCase #-}

-- A type with no values: the empty proposition, “false.”
data Void

-- Negation: a proof of “not a” is a function from a into the empty type.
type Not a = a -> Void

-- Ex falso quodlibet: from a value of the empty type, anything follows.
absurd :: Void -> a
absurd v = case v of {}

-- Double-negation introduction holds constructively.
con :: a -> Not (Not a)
con a f = f a

-- The law of excluded middle for a: either a holds or its negation does.
type LEM a = Either a (Not a)

-- Given LEM, double-negation elimination becomes derivable.
noc :: LEM a -> Not (Not a) -> a
noc lem f = case lem of
  Left a  -> a
  Right z -> absurd (f z)

3. VectorPosse 11:46 pm 09/15/2014

Proof by contradiction can be easier for two reasons. (1) There are more hypotheses if you get to assume “A” and “not B”. (2) There are some hypotheses that are hard to work with, but their negations are easy. For example, starting from the fact that a number is irrational gives you no information. (All you know is that it can’t be written as m/n.) But starting with a rational number gives you a starting point (because you get to write down and use m/n).

4. VectorPosse 11:49 pm 09/15/2014

One pet peeve, though, is when students do a proof by contrapositive but frame it as a proof by contradiction. When trying to prove p implies q, students (and some textbooks!) will say, “Assume p and not q.” Then the proof will proceed but never explicitly use p. The conclusion is “not p,” and the “proof by contradiction” is complete because p and “not p” cannot both be true. But if we never needed p, why not just prove that “not q” implies “not p” from the start?
