Believe it or not, I do have friends who would describe themselves as not liking math, and every so often one of them will share this meme on Facebook: And then Satan said, “Put the alphabet in math.” There are different background pictures each time the meme pops up, but the text is always the same. Some people feel like abstract notation was where math classes stopped making sense to them.

But what if we didn’t have that notation? In my math history class this semester, we got to look at what math was like before we (Satan?) put the alphabet in it. Specifically, we completed the square with al-Khwarizmi, the Persian mathematician who wrote the textbook about solving linear and quadratic equations that gave us the word “algebra.” He did not use symbols to represent unknown quantities, so his book contains instructions like this:

“A square and 10 roots are equal to 39 units. The question therefore in this type of equation is about as follows: what is the square which combined with ten of its roots will give a sum total of 39? The manner of solving this type of equation is to take one-half of the roots just mentioned. Now the roots in the problem before us are 10. Therefore take 5, which multiplied by itself gives 25, an amount which you add to 39 giving 64. Having taken then the square root of this which is 8, subtract from it half the roots, 5 leaving 3. The number three therefore represents one root of this square, which itself, of course is 9. Nine therefore gives the square.”

Today, we would translate that paragraph into something like this:

x^2 + 10x = 39. Find x^2.

x^2 + 10x + 5^2 = 39 + 5^2

(x + 5)^2 = 64

x + 5 = 8

x = 3

x^2 = 9

I think most of us, including my math-phobic Facebook friends, would find the second version easier to follow and understand. As I told my class, the only thing worse than the alphabet in math is not having the alphabet in math.
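In fact, al-Khwarizmi's recipe is so mechanical that it translates directly into a few lines of code. Here is a quick sketch in Python that follows his instructions step by step for the general equation x^2 + bx = c (the function name and parameter names are mine, not his):

```python
def complete_the_square(b, c):
    """Solve x^2 + b*x = c for the positive root, following al-Khwarizmi's steps."""
    half = b / 2          # "take one-half of the roots": 5
    square = half ** 2    # "which multiplied by itself gives 25"
    total = c + square    # "an amount which you add to 39 giving 64"
    root = total ** 0.5   # "the square root of this, which is 8"
    return root - half    # "subtract from it half the roots, leaving 3"

x = complete_the_square(10, 39)
print(x, x ** 2)  # 3.0 9.0
```

Note that, like al-Khwarizmi, this sketch only finds the positive root; he did not consider negative solutions at all.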

This week, I am attending the Heidelberg Laureate Forum, a math and computer science analogue of the Lindau Nobel Laureate Meetings, and writing for the forum blog. On Tuesday, Leslie Lamport, who won computer science’s prestigious Turing Award in 2013, gave a talk called “How to write a 21st century proof” that started with the same observation my history class and I made: formulas are easier to read and parse than prose equations, so we have moved beyond prose equations when we write about math. So why do mathematicians insist on writing proofs in prose, the same way 17th-century mathematicians did?

When I saw the title of the talk, I assumed Lamport would be talking about computer proof-checking programs or even a proof-creating program like the one Timothy Gowers has worked on and written about. But Lamport’s advice was much more down-to-earth. He really was describing a way for mathematicians to write proofs that are easier to read and harder to get wrong, substitutes for long prose proofs. “When you write proofs, you’re trying to do two things,” Lamport said later in a meeting with the press. “On the one hand, you want to show that something is beautiful, but on the other hand you’re trying to show it’s true. Truth may be beauty and beauty truth, but you don’t demonstrate them the same way.”

His method, which you can read about in more detail on his website (pdf), is a hierarchical structure that doesn’t seem entirely dissimilar from the two-column proofs that most of us learned in middle school or high school geometry class, although he points out that it can handle complex problems that would be unwieldy in that two-column format. Each line is numbered, and each assertion is justified with numbers referring to previous lines and assertions.
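To give a flavor of what this looks like, here is my own loose imitation of the style applied to the completing-the-square example from earlier (see Lamport's paper for his exact conventions; the formatting below is an illustration, not his precise notation):

```text
ASSUME: x^2 + 10x = 39, with x > 0
PROVE:  x = 3
<1>1. x^2 + 10x + 25 = 64
      PROOF: Add 25 to both sides of the assumption.
<1>2. (x + 5)^2 = 64
      PROOF: Factor the left side of <1>1.
<1>3. x + 5 = 8
      PROOF: By <1>2, x + 5 = ±8; since x > 0, take the positive root.
<1>4. QED
      PROOF: Subtract 5 from both sides of <1>3.
```

Every step names exactly which earlier steps it depends on, so a reader (or the author) can check each link in the chain in isolation.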

Lamport quoted Michael Spivak’s calculus textbook: “…precision and rigor are neither deterrents to intuition, nor ends in themselves, but the natural medium in which to formulate and think about mathematical questions.” He then walked us through a proof of a corollary to the mean value theorem from that textbook, pointing out statements that he felt were not precise or rigorous enough and rewriting the proof using his technique.

Lamport says that the benefit of his technique is not only, or even primarily, in its value for readers. It’s also a much better way to find errors in your own work and prevent them from making it into published work. Although he admits that the exact overall figure is not known, he says that in one small study, 1/3 of published, peer-reviewed math papers contained a false theorem. Some theorems were false because their proofs were wrong, and some were false because they relied on earlier results whose proofs were wrong. Regardless of the reasons, false theorems should not make it to publication, and Lamport says his proof structure keeps us from letting sloppy thinking invade our writing. (Marcus Pössel, a co-blogger for the Heidelberg Laureate Forum, also wrote about Lamport’s talk and expands on some of his ideas about clear writing.)

Lamport suggested that if a journal insists on publishing only “17th century” proofs, mathematicians should write them anyway as a way to make sure the proof is watertight. And an author can post the hierarchical proof on his or her website, perhaps using hypertext to allow readers to hide or show the details in each step or to include summaries of the main ideas in each step of the proof.

In the meeting with the press, Lamport talked about another benefit of the technique that he didn’t have time for in the talk. He says that once you have a hierarchical proof, you can use pieces of it in new proofs very easily. For example, if you want to change one assumption slightly or focus on solutions that have different properties, you will be able to tell very easily which parts of the proof can stay the same and which parts will need to change.

The talk generated a lot of questions, and I heard conference attendees chatting about it all day. Not all mathematicians in attendance were thrilled about a computer scientist, even one with a Ph.D. in math and a Turing award, telling them how to write proofs. Some of them felt very strongly that hierarchical proofs would damage the beauty of their work. Some felt that perhaps the technique would work for simple proofs in a pedagogical context but couldn’t possibly work for more complicated proofs in a research context. I think Lamport would argue that if you think you can’t write your proof in this form, you don’t understand the proof well enough.

Personally, I found Lamport’s arguments quite compelling. The difference in clarity between algebra explained in words and algebra explained in symbols is like night and day, and if Lamport is right, switching to this proof technique yields a similar jump in clarity. Change is hard, but I am interested in the potential benefits to my teaching, thinking, and communication.

*This blog post originates from the official blog of the 2nd **Heidelberg Laureate Forum** (HLF) which takes place in Heidelberg, Germany, September 21 – 26, 2014. 24 Abel, Fields, and Turing Laureates will gather to meet a select group of 200 young researchers.*