In the latest issue of the New York Review of Books, Freeman Dyson has a nice review of Mario Livio's readable book on scientific blunders committed by great scientists. The book is important reading for anyone who wants to understand the true history of science as a process of fits, starts, blind alleys, occasional great successes and, of course, many blunders. Livio focuses on five famous scientists who committed important mistakes: Charles Darwin, Lord Kelvin, Linus Pauling, Fred Hoyle and Albert Einstein. These mistakes sometimes set the field back, but they also inspired other scientists to keep looking and discovering new things. Scientists often build their theories and discoveries on the backs of other failed theories and discoveries. Just as respectable civilizations are often built on the bones of dead ones, respectable science is often built on the bones of scientific failures. And just as the natives are forgotten while the settlers are celebrated, scientific failures get ignored in favor of successes, even when they are essential to explaining the very existence of those successes.
Each of the blunderers in Livio's story blundered in a different manner. Darwin came up with a wrong theory of blending inheritance that he himself realized fell acutely short of explaining real-world data. Mendel then discovered the right rules for inheritance and initiated a bona fide revolution in science. As Dyson explains, Mendel could improve on Darwin in no small part because he understood statistics and the law of averages better than Darwin, who professed himself mathematically deficient. Lord Kelvin made his big blunder when he came up with ages for both the sun and the earth that were wrong, and far too short, thereby setting up a significant obstacle to Darwin's theory of natural selection, which demanded vast tracts of geological time for the evolution of species. The biological evidence was too overwhelming for Darwin to admit defeat, but he clearly could not answer Kelvin's challenge. It was only in the middle of the twentieth century, when the fission and fusion processes powering radioactivity and the sun were worked out, that Kelvin's question was posthumously addressed.
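To see why Kelvin's number came out so short, it helps to redo the sort of back-of-envelope calculation he made in 1862. The sketch below is not from Livio's book or Dyson's review; it models the earth as a body at an initially uniform temperature cooling purely by conduction, with rough textbook values for the initial temperature, the thermal diffusivity of rock and the measured surface temperature gradient. Because conduction is the only heat source in the model, the answer has to be small; radioactive heating, unknown in Kelvin's day, is what breaks the calculation.

```python
import math

# Kelvin-style conductive cooling estimate (a sketch, not Kelvin's exact figures).
# For a half-space initially at uniform temperature T0 and cooling by conduction,
# the surface gradient decays as dT/dz = T0 / sqrt(pi * kappa * t), which inverts to
#   t = T0**2 / (pi * kappa * G**2)

T0 = 3900.0      # assumed initial temperature, deg C (Kelvin used roughly 7000 F)
kappa = 1.2e-6   # thermal diffusivity of rock, m^2/s (rough textbook value)
G = 0.0365       # measured surface gradient, deg C per meter (about 1 F per 50 ft)

t_seconds = T0 ** 2 / (math.pi * kappa * G ** 2)
t_years = t_seconds / 3.156e7  # seconds in a year

print(f"Kelvin-style age of the earth: {t_years / 1e6:.0f} million years")
```

With these inputs the model yields an age on the order of a hundred million years, a figure of the same magnitude as Kelvin's published estimates and far less than the billions of years natural selection required.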
Fred Hoyle committed his major blunder and held on to a wrongheaded theory of the origin of the universe until his death. An early reason for Hoyle's recalcitrance in accepting the Big Bang was what he saw as the sheer audacity and fantasy of the theory, with the whole universe seemingly conjured up from nothing in a flash. This aspect of Hoyle's thinking reminds me of Arthur Eddington's failure to take Subrahmanyan Chandrasekhar's theory of gravitational collapse seriously because he was convinced that there must be a law of nature preventing such a collapse. But the laws of nature are immune to our wishful thinking. The question of what came before the universe is still something we grapple with, but every important discovery since the 1964 detection of the cosmic microwave background has validated the Big Bang theory. Hoyle was certainly brilliant enough to have understood this evidence, and he demonstrated his great scientific talents when he co-authored a seminal paper on nucleosynthesis with three other scientists. Hoyle thus stands as a curious example of someone who was in equal parts a reactionary and a maverick, not afraid to speculate on everything from extraterrestrial life to artificial intelligence but somehow never warming up to a revolutionary theory of the universe, even when it was supported by copious evidence.
Linus Pauling's mistake was of a different kind, and rather hard to understand since it showed an embarrassing lack of knowledge of fundamental chemistry. Coming from someone widely considered the greatest chemist of the century, this was odd to say the least. After publishing his groundbreaking papers on the structure of proteins, Pauling turned toward DNA and got embroiled in a race with James Watson and Francis Crick to decipher the structure of this all-important molecule. Although it was the duo who perceived it much more as a race, Pauling certainly understood the importance of the problem. And then he famously committed an elementary chemical mistake. He published a paper in which the phosphates in DNA pointed inward and were held together by hydrogen bonds. Any good college chemistry student would know that at the pH inside the body (7.4) such hydrogen bonds would not exist: the oxygen atoms would be negatively charged, making them far more likely to point outward into the ionic embrace of water. In his memorable book "The Double Helix", James Watson describes how his jaw dropped when he saw the mistake Pauling had made; ironically, it was by consulting Pauling's classic "College Chemistry" textbook that he and Crick confirmed the error. As Watson put it, a graduate student under Pauling who made the same mistake would probably have been considered persona non grata at Caltech.
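The chemistry here is quantitative enough to check on the back of an envelope. The following sketch is my own illustration, not anything from the book: the phosphodiester groups in the DNA backbone have a pKa of roughly 1 to 2 (a rough textbook value), and the Henderson-Hasselbalch relation then tells us what fraction of them still carry the proton that Pauling's hydrogen bonds would need at physiological pH.

```python
# Back-of-envelope check (my illustration, not from Livio's book) of why
# Pauling's inward-pointing, hydrogen-bonded phosphates fail at body pH.
# Henderson-Hasselbalch: fraction protonated = 1 / (1 + 10**(pH - pKa))

pKa = 1.5  # assumed pKa of a DNA backbone phosphate group (rough textbook value)
pH = 7.4   # physiological pH

protonated_fraction = 1.0 / (1.0 + 10 ** (pH - pKa))
print(f"Fraction of phosphates protonated at pH 7.4: {protonated_fraction:.1e}")
```

The fraction comes out on the order of one in a million: essentially every phosphate is ionized and negatively charged, which is exactly why the oxygens belong on the outside, facing water, rather than hydrogen-bonded in the core.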
Why did the greatest chemist of the twentieth century miss such an elementary chemical fact about DNA? Even today the reasons are not completely clear. One reason could be that by the early 1950s Pauling was much more concerned with nuclear disarmament than with serious science, although he kept on publishing prolifically until his death. He could simply have been too distracted to pursue the DNA structure with the kind of full-time zeal that Watson and Crick did. Another reason is that he just missed the obvious. While this may sound surprising, it's a mistake that famous scientists who think out of the box can sometimes make. When it came to cracking the structure of proteins, Pauling used a brilliant counterintuitive approach. When it came to DNA, the solution demanded a much more commonsense approach, and Pauling may still have been too bogged down in protein structure to shift his mind to this new kind of thinking. The last possible reason is also the most mundane: Pauling lacked the kind of high-quality x-ray diffraction data that Watson and Crick got (some would say pilfered) from the technically accomplished Rosalind Franklin. Watson recalls feeling his pulse race when he saw the x-ray photographs, convinced that he had clinched it. Sometimes good data is all that separates a brilliant blunder from brilliant glory.
And then there's Albert Einstein, whose brilliant blunder seems to indicate a lack of courage rather than a lack of scientific expertise; a lack of courage is another reason why scientists sometimes make important mistakes. In Einstein's case it was his injection of a fudge factor, the cosmological constant, to keep the universe static. Alexander Friedmann and Georges Lemaître, on the other hand, had the courage to explore the logical solutions of Einstein's field equations, many of which pointed to an expanding, non-static universe. Einstein, who had been a bold revolutionary when he came up with relativity, turned conservative when it came to stating the consequences of relativity for the entire universe. In one sense this could be seen as the beginning of Einstein's reactionary streak, marking the time when he started opposing quantum mechanics and the picture of reality it presented. The ultimate irony of the fudge factor, as is now well known, is that it was resurrected by the discovery of the accelerating expansion of the universe and the postulation of dark energy.
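For readers curious about exactly where the fudge factor enters, the standard textbook form of the Friedmann equations (this is standard cosmology, not something drawn from the review) shows the cosmological constant Λ as an extra term:

```latex
\left(\frac{\dot a}{a}\right)^2 = \frac{8\pi G}{3}\rho - \frac{kc^2}{a^2} + \frac{\Lambda c^2}{3},
\qquad
\frac{\ddot a}{a} = -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^2}\right) + \frac{\Lambda c^2}{3}.
```

For a static, pressureless universe ($\dot a = \ddot a = 0$, $p = 0$) the second equation forces the finely tuned choice $\Lambda = 4\pi G\rho / c^2$; drop Λ, and a matter-filled universe has no choice but to expand or contract, which is precisely the conclusion Friedmann and Lemaître were willing to draw and Einstein was not.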
As Dyson says, mistakes in science are essential, especially when you are exploring a new field on the cutting edge. No human mind is so all-knowing and perfect that it can cut through the fog of uncertainty and blunder to the solid heart of reality at a stroke. Especially at the beginning of a novel direction of research, scientists should be liberally allowed to make mistakes. At the end of his review Dyson describes a blunder he himself made: an incorrect prediction that charged weak bosons could not exist. He would probably agree that he has been so successful in science precisely because he was allowed to make mistakes. Part of making mistakes is simply being able to generate lots of ideas; as one of the blunderers in Livio's book, Linus Pauling, put it, in order to have good ideas one must first have lots of ideas and then throw the bad ones away.
One of the most troubling casualties of the current climate of reduced science funding and flagging interest in science is that young scientists are afraid to make mistakes, and therefore to generate lots of ideas. Funding agencies give them only a limited amount of money and ask them to work on "safe" problems, both constraints that reduce their appetite for risk-taking. Risk-taking has been one of the most important ingredients in the success of the United States as a leading scientific and technological power. Making mistakes is important not only in science but also in business; think of how many computer, aircraft or skyscraper designs were tried, tested and discarded before engineers and entrepreneurs came up with the correct ones. And it's a process that continues unabated. Once you ask scientists to stop making mistakes, you stop them from discovering. The stories of the scientists highlighted by Dyson and Livio, as well as countless other episodes from the history of science, make this fact clear. We ignore it at the risk of weakening the entire scientific enterprise.