
“What scientific idea is ready for retirement?”




Every year since 1998, Big Questions guru John Brockman has posed one big question on Edge.org and gotten about forty or fifty of the world's leading thinkers to come up with their own answers. This year the question is "What scientific idea is ready for retirement?" The answers showcase a range of thinking and topics, and they make it clear that every thinker interprets the word "retirement" differently. For some, retirement means exactly what it says: banishment to the shadows of polite discourse. For many others, retirement really means refinement, abolishing not an idea itself but a particular interpretation of it. Yet other thinkers are annoyed with the semantics rather than with the content of the ideas themselves.

Here are a few of my favorites.

David Deutsch argues that the whole notion of "quantum jumps" is a concept well past its utility. Deutsch describes how the behavior of particles like electrons is often interpreted in abrupt, discontinuous terms. The truth, though, is that when you observe these particles in phenomena like atomic energy transitions or quantum tunneling, what you are seeing are continuously changing probability distributions, not abrupt disappearances and appearances.




The truth is that the electron in such situations does not have a single energy, or position, but a range of energies and positions, and the allowed range itself can change with time. If the whole range of energies of a tunneling particle were below that required to surmount the barrier, it would indeed bounce off. And if an electron in an atom really were at a discrete energy level, and nothing intervened to change that, then it would never make a transition to any other energy.
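To make the "range of energies" point concrete, here is a minimal numerical sketch of my own (not Deutsch's), using the textbook transmission formula for a rectangular barrier in units where hbar = m = 1; the barrier height, width and the Gaussian energy spread are arbitrary choices for illustration:

```python
import numpy as np

# Transmission probability through a rectangular barrier of height V0 and width a,
# in units where hbar = m = 1 (standard textbook result; assumes E != V0 exactly).
def transmission(E, V0=1.0, a=3.0):
    E = np.asarray(E, dtype=float)
    T = np.empty_like(E)
    below = E < V0
    kappa = np.sqrt(2.0 * (V0 - E[below]))      # evanescent wave number inside the barrier
    T[below] = 1.0 / (1.0 + (V0**2 * np.sinh(kappa * a)**2) /
                      (4.0 * E[below] * (V0 - E[below])))
    k = np.sqrt(2.0 * (E[~below] - V0))         # oscillatory wave number above the barrier
    T[~below] = 1.0 / (1.0 + (V0**2 * np.sin(k * a)**2) /
                       (4.0 * E[~below] * (E[~below] - V0)))
    return T

# A particle prepared with a spread of energies centred below the barrier top:
# a Gaussian distribution standing in for the energy content of a wave packet.
energies = np.linspace(0.05, 2.0, 400)
weights = np.exp(-0.5 * ((energies - 0.8) / 0.15) ** 2)
weights /= weights.sum()

print("transmission at the mean energy alone :", transmission(np.array([0.8]))[0])
print("transmission averaged over the spread :", np.sum(weights * transmission(energies)))
```

The transmission computed over the particle's whole energy distribution differs from what a single sharp energy would give; it is the continuous spread, not a discrete "jump", that decides the outcome.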

Similarly, Freeman Dyson wants to do away with the whole notion of "wavefunction collapse". He makes the point that probabilities are not real physical things but a measure of our ignorance, so they cannot be treated like objects that disappear upon measurement. Asking what happens to a wavefunction after a measurement is therefore moot: once the measurement is made, the wavefunction simply becomes irrelevant.

Unfortunately, people writing about quantum mechanics often use the phrase "collapse of the wave-function" to describe what happens when an object is observed. This phrase gives a misleading idea that the wave-function itself is a physical object. A physical object can collapse when it bumps into an obstacle. But a wave-function cannot be a physical object. A wave-function is a description of a probability, and a probability is a statement of ignorance. Ignorance is not a physical object, and neither is a wave-function. When new knowledge displaces ignorance, the wave-function does not collapse; it merely becomes irrelevant.
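Dyson's point lends itself to a purely classical analogy; here is a little sketch of my own (not his), in which a probability distribution is nothing but bookkeeping of ignorance and an observation simply supersedes it. (The quantum case has extra subtleties like interference; this only illustrates the "ignorance, not object" reading.)

```python
# A classical stand-in for Dyson's reading: probabilities encode what we do not
# yet know, and looking replaces ignorance with knowledge; nothing physical collapses.
probabilities = {"box_A": 0.5, "box_B": 0.3, "box_C": 0.2}   # our ignorance about where the marble is

def observe(found_in):
    """After we look, the old distribution is not destroyed; it is just superseded."""
    return {box: (1.0 if box == found_in else 0.0) for box in probabilities}

print("Before looking:", probabilities)
print("After finding the marble in box_B:", observe("box_B"))
```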

Other thinkers urge us to break down black-and-white distinctions. For instance, here's Steven Pinker arguing against the classic "behavior = genes + environment" dichotomy; the truth is that each of the two influences the other:

Gene-environment interactions in this technical sense, confusingly, go into the "unique environmental" component, because they are not the same (on average) in siblings growing up in the same family. Just as confusingly, "interactions" in the common-sense sense, namely that a person with a given genotype is predictably affected by the environment, goes into the "heritability" component, because quantitative genetics measures only correlations. This confound is behind the finding that the heritability of intelligence increases, and the effects of shared environment decrease, over a person's lifetime. One explanation is that genes have effects late in life, but another is that people with a given genotype place themselves in environments that indulge their inborn tastes and talents. The "environment" increasingly depends on the genes, rather than being an exogenous cause of behavior.
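One way to see how gene-environment correlation gets statistically folded into "heritability" is a toy simulation; this is entirely my own construction with arbitrary coefficients, not Pinker's analysis or real twin-study methodology:

```python
import numpy as np

# Toy model: phenotype = genes + environment + noise, where the environment itself
# tracks the genotype to a varying degree (people seek surroundings that suit them).
# A purely correlational analysis then credits more and more variance to "genes".
rng = np.random.default_rng(0)
n = 100_000
genes = rng.normal(size=n)

for r_ge in (0.0, 0.5, 0.9):   # strength of the gene-environment correlation
    environment = r_ge * genes + np.sqrt(1.0 - r_ge**2) * rng.normal(size=n)
    phenotype = 0.5 * genes + 0.5 * environment + 0.3 * rng.normal(size=n)
    apparent_h2 = np.corrcoef(genes, phenotype)[0, 1] ** 2
    print(f"gene-environment correlation {r_ge:.1f} -> "
          f"variance statistically credited to genes: {apparent_h2:.2f}")
```

The stronger the tendency of a genotype to pick its own environment, the larger the share of variance that ends up credited to genes, even though the environment is doing real causal work.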

Martin Rees strikes down the soaring belief that "we will never hit barriers to understanding" in spite of our spectacular current understanding of life and the universe:

There's a widely-held presumption that our insight will deepen indefinitely—that all scientific problems will eventually yield to attack. But I think we may need to abandon this optimism. The human intellect may hit the buffers—even though in most fields of science, there's surely a long way to go before this happens...

We humans haven't changed much since our remote ancestors roamed the African savannah. Our brains evolved to cope with the human-scale environment. So it is surely remarkable that we can make sense of phenomena that confound everyday intuition: in particular, the minuscule atoms we're made of, and the vast cosmos that surrounds us.

Nonetheless—and here I'm sticking my neck out—maybe some aspects of reality are intrinsically beyond us, in that their comprehension would require some post-human intellect—just as Euclidean geometry is beyond non-human primates.

Fiery Cushman tells us to abandon the belief that "big effects have big explanations". He points out that sometimes many small explanations can add up to a big one, and sometimes one small explanation engendered by accident can lead to catastrophe. This is true not just of science but of human affairs; for instance, the reason conspiracy theories about the JFK assassination will always persist is that many people are simply unable to accept that a Big Event (JFK's death) had a Small Explanation (Oswald). Similarly, think of the causes of World War I.

Andrian Kreye argues against Moore's Law, and his explanation is similar to Martin Rees's (and flies in the face of "singularitarians" like Ray Kurzweil): the fact that a field has seen exponential progress so far does not by itself mean it will continue to see exponential progress. Dean Ornish points out the shortcomings of large, randomized, controlled clinical trials, arguing that many interesting individual effects are averaged out in such studies. Finally, Peter Woit and Paul Steinhardt argue for the true retirement of two ideas that are not supported by a shred of experimental evidence: string-theoretic unification of physics and the multiverse.
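Ornish's point about averaging is easy to see with a toy example, my own with made-up numbers: if half the participants in a trial benefit and half are harmed, the headline average can look like no effect at all.

```python
import numpy as np

# Toy trial: half the participants respond strongly and positively, half respond
# strongly and negatively; the trial-wide average suggests the treatment does nothing.
rng = np.random.default_rng(1)
n = 10_000
responders = rng.random(n) < 0.5                  # hypothetical subgroup that benefits
true_effect = np.where(responders, +2.0, -2.0)    # equal and opposite individual effects
measured = true_effect + rng.normal(scale=1.0, size=n)

print("average effect reported by the trial:", round(measured.mean(), 2))
print("effect among the responders         :", round(measured[responders].mean(), 2))
print("effect among the non-responders     :", round(measured[~responders].mean(), 2))
```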

Here's my choice for an idea that should be retired: the idea that science is a concept-driven revolution. As with many other thinkers on that list, I don't actually think the idea is wrong or unimportant; it's just overrated. New tools are appreciated far less than new ideas, especially outside the sciences. I propose that the idea of science as a tool-driven revolution be given equal time. This is especially true for chemistry and biology, which have been far more experimental than physics. Even in psychology, an idea-driven field if there ever was one, the advent of fMRI has engineered a revolution whose ramifications are still rippling across the landscape of psychological research. As Freeman Dyson says in his book "Imagined Worlds":

"New directions in science are launched by new tools much more often than by new concepts. The effect of a concept-driven revolution is to explain old things in new ways. The effect of a tool-driven revolution is to discover new things that have to be explained."

Unfortunately, the concept-driven picture of scientific revolutions comes from Thomas Kuhn's famous book "The Structure of Scientific Revolutions". Kuhn was a physicist, and his biases dictated his views; had he been a chemist, his survey of scientific history might have led him to very different conclusions. Fortunately, Peter Galison has pioneered the tool-driven revolution paradigm, and his book "Image and Logic" remains the standard torch-bearer for this line of thinking. If we want to see science for what it truly is, we need to recognize the importance of tools as well as ideas.

What scientific idea do readers think is ready for retirement?

Ashutosh Jogalekar is a chemist interested in the history, philosophy and sociology of science. He is fascinated by the logic of scientific discovery and by the interaction of science with public sentiments and policy. He blogs at The Curious Wavefunction and can be reached at curiouswavefunction@gmail.com.
