Several years ago, I was asked to speak at a convention of high school teachers and their students on the growing importance and character of interdisciplinary science. This horrified me. I have an acquired aversion to all terms where "x" is prefixed to "disciplinary," such as x = "multi-" or x = "inter-." I once tried to short-circuit these banalities with x = "trans-" and M.I.T. had performed a similar experiment with the iconoclastic x = "anti-."
The problem with all of these efforts at syntactic evasion is that each modifier ends up bolstering the very concept and value it seeks to undermine—the all-devouring gravitational attraction and tapering force of the disciplines.
I asked the students whom they would select as crew members for a trip to Mars. Without hesitation they answered: astronaut, engineer, doctor, physicist, botanist, geologist, an A.I., and even a politician. Children seem to understand immediately that problems define the necessary breadth of expertise, and although their answers were drawn from the professions, the crew was admirably diverse.
There is something about space-thinking that allows us to overcome the straitjackets that the academy has designed and that scholars seem to so enjoy wearing—as if the mental straitjacket were the latest word in the world of fashionable ideas.
Fifty years after the moonshot success of Apollo 11, it is time for culture to rescale its ambitions and set out on the ultimate journey—a journey across worlds. Sitting at home straying over the broken links of the Internet, marveling at the pace of life, complaining online about the exigencies of evaporating time—this is not a journey. It is a stupefaction—what the hero Odysseus encountered in the land of the Lotus Eaters.
In order to understand what it means to set out across the sea of space, we need to learn from those who have journeyed studiously across our own world and beyond, and en route furnished us with a greater understanding of the nature of reality.
People like Neil Armstrong, who saw the life of the mind and of the explorer as entangled, writing: "I am, and ever will be, a white-socks, pocket-protector, nerdy engineer, born under the second law of thermodynamics, steeped in steam tables, in love with free-body diagrams, transformed by Laplace and propelled by compressible flow."
There is a need for new ideas around multiplanetary space life, and perhaps surprisingly, these ideas could provide a practical vehicle for cultivating deep comparative observations—to think like an alien—something required to ensure a more intelligent future for all of life on this planet.
I want to take you on a historical voyage from an Odyssean perspective. Or better still, a voyage with Gulliver through the Lilliputian and Brobdingnagian territories of stupidity and intelligence, the twin subjects of my own work. Mythologically, they are the true Cain and Abel of human nature.
Intelligence and stupidity: two facts of life that we need to struggle with if we are to stand any chance of transporting the best of humanity into space rather than our most regrettable attributes and filling the universe with the ignoble gas of idiocy.
Everyone likes to say they study intelligence. There are numerous intelligence institutes and intelligence chairs at universities. But I ask you, which is more dangerous to the planet, intelligence or stupidity? The fact is any number of academics would be happy to be the Intelligent or Intelligence Professor, but I doubt any would be happy to be the Stupid Professor or Professor of Stupidity.
And why not? Other than of course the embarrassing ambiguity of the title. Part of the reason is the dirty secret of human culture and science—that there is no topic about which we have been more stupid than intelligence. And we need to know what went wrong.
Reason, according to Aristotle, was about humans' ability to rein in their passions, i.e., our ability to resist the urge of our instincts. Not math, not physics, not abstract expressionism, not navigating, but self-control.
In the 13th century Thomas Aquinas wrote "the cognitive faculty is not the act of an organ and is not united in any way to the bodily matter: it is the angelic intellect." In the 14th century William of Ockham extended this dualism into an argument for the simplicity of lucid thought, and these ideas were picked up by Descartes in the 17th century when he observed that thought is an existential proof of one's own existence in his famous dictum, "I think, therefore I am." And intelligence has been thought about largely as a static enterprise, an armchair action, ever since.
To Darwin in the 1850s, who had come to understand the diversity of life by voyaging rather than by speculating in the armchairs of royalty (as Descartes had), reason could be broken down into gradations: "Intelligence is based on how efficient a species became at doing the things they need to survive."
And at this point of progress something goes horribly wrong. Francis Galton—Darwin's cousin—narrows the definition of intelligence severely, going so far as to say that it is ultimately people's ability to gain reputation and success. And this was another way of saying what Galton had already suggested in 1869, when he coined the distinction between "Nature and Nurture."
Inspired by this reduction, Alfred Binet published in 1905 his methods for an "intelligence test," in order to determine how much work of nurturing needed to be done to overcome the limitations of nature. And the application of these ideas of IQ to schools came quickly; classrooms were altered and standardized to accommodate a false dichotomy.
We now know that it is in our collective intelligence that we excel—when we collaborate. Where humanity seems to differ most from the remainder of animate life—and perhaps exclusively—is in our capability for outsourcing reason to culture and accumulating capability over the amplified scale of many individuals over many generations. Moving Darwinism from the private genome into the public world. And the most elaborate outsourcing of reason is to the computer and AI—our future companions in our spatial voyaging.
As long ago as 1843, Ada Lovelace, daughter of Lord and Lady Byron, scientist born of poets, wrote down for the first time an algorithm for computing the Bernoulli numbers on Charles Babbage's analytical engine and set in motion our new world order. And this is where our human-machine hybrid mind needs new principles—to work alongside the emerging silicon collective of our creation. And I can think of no more demanding a test of collective intelligence than the challenges, hardships and inspirations of space travel. It was in space after all that humanity first came to work collaboratively with digital computers—the automatic guidance computer of Apollo 11.
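Lovelace's famous Note G spelled out such a computation step by step for Babbage's engine. A minimal modern sketch of the same idea (using today's sign convention and the standard recurrence, not her original numbering or notation) might look like this:

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    """Bernoulli numbers B_0..B_n via the standard recurrence:
    sum_{k=0}^{m} C(m+1, k) * B_k = 0 for m >= 1, with B_0 = 1.
    Exact rational arithmetic avoids floating-point drift."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        s = sum(comb(m + 1, k) * B[k] for k in range(m))
        B.append(-s / (m + 1))
    return B

# First few values: B_0 = 1, B_1 = -1/2, B_2 = 1/6, B_3 = 0, B_4 = -1/30
print(bernoulli(4))
```

Under this convention the odd-index numbers beyond B_1 vanish, which the recurrence reproduces automatically.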
Exoplanets, Signals and Space
Circling back to Odysseus—man of action, deft and tactful, the man of twists and turns, and ultimately traveler through the Space-Ocean—the final arena within which intelligence and stupidity will do battle. Space is the place where humanity will leave its most immense, enduring, and only traces of accomplishment. Because in about five billion years, the hydrogen in the center of our Sun will run out, its helium core will contract, and the Sun will slowly expand into a red giant, engulfing the inner planets and very likely the Earth itself. And not a trace of this planet or any of its poetry, music, architecture, ethics, science, and totality of aspiration will remain—all will become atomized into entropic oblivion.
In Rome's Campo de' Fiori in 1600 the great Odyssean Giordano Bruno was burned at the stake by the Inquisition—in part for his cosmological beliefs. Bruno's cosmology recognized "suns" and planets—"earths" that move around suns and receive light and heat from them. In other words, as he writes himself: "stars are other suns with their own planets" and other worlds "have no less virtue nor a nature different from that of our Earth" and like Earth, "contain animals and inhabitants."
The first scientific detection of one of Bruno's parallel earths, what we call exoplanets (extra-solar planets), was recorded in 1988 and confirmed as late as 2012—over 400 years after Bruno was devoured by the fires of stupidity. As of today there are 4,096 confirmed planets in 3,053 systems, with 664 systems having more than one planet. On average, every star in our galaxy is now thought to host at least one planet. Bruno's plight reminds me of Ursula K. Le Guin's plangent insight in her novel The Left Hand of Darkness: "One voice, speaking truth is a greater force than fleets and armies, given time; plenty of time."
And the facts of exoplanets make the so-called Fermi Paradox that much more challenging. This idea, named after the Italian physicist Enrico Fermi, who posed it in 1950, asks where all the aliens are: our star and Earth belong to a relatively young planetary system, so civilizations on older worlds should have had ample time to reach us. There are hundreds of explanations, almost as many as there are exoplanets—from the inevitable demise of all intelligent species experiencing moments of supreme stupidity, through to hiding in the dark matter of the expanding universe.
My own favorite was contributed by my colleagues at the Santa Fe Institute. They showed mathematically, using information theory, that any optimally encoded signal needs to appear, from the perspective of an observer, indistinguishable from noise. So if we live in a universe of intelligent species then we would expect not to detect them.
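A toy way to build intuition for this result (an illustrative sketch, not the Institute's actual derivation) is to compare the byte-level entropy of a redundant plain-text message with the same message after compression: the nearer an encoding gets to optimal, the more its output statistics resemble uniform noise.

```python
import zlib
from collections import Counter
from math import log2

def byte_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte; 8.0 is the ceiling, reached by uniform noise."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * log2(c / n) for c in counts.values())

# A deliberately redundant "transmission" (the numbers keep it from being pure repetition).
message = " ".join(f"log entry {i}: all systems nominal" for i in range(2000)).encode()

raw = byte_entropy(message)                        # redundant text: well below 8 bits/byte
packed = byte_entropy(zlib.compress(message, 9))   # compressed stream: much closer to noise

print(f"raw: {raw:.2f} bits/byte, compressed: {packed:.2f} bits/byte")
```

The compressed stream carries the same information, yet to an observer without the codebook its statistics drift toward those of random bytes—which is the heart of the argument.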
Fortunately for all of them, humanity has done its best to buck this trend and our own stupid communications will at least provide extra-terrestrial crusaders with ample evidence of our existence. And what will they do when they find us with their vastly superior technology? I always liked Polish novelist Stanislaw Lem's remark in his wonderful novel His Master's Voice: "Ants that encounter in their path a dead philosopher may make good use of him."
Let's do our best in the coming years of our epoch as space navigators to be more like living philosophers and give ET something to admire.