
Homo (Sans) Sapiens: Is Dumb and Dumber Our Evolutionary Destiny?

This article was published in Scientific American’s former blog network and reflects the views of the author, not necessarily those of Scientific American.


James R. Flynn's observation that IQ scores rose dramatically from generation to generation throughout the 20th century has been cited so often, even in popular media, that it is becoming a cocktail party talking point. Next stop: a New Yorker cartoon. (An article about Flynn and the Flynn effect has already been published in The New Yorker.)

A recent report in Trends in Genetics (part 1 and part 2) takes a bleaker view of our cognitive future—one that foresees the trend line proceeding inexorably downward. Gerald Crabtree, a biologist at Stanford University, has put forward a provocative hypothesis that our cushy modern existence—absent the ceaseless pressures of natural selection experienced during the Paleolithic—makes us susceptible to the slow creep of random genetic mutations in the 2,000 to 5,000 genes needed to ensure that our intellectual and emotional makeup remains intact. The implication of this argument is that we, as a species of the genus Homo, are slowly losing our sapiens over many generations.
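To get a feel for the arithmetic behind this claim, consider a back-of-envelope simulation. The sketch below is mine, not Crabtree's, and its numbers are labeled assumptions: a set of 3,000 genes (the midpoint of his range) and an assumed per-gene rate of harmful new mutations, chosen only to be plausible given the few dozen de novo mutations each of us carries. What it illustrates is how quickly hits accumulate across a target that large once selection stops removing them.

```python
# A minimal sketch of the mutational-target argument, with assumed numbers.
# N_GENES and PER_GENE_RATE are illustrative choices, not values from the paper.
import random

N_GENES = 3000          # assumed count of cognition-related genes (midpoint of 2,000 to 5,000)
PER_GENE_RATE = 7e-6    # assumed harmful mutations per gene per generation
RATE_PER_GEN = N_GENES * PER_GENE_RATE  # ~0.02 harmful hits per genome per generation
GENERATIONS = 120       # roughly 3,000 years at 25 years per generation
TRIALS = 20_000         # number of simulated lineages

def hits_after(generations: int) -> int:
    """Harmful mutations accumulated along one lineage, assuming
    no purifying selection ever removes them."""
    hits = 0
    for _ in range(generations):
        if random.random() < RATE_PER_GEN:
            hits += 1
    return hits

counts = [hits_after(GENERATIONS) for _ in range(TRIALS)]
print(f"average hits per lineage: {sum(counts) / TRIALS:.2f}")
print(f"share of lineages with two or more hits: {sum(c >= 2 for c in counts) / TRIALS:.0%}")
```

With these assumed inputs the average lineage picks up about 2.5 hits in 120 generations, and roughly 70 percent of lineages carry at least two; change the assumed rate and the timescale stretches or shrinks accordingly, which is why the size of the gene target matters so much to the argument.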

The press justifiably had a field day with this one:


Why did Petraeus do it? Maybe humans are evolving to be dumber.

We're getting, like, dumber

Homo isn't the sapiens it used to be

The really clever part of Crabtree's argument rests on the contention that a Stone Age Fred Flintstone may have been more of a dynamo in some ways than a 20th-century Albert Einstein—our prehistoric forebears performed the evolutionary heavy lifting that led to the swollen heads that we still avail ourselves of, at least until the inevitable decline predicted by Crabtree sets in.

Expansion of the human frontal cortex and endocranial volume, to which we likely owe our capacity for abstract thought, occurred predominantly between 50,000 and 500,000 years ago in our prehistoric African ancestors, well before written language and before we had the modern voice box to produce sophisticated verbal language, but after the first tools. Thus, the selective pressures that gave us our mental characteristics operated among nonverbal hunter-gatherers living in dispersed bands or villages, nothing like our present-day high-density, supportive societies.

In line with Crabtree's take, the transition to survival through wiles—in place of speed and physical strength—required adaptations that appear to rival or outpace the loftiest contemporary intellectual achievements, such as writing a symphony or cogitating on higher math. One small error in gauging the aerodynamics and gyroscopic stabilization of a spear and one of our would-be ancestors became a canapé for a saber-toothed tiger.

Many kinds of modern refined intellectual activity (by which our children are judged) may not necessarily require more innovation, synthesis, or creativity than more ancient forms: inventing the bow-and-arrow, which seems to have occurred only once about 40,000 years ago, was probably as complex an intellectual task as inventing language. Selection could easily have operated on common (but computationally complex) tasks such as building a shelter, and then computationally simple tasks, such as playing chess, became possible as a collateral effect.

Flintstone vs. Einstein smarts are contrasted most starkly by considering the case of artificial intelligence. AI has achieved major strides in emulating certain aspects of intellect: playing chess or Jeopardy and finding patterns in large collections of data, a field dubbed "deep learning." But the remaining and still immense challenges AI confronts lie elsewhere, as Crabtree points out:

AI promised household robots that would wash dishes, mow the lawn, and bring us freshly cooked croissants and coffee in the morning. Needless to say we do not have these robots now and none of the readers of this piece will probably ever see them, despite the immense financial impetus to build them.

The things at which AI excels—playing chess or Jeopardy, or keeping an airplane on course—are, in fact, a cognitive piece of cake compared to washing dishes and putting them away in the right place. I remember roboticist Rodney Brooks demonstrating this during a talk at MIT in which he simply put his hand in his pocket and pulled out some change, an extraordinarily tough task for the current generation of R2-D2s.

Without the rigors of strong selection in our extended urban conglomerates—no more necessity of getting it right the first time on that spear throw—the slow but relentless decline of those 2,000 to 5,000 cognition-related genes has already begun, or so the argument goes. Crabtree begins the first part of his essay by asserting that the average citizen from Athens circa 1000 B.C.—or anyone from Africa, India, Asia or the Americas millennia back—would be among the "brightest and most intellectually alive of our colleagues and companions, with a good memory, a broad range of ideas, and a clear-sighted view of important issues"—personal qualities supplemented by an astonishing emotional aplomb. This hyper-fit type would have prevailed even before the rise of civilization:

A hunter–gatherer who did not correctly conceive a solution to providing food or shelter probably died, along with his/her progeny, whereas a modern Wall Street executive that made a similar conceptual mistake would receive a substantial bonus and be a more attractive mate. Clearly, extreme selection is a thing of the past.

Maybe this explains our fascination with post-apocalyptic Mad Max-style fantasies? But where's the proof for Crabtree's musings, and what about contradictory evidence? Crabtree proposes a test of his hypothesis, and he also dismisses the Flynn effect, which suggests that we have been getting progressively smarter generation after generation. Better IQ scores, Crabtree posits, are not a result of natural selection but rather may have resulted from the removal of lead and other heavy metals from gasoline and paint, the elimination of hypothyroidism by putting iodine in salt, and better test-taking skills. Notwithstanding the Flynn effect, our slow genetic decline continues apace.

And what does the man behind the Flynn effect think about the Crabtree effect?

Crabtree suggests that our genetic IQ is in decline and proposes a direct genetic test of his hypothesis. We should await the results without sharing his pessimism. As he says, the environment that pressures us to perform intellectually is competition with other people, which could be argued to be at its maximum today. He "fears" it is not enough, and that is not a solid foundation for his speculations. A much more direct test of trends is reproductive patterns. Only recently have the better educated been out-reproduced by the less educated. One can imagine events that would reverse this, so it is premature to panic.

Meanwhile, noted British anthropologist and evolutionary psychologist Robin Dunbar questions the premises of any postulated slow slide toward imbecility:

Crabtree's argument is built on the assumption that the selection pressure for big brains (aka IQ) was solving instrumental problems (how to survive in the world by building better weapons, better tools, etc). In fact, the selection for larger brains across all mammals and birds (and specifically primates) is the complexities of the social world, and this remains at least as complex as it ever was—in fact, the social world may have even become more complex than it ever was due to a combination of higher population density and urbanization. The ability to build clever tools or novel hunting techniques appears to be a by-product of the [neural] software needed to handle a complex world (they both use the same logic and cognitive processes). So we haven't in fact lost the selection process that kept the pressure on.

The question, as often happens in evolutionary biology, is how to distinguish assertions like Crabtree's from a Rudyard Kipling "just so" story. Crabtree has thought of a test that would sequence whole genomes of carefully selected individuals—ones whose genes could be traced back through an involved analysis to ancestors who lived at different times during the past 5,000 years, when the transition from a hunter-gatherer lifestyle to agriculture was taking place. The test would look for an increase in mutations in those individuals whose genes link to ancestors who lived more recently along the 5,000-year continuum, which would confirm the gradual decline in intellectual ability.
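It is easy to see the shape of the analysis such a test implies, even before anyone collects the data. The sketch below runs on invented numbers: each pair is a hypothetical individual, giving the estimated age of the ancestor a cognition-gene segment traces to and the count of new mutations found in it. A positive least-squares slope of mutation count against recency of the ancestor is the pattern that, on this reading of Crabtree, would support the predicted decline.

```python
# A sketch of the comparison Crabtree's proposed test implies.
# The data points are invented for illustration; no such study exists yet.
from statistics import mean

# (ancestor_age_in_years_before_present, new_mutations_in_cognition_genes)
samples = [
    (4800, 1), (4200, 2), (3500, 2), (2900, 3),
    (2100, 3), (1400, 4), (900, 5), (300, 6),
]

# Recast ancestor age as recency along the 5,000-year continuum.
recency = [5000 - age for age, _ in samples]
hits = [count for _, count in samples]

# Least-squares slope of mutation count against recency. A positive slope
# (more mutations in segments tied to more recent ancestors) is what the
# hypothesis predicts.
x_bar, y_bar = mean(recency), mean(hits)
slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(recency, hits)) \
        / sum((x - x_bar) ** 2 for x in recency)

print(f"extra mutations per 1,000 years of recency: {slope * 1000:.2f}")
```

A real analysis would, of course, need ancestral dating far subtler than a single number per person, and a sample assembled with far more care than this toy one.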

Finding these people might require more than posting notices on Twitter and Facebook, and Crabtree's proposal is probably not going to get pegged high on the NIH's funding priorities for the 2014 federal fiscal year. But the Stanford professor still does not despair. He ends this entropic projection of our evolutionary future on an optimistic and self-deprecatory note. Science, he says, may yet find a way to counteract this trend. "One does not need to imagine a day when we could no longer comprehend the problem, or counteract the slow decay in the genes underlying our intellectual fitness, or have visions of the world population docilely watching reruns on televisions they can no longer build."

We may still have a few hundred years before we lose the modifier denoting "intelligent" in Homo sapiens, and science may yet find a way to save us "by socially and morally acceptable means. In the meantime [Crabtree volunteers], I'm going to have another beer and watch my favorite rerun of Miami CSI (if I can figure out how to work the remote control)."

 

Image Source: Nevit Dilman

Gary Stix, Scientific American's neuroscience and psychology editor, commissions, edits and reports on emerging advances and technologies that have propelled brain science to the forefront of the biological sciences. Developments chronicled in dozens of cover stories, feature articles and news stories document groundbreaking neuroimaging techniques that reveal what happens in the brain while you are immersed in thought; the arrival of brain implants that alleviate mood disorders like depression; lab-made brains; psychological resilience; meditation; the intricacies of sleep; the new era for psychedelic drugs; artificial intelligence; and growing insights leading to an understanding of our conscious selves. Before taking over the neuroscience beat, Stix, as Scientific American's special projects editor, oversaw the magazine's annual single-topic special issues, conceiving of and producing issues on Einstein, Darwin, climate change, nanotechnology and the nature of time. The issue he edited on time won a National Magazine Award. Besides mind and brain coverage, Stix has edited or written cover stories on Wall Street quants, building the world's tallest building, Olympic training methods, molecular electronics, what makes us human and the things you should and should not eat. Stix started a monthly column, Working Knowledge, that gave the reader a peek at the design and function of common technologies, from polygraph machines to Velcro. It eventually became the magazine's Graphic Science column. He also initiated a column on patents and intellectual property and another on the genesis of the ingenious ideas underlying new technologies in fields like electronics and biotechnology. Stix is the author, with his wife, Miriam Lacob, of a technology primer called Who Gives a Gigabyte: A Survival Guide to the Technologically Perplexed (John Wiley & Sons, 1999).
