
The Singularity and the Neural Code

Bionic convergence and psychic uploading won’t be possible unless we crack the neural code, science’s hardest problem.

This article was published in Scientific American’s former blog network and reflects the views of the author, not necessarily those of Scientific American.


The following is an edited, updated version of an article originally written for IEEE Spectrum.

I’m 62, with all that entails. Grey hair, trick knee, trickier memory. I still play a mean game of hockey, but entropy looms ever larger. So part of me wants very much to believe that we are rapidly approaching “The Singularity.”

Like heaven, the Singularity comes in many versions, but most involve bionic brain boosting. At first, we'll become cyborgs, as brain chips soup up our perception, memory, and intelligence and eliminate the need for annoying TV remotes. Eventually, we will abandon our flesh-and-blood selves entirely and upload our digitized psyches into computers. We will then dwell happily forever in cyberspace, where, to paraphrase Woody Allen, we'll never need to look for a parking space.


Singularity enthusiasts, or Singularitarians, tend to be computer specialists, such as entrepreneur Ray Kurzweil. Citing the explosive progress of information technologies captured by Moore's Law, Kurzweil prophesies a “merger of biological and nonbiological intelligence” that will culminate in “immortal software-based humans.” The Singularity will supposedly happen not within a millennium, or even a century, but within decades.

Specialists in real rather than artificial brains find these scenarios laughably naïve, because we are still so far from understanding how brains make minds. “No one has the foggiest notion,” says Nobel laureate Eric Kandel. “At the moment all you can get are informed, intelligent opinions.” Neuroscientists lack an overarching, unifying theory to make sense of their sprawling and disjointed findings, such as Kandel's discovery of the chemical and genetic processes that underpin memory formation, in sea slugs.

The brain is often called, with good reason, the most complex phenomenon known to science. A typical adult brain contains about 100 billion nerve cells, or neurons. A single neuron can be linked via axons (output wires) and dendrites (input wires) across synapses (gaps between axons and dendrites) to as many as 100,000 other neurons. Crank the numbers and you find that a typical human brain has quadrillions of connections among its neurons.
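
For readers who like to check the arithmetic, here is a minimal back-of-the-envelope sketch in Python, assuming a round figure of 10,000 synapses per neuron (the true number can run an order of magnitude higher):

```python
# Rough connectivity estimate using the round numbers cited above.
neurons = 100e9              # ~100 billion neurons in a typical adult brain
synapses_per_neuron = 1e4    # assume ~10,000 connections per neuron (can reach ~100,000)

total_connections = neurons * synapses_per_neuron
print(f"Estimated connections: {total_connections:.0e}")  # ~1e+15, i.e. a quadrillion
```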

Adding to the complexity, synaptic connections constantly form, strengthen, weaken, dissolve. Old neurons die, and a growing body of evidence, overturning decades of dogma, indicates that new ones are born throughout our lives. Cells can also be retrained for different jobs, switching from facial expressions to finger flexing, or from seeing red to hearing squeaks.

Far from being stamped from a common mold, neurons display an astounding variety of forms and functions. Researchers have discovered scores of distinct types just in the visual system. Neurotransmitters, which carry signals across the synapse between two neurons, also come in many different varieties. Other chemicals, such as neural-growth factors and hormones, also ebb and flow through the brain, modulating cognition in manners subtle and profound.

The more you learn about brains, the more you may wonder how they work, and of course they often don't. They succumb to schizophrenia, bipolar disorder, depression, Alzheimer's disease, and many other disorders that resist explanation and treatment.

Singularitarians nonetheless insist that brains are just complicated computers, and there is some basis for this analogy. Neurons resemble transistors, absorbing, processing, and reemitting the electrochemical pulses known as action potentials. With an amplitude of one-tenth of a volt and a duration of one millisecond, action potentials are remarkably uniform, and they do not dissipate even when zipping down axons a meter long. Also called spikes, a reference to their appearance on oscilloscopes, action potentials serve, supposedly, as the brain's basic units of information.

Assume, as many Singularitarians do, that action potentials are equivalent to operations in a computer. If the brain contains one quadrillion synapses processing on average 10 action potentials per second, then the brain performs 10 quadrillion operations per second, or 10 petaflops. Some supercomputers have already surpassed that processing rate. Hence Singularitarians’ belief that computers will soon leave us in their cognitive dust, unless we embrace them through bionic convergence or uploading.
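
The arithmetic behind that figure is simple enough to spell out. The sketch below just multiplies the assumed synapse count by the assumed firing rate; both numbers are the Singularitarians' round estimates, not measurements:

```python
# The Singularitarian estimate, spelled out (assumed round numbers, not measurements).
synapses = 1e15          # one quadrillion synapses
spikes_per_second = 10   # assumed average of 10 action potentials per second

ops_per_second = synapses * spikes_per_second
print(f"{ops_per_second:.0e} operations per second")  # 1e+16, i.e. 10 "petaflops"
```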

Barring our admittance to cyber-paradise, however, is the neural code. That phrase refers to the software, or algorithms, that transform action potentials and other physiological processes into perceptions, memories, meanings, intentions.

The neural code is science’s deepest, most consequential problem. If researchers crack the code, they might solve such ancient philosophical conundrums as the mind-body problem and the riddle of free will. A solution to the neural code could also, in principle, give us unlimited power over our brains and hence minds. Science fiction—including mind-control, mind-reading, bionic enhancement and even psychic uploading—could become reality.

But the most profound problem in science is also by far the hardest. Neuroscientists still have no idea what the neural code is. That is not to say they don’t have any candidates. Far from it. Like voters in a U.S. presidential primary, researchers have a surfeit of candidates, each seriously flawed.

The first code was nominated in the 1930s by British neurobiologist Edgar Adrian. After isolating sensory neurons from frogs and eels, Adrian showed that as the intensity of a stimulus increases, so does a neuron’s firing rate, which can peak as high as 200 spikes per second. In the next few decades, experiments seemed to confirm that the nervous systems of all animals employ this method of conveying information, called a rate code.
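
A rate code is easy to caricature in a few lines of code. The sketch below is purely illustrative, not a model of any real neuron; it simply maps a stimulus intensity onto a firing rate that tops out near the 200-spikes-per-second ceiling Adrian observed:

```python
# Toy rate code: stimulus intensity -> firing rate (spikes per second).
# Purely illustrative; real neurons respond nonlinearly and adapt over time.
MAX_RATE = 200.0  # approximate ceiling Adrian observed, in spikes per second

def rate_code(intensity: float) -> float:
    """Map a stimulus intensity in [0, 1] to a firing rate capped at MAX_RATE."""
    intensity = max(0.0, min(1.0, intensity))
    return MAX_RATE * intensity

for stimulus in (0.1, 0.5, 1.0):
    print(f"intensity {stimulus:.1f} -> {rate_code(stimulus):.0f} spikes/s")
```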

But a rate code is a crude, inefficient way to convey information; imagine trying to communicate solely by humming at different pitches. Neuroscientists have long suspected that the brain employs subtler codes. One possibility is temporal coding, in which information is represented not just in a cell's rate of firing but also in the precise timing between spikes.

For example, a rate code would treat the spike sequences 010101 and 100011 as identical because they contain the same number of spikes. A temporal code would assign the two strings different meanings because their spikes arrive at different times. Temporal coding could boost the brain's information-processing capacity close to the Shannon limit, the theoretical maximum that information theory allows for a given physical system.
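
To make the contrast concrete, the toy sketch below reads the two six-bit spike trains above first as a rate code and then as a temporal code. Everything here is illustrative; real neural decoding is vastly messier:

```python
# Toy comparison of rate coding vs. temporal coding on the two spike trains above.
train_a = "010101"
train_b = "100011"

def rate_read(train: str) -> int:
    """Rate code: only the number of spikes in the window matters."""
    return train.count("1")

def temporal_read(train: str) -> tuple:
    """Temporal code: the positions (timing) of the spikes matter too."""
    return tuple(i for i, bit in enumerate(train) if bit == "1")

print(rate_read(train_a) == rate_read(train_b))          # True  -> indistinguishable
print(temporal_read(train_a) == temporal_read(train_b))  # False -> distinguishable
```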

Some neuroscientists suspect that temporal codes predominate in the prefrontal cortex and other brain structures associated with “higher” cognitive functions, such as decision-making. In these regions, neurons tend to fire on average only one or two times per second.

On a more macro level, researchers are seeking “population codes” involving the correlated firing of many neurons. The late Gerald Edelman advocated a scheme called neural Darwinism, in which our recognition of, say, an animal emerges from competition between large populations of neurons representing different memories: Dog? Cat? Weasel? Rat? The brain quickly settles on the population that most closely matches the incoming stimulus. Perhaps because Edelman cloaked it in impenetrable jargon, neural Darwinism never caught on.
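
As a cartoon of the idea, and nothing more, the sketch below matches a made-up stimulus pattern against made-up stored patterns and lets the population with the closest match win; it is not Edelman's actual model:

```python
# Cartoon of a population code settling on the best match (not Edelman's actual model).
# The stored patterns and the stimulus below are invented for illustration.
memories = {
    "dog":    [1, 0, 1, 1, 0],
    "cat":    [1, 1, 0, 1, 0],
    "weasel": [0, 1, 1, 0, 1],
    "rat":    [0, 1, 0, 1, 1],
}

stimulus = [1, 0, 1, 1, 1]  # incoming pattern of population activity

def overlap(pattern, stim):
    """Count how many active units the stored pattern shares with the stimulus."""
    return sum(p * s for p, s in zip(pattern, stim))

# The population whose pattern best matches the incoming stimulus "wins."
winner = max(memories, key=lambda name: overlap(memories[name], stimulus))
print(winner)  # -> dog
```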

A population code called synchronous oscillations involves many neurons firing at the same rate and in phase with one another. In 1990, Francis Crick and Christof Koch proposed that synchronized 40-hertz oscillations play a key role in consciousness. Crick was of course renowned for unraveling the structure of DNA and showing that it mediates an astonishingly simple genetic code governing the heredity of all organisms.

Koch doubts, however, that the neural code “will be anything as simple and as universal as the genetic code.” Neural codes seem to vary in different species, he notes, and even in different sensory modes within the same species. “The code for hearing is not the same as that for smelling,” he explains, “in part because the phonemes that make up words change within a tiny fraction of a second, while smells wax and wane much more slowly.”

“There may be no universal principle” governing neural-information processing, Koch says, “above and beyond the insight that brains are amazingly adaptive and can extract every bit of information possible, inventing new codes as necessary.” So little is known about how the brain processes information that “it’s difficult to rule out any coding scheme at this time.”

Indeed, Koch helped revive a coding scheme long ago discarded as implausible. This scheme has been disparaged as the “grandmother cell” hypothesis, because in its reductio ad absurdum version it implies that our memory banks dedicate a single neuron to each person, place, or thing that inhabits our thoughts, such as Grandma.

Together with eminent neurosurgeon Itzhak Fried, Koch has identified neurons that respond to images of specific people, from Bill Clinton to Sylvester Stallone. The neurons were discovered in epileptics in whom Fried had implanted electrodes for clinical purposes.

The findings suggest that a single neuron—far from being a simple switch—may possess enormous computational power. Meaningful messages might be conveyed not just by hordes of neurons screaming in unison but by small groups of cells whispering, perhaps in a terse temporal code.

British neurobiologist Steven Rose suspects that the brain processes information at scales both above and below the level of individual neurons and synapses, via genetic, hormonal, and other processes. He therefore challenges a key assumption of Singularitarians, that spikes represent the sum total of the brain's computational output. The brain’s information-processing power may be many orders of magnitude greater than action potentials alone suggest.

Moreover, decoding neural signals from individual brains will always be extraordinarily difficult, Rose argues, because each individual’s brain is unique and ever-changing. To dramatize this point, Rose poses a thought experiment involving a “cerebroscope,” which can record everything that happens in a brain, at micro and macro levels, in real time.

Let's say the cerebroscope records all of Rose's neural activity as he watches a red bus coming down a street. Could the cerebroscope reconstruct what Rose is feeling? No, because his neural response to even that simple stimulus grows out of his brain's entire previous history, including a childhood incident when a bus almost ran him over.

To interpret the neural activity corresponding to any moment, Rose elaborates, scientists would need “access to my entire neural and hormonal life history” as well as to all his corresponding experiences. Scientists would also need detailed knowledge of the changing social context within which Rose has lived; his attitude toward buses would be different if terrorists had recently attacked a bus.

This analysis implies that each individual psyche is fundamentally irreducible, unpredictable, inexplicable. It is certainly not simple enough to be extracted from a brain and transferred to another medium, as Singularitarians assume.

Let's face it. The Singularity is a religious rather than a scientific vision. The science-fiction writer Ken MacLeod has dubbed it “the rapture for nerds,” an allusion to the end-time prophesied in the Bible, when Jesus whisks the faithful to heaven and leaves us sinners behind.

Such yearning for transcendence, whether spiritual or technological, is all too understandable. Both as individuals and as a species, we face deadly serious problems, including terrorism, nuclear proliferation, overpopulation, poverty, famine, environmental degradation, climate change, resource depletion, and epidemics such as AIDS. Engineers and scientists should be helping us face the world's problems and find solutions to them, rather than indulging in escapist, pseudoscientific fantasies like the Singularity.

Further Reading:

Do Big New Brain Projects Make Sense When We Don't Even Know the "Neural Code"?

Why You Should Care about Pentagon Funding of Obama's BRAIN Initiative.

AI Visionary Eliezer Yudkowsky on the Singularity, Bayesian Brains and Closet Goblins.

Bayes's Theorem: What's the Big Deal?

Are Brains Bayesian?

Can the Singularity Solve the Valentine's Day Dilemma?

Two More Reasons Why Big Brain Projects Are Premature.

Artificial brains are imminent… not!

What’s the Biggest Science News? We’re Still Human, for Ill or Good.

Christof Koch on Free Will, the Singularity and the Quest to Crack Consciousness.

My Testy Encounter with the Late, Great Gerald Edelman.

The Many Minds of Marvin Minsky (R.I.P.)