The Organic Automaton

This article was published in Scientific American's former blog network and reflects the views of the author, not necessarily those of Scientific American.


When will computers become living, sentient beings? In movies, the moment is commonly depicted as an abrupt, unforeseen epiphany. Ray Kurzweil has predicted (in our pages and elsewhere) that personal computers will be able to run real-time, full-up simulations of the human brain by the 2020s. But life and consciousness are matters of degree. Neuroscience case studies show how very basic ways of perceiving the self can be knocked out, slowly degrading consciousness. Research on the origins of life suggests there is a spectrum between life and not-life. By analogy, computers will start to come alive gradually, and it seems likely they have already started, almost unbeknownst to us. Where can we look for signs of the transition?

People have criticized the Kurzweilian vision on the grounds that Moore's law doesn't apply to software. Few things bring out one's natural eloquence more than the opportunity to complain about buggy, unstable computers. One of my favorite quotes, from a decade ago:

Software can easily rate among the most poorly constructed, unreliable and least maintainable technological artifacts ever invented by man -- with perhaps the exception of Icarus' wings.

People keep predicting that software is at a breaking point and that radical steps to fix it are inevitable, because surely we won't stand for it anymore. Offered little more than some belated mea culpas and incremental improvement, we have basically stood for it.

But I'd like to offer the puckish thought that software unreliability does not hinder the development of living machines; it advances it. To be alive, you have to be capable of dying. And computers have gotten better at dying: they don't just break down, they die in the same way organisms die of old age. Cellular machinery appears to be capable of lasting almost forever; the limiting factor is the DNA, be it shortened telomeres, accumulated DNA rings, or mutations in the wrong place. Similarly, solid-state computer hardware hardly ever dies anymore; the limiting factor is the software. Systems get so gunked up with corrupted files that it's cheaper and faster to buy a whole new one than to try to root out the problem. Charities won't even accept donations of computers with outdated operating systems, even if all they need them for is a simple application such as word processing.

The baroque complexity of software is another telling sign. Time was when a software engineer could fully grasp what was going on in the processor. As one user commented recently on Ed Foster's blog at InfoWorld:

...many projects are now beyond the scope of a single individual to comprehend - meaning that 'minor' changes in one area can't possibly be correlated to effects in some other area because no one person understands the relationships.

Is software really "designed" anymore? Ideally, a team sits down with specifications and writes code, but often the specs are incompletely thought out, and even when they are not, the subsequent development process resembles evolution more than creation. It is said that Microsoft has tried to rewrite Word from scratch and found it was too hard. The programs head into unpredictable environments and enter into a web of relationships, some of them malevolent (as in the case of viruses). To understand their behavior from first principles is close to impossible.

One school of thought for improving software reliability is to make computers more like computers -- to design and test them more rigorously, so that their correct operation is as preordained as the solution to an equation. Another school, however, is to make computers more like organisms -- to ensure that when they fail, they fail gracefully, stumbling along despite their imperfections. As I wrote in 2001, machines can be built from cell-like components that can be killed when they go awry, as some surely will. A 2003 article in our pages argued that the key is not to prevent crashes but to recover from them quickly and cleanly.
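To make the organic school a little more concrete, here is a minimal sketch in Python. Everything in it is my own invention for illustration (the deliberately flaky worker, the restart policy): a supervisor that treats a crashed process the way an organism treats a cell gone awry, discarding it and spawning a clean replacement rather than trying to repair it in place.

```python
# A hedged sketch, not a production pattern: a supervisor that expects
# its workers to die and recovers by replacement, not repair.
import multiprocessing
import random
import time

def worker(task_id: int) -> None:
    """A deliberately flaky worker; sometimes it simply dies."""
    if random.random() < 0.3:
        raise RuntimeError(f"worker {task_id} went awry")
    time.sleep(0.1)  # stand-in for useful work

def supervise(task_id: int, max_restarts: int = 5) -> None:
    """Run the worker in its own process; if it crashes, start a fresh one.

    Like killing a misbehaving cell, we never patch the broken process.
    We throw it away and spawn a clean replacement.
    """
    for attempt in range(1, max_restarts + 1):
        proc = multiprocessing.Process(target=worker, args=(task_id,))
        proc.start()
        proc.join()
        if proc.exitcode == 0:
            print(f"task {task_id} finished on attempt {attempt}")
            return
        print(f"task {task_id} crashed (exit {proc.exitcode}); restarting")
    print(f"task {task_id} exceeded {max_restarts} restarts; giving up")

if __name__ == "__main__":
    supervise(task_id=1)
```

The design choice mirrors that 2003 argument: the goal is not a worker that never crashes but a system in which recovery is so cheap that crashing becomes routine.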

IBM has been promoting autonomic computing, which goes a step further and seeks to make machines self-aware, so they can monitor and fix themselves. Some people advocate getting away from a mechanistic approach to software (namely, the algorithm) in favor of a more organic one, like Rodney Brooks's subsumption architecture for robotics, in which concurrent processes react to stimuli rather than follow a definite set of procedures. The machine's behavior is not deterministic, but emergent.
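As a toy illustration of the subsumption idea (again in Python, with layer names and sensor readings that are entirely my own invention, not Brooks's), consider a controller that is nothing but a priority-ordered stack of reflexes. No layer holds a plan of the whole; whatever the robot "does" emerges from which reflex happens to fire.

```python
# A hedged sketch of a subsumption-style controller. The layers, sensor
# fields, and commands are hypothetical; a real robot reads hardware.
from typing import Callable, Dict, List, Optional

# A behavior inspects the world and either returns a command,
# claiming control, or returns None, deferring to lower layers.
Behavior = Callable[[Dict[str, float]], Optional[str]]

def avoid_obstacles(sensors: Dict[str, float]) -> Optional[str]:
    """Highest priority: reflexively veer away from anything close."""
    if sensors.get("obstacle_distance", 99.0) < 0.5:
        return "turn-left"
    return None

def seek_light(sensors: Dict[str, float]) -> Optional[str]:
    """Middle layer: head toward bright light when it is detected."""
    if sensors.get("light_level", 0.0) > 0.7:
        return "move-toward-light"
    return None

def wander(sensors: Dict[str, float]) -> Optional[str]:
    """Lowest layer: the default when nothing above it fires."""
    return "wander"

# Priority order: the first layer to fire subsumes everything below it.
LAYERS: List[Behavior] = [avoid_obstacles, seek_light, wander]

def step(sensors: Dict[str, float]) -> str:
    """One control cycle: stimulus in, reaction out, no global plan."""
    for behavior in LAYERS:
        command = behavior(sensors)
        if command is not None:
            return command
    return "idle"  # unreachable, since wander() always fires

if __name__ == "__main__":
    print(step({"obstacle_distance": 0.3}))  # turn-left
    print(step({"light_level": 0.9}))        # move-toward-light
    print(step({}))                          # wander
```

Note that nothing here is an algorithm for "behaving"; the apparent strategy is an emergent property of three dumb reflexes stacked in the right order.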

Computer scientists invoke biological metaphors so frequently that maybe they aren't just metaphors. Dystopian futurists used to predict that people would become more like machines, but the reverse is happening. The next time you're waiting on the spinning color wheel or hourglass, spare some thought for the creature struggling within.