Can the Turing Test Help Us Know Whether a Machine Is Really Thinking?

Chillin' with my children in London recently, I kept a lookout for blog topics, and I found one: "Codebreaker: Alan Turing's Life and Legacy," an exhibit at the city's Science Museum. The British mathematician Turing, born exactly a century ago, laid down the theoretical foundations of computer science and helped design one of the first computers, the Automatic Computing Engine, or ACE. During World War II, he helped crack the German Enigma code, a vitally important achievement for the Allied war effort. British authorities rewarded Turing by arresting him for homosexuality in 1952 and forcing him to undergo "chemical castration," which involved injections of estrogen. In 1954 Turing killed himself by ingesting cyanide.

I want to focus not on Turing's tragic demise but on one of his enduring contributions to philosophy. In his era, scientists and philosophers, as well as sci-fi writers, were already pondering whether computers were just calculating devices, like complicated abacuses, or could "think" more or less as we humans do. In a 1950 article, "Computing Machinery and Intelligence," Turing proposed a simple empirical method for resolving the question, which he called "the imitation game" but which is now known as "the Turing test." In one room is a human "interrogator," and in other rooms are two "competitors," one a human and the other a computer. The interrogator types out questions that are transmitted to the competitors. (Today, of course, voice-recognition technology has become good enough for questions to be submitted orally.) If the interrogator can't tell which answers come from the human and which from the computer, then the computer must be thinking. Proponents of "strong AI" argue that such a computer isn't just mindlessly, mechanically cranking out answers; it possesses subjective awareness, just as we do.
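To make the protocol concrete, here is a minimal Python sketch of the test's structure. It is only an illustration, not anything Turing specified; the interrogate, ask_human and ask_machine functions are hypothetical stand-ins that a user of the sketch would supply.

```python
import random

def run_imitation_game(interrogate, ask_human, ask_machine, rounds=5):
    """Minimal sketch of the imitation game's structure.

    The interrogator sees only labeled answers; which label hides
    the machine is randomized, and after the rounds the interrogator
    must guess which competitor is which.
    """
    competitors = {"A": ask_human, "B": ask_machine}
    if random.random() < 0.5:  # hide which channel is the machine
        competitors = {"A": ask_machine, "B": ask_human}

    transcript = []
    for _ in range(rounds):
        question = interrogate()
        answers = {label: answer(question) for label, answer in competitors.items()}
        transcript.append((question, answers))
    # If guesses based on this transcript are right no more often
    # than chance, the machine passes.
    return transcript
```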

In 1980 the philosopher John Searle presented a famous challenge to the Turing test: the Chinese room thought experiment. Searle compared a computer undergoing a Turing test to an English-speaking man in a room who doesn't understand Chinese but has a manual for converting Chinese questions or commands into appropriate Chinese responses. The man receives a string of Chinese characters that, unbeknownst to him, means, let's say, "What is your favorite color?" His manual tells him that when he receives these symbols, he should respond with another string of symbols that, again unbeknownst to him, means "blue." In the same way, Searle contended, computers mindlessly manipulate symbols without understanding their meaning; they are not really thinking as we humans do.
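In computational terms, Searle's manual is just a lookup table. A toy Python sketch makes the point; the two entries here are invented for illustration, and a real manual would need vastly more of them:

```python
# Toy version of Searle's manual: a lookup table from incoming
# symbol strings to outgoing symbol strings. The "operator" matches
# shapes without understanding either side.
MANUAL = {
    "你最喜欢什么颜色？": "蓝色。",      # "What is your favorite color?" -> "Blue."
    "你今天好吗？": "我很好，谢谢。",    # "How are you today?" -> "I'm fine, thanks."
}

def chinese_room(symbols: str) -> str:
    # No meaning is consulted; the operator only matches symbols.
    return MANUAL.get(symbols, "对不起，我不明白。")  # "Sorry, I don't understand."
```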


To my mind, Searle has not rebutted the strong AI position with his thought experiment. Instead, he has merely pointed out, implicitly, how difficult it would be for a computer to pass the Turing test. A manual listing all possible questions that can be stated in Chinese, together with plausible-sounding responses, would be almost infinitely long. How could the man possibly respond to incoming questions fast enough to convince those outside the room that he actually understands Chinese? If he pulls off this feat, perhaps by tossing in a joke ("I'm a Chinese communist, so my favorite color is red!"), you might reasonably conclude that he does in fact understand Chinese, even if he insists he doesn't. You might reasonably conclude the same thing about a computer, if it can answer all your questions as rapidly and quirkily as an intelligent human. (The speed issue can cut both ways. As Turing pointed out, one quick way to distinguish an ordinary human from a computer would be to ask the competitors to add 34,957 to 70,764; the computer, unless programmed to dissemble, would answer instantly and flawlessly.)
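Turing anticipated this dodge. In his paper's sample dialogue, the machine pauses about 30 seconds before answering the addition question and then gives a wrong answer, 105621 instead of 105721. Here is a sketch of that strategy; the delay range and error rate are invented for illustration.

```python
import random
import time

def human_like_sum(a: int, b: int) -> int:
    """Imitate a human doing mental arithmetic, as in Turing's sample
    transcript: pause for a while, and occasionally slip.
    """
    time.sleep(random.uniform(10, 30))  # a human needs time to add
    answer = a + b
    if random.random() < 0.2:           # ...and sometimes errs
        answer -= 100                   # e.g., 105621 instead of 105721
    return answer
```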

Here is the more basic flaw of Searle's argument: it assumes that in some cases we just know whether another creature, like the man in the Chinese room, is really capable of the subjective state we call "understanding." But we never know for sure, because of the solipsism problem: no sentient entity has direct, first-hand access to the subjective state of any other sentient entity, and hence no certain knowledge of it. As I wrote recently in a column on cats, each of us is sealed in the chamber of his own subjective consciousness. I can't be sure that you, reader, or any other human, let alone a bat or cat or iPhone or toaster oven, is truly conscious. All I can do is make reasonable assumptions based on the behavior of such entities. That is the whole point of the Turing test. To the extent that their behavior resembles mine, I grant that they're probably conscious, because I know I'm conscious.

In his 1950 essay Turing acknowledged that, strictly speaking, the only way to be sure a machine thinks "is to be the machine and to feel oneself thinking. One could then describe these feelings to the world, but of course no one would be justified in taking any notice. Likewise according to this view the only way to know that a man thinks is to be that particular man. It is in fact the solipsist point of view. It may be the most logical view to hold but it makes communication of ideas difficult."

Although I reject Searle's objection to the Turing test, I have an objection, or reservation, of my own, which comes from my observation that we humans are awfully prone to anthropomorphism, the projection of human characteristics onto non-human and even inanimate things. This tendency stems from what psychologists call our theory-of-mind capacity, our innate ability, which manifests itself in most of us by the age of three or so, to intuit the states of mind of others. Theory of mind is vital for our social development; people with autism are believed to lack the capacity. But many of us have the opposite problem. Our theory-of-mind capacities are so strong that we impute human intelligence, intentions and emotions even to non-human things, like cats, cars and computers.

This phenomenon provides the subtext of the iPhone ads showing the actor John Malkovich flirting with Siri, the iPhone's voice-activated assistant. Malkovich clearly likes, I mean really likes, Siri! He laughs at her joke! Tells her she's funny! But she's not real! She's just a piece of software! Ha ha! (See this spoof of the Malkovich ad, in which Siri keeps telling more and more outrageous jokes to make the stone-faced Malkovich laugh.)

The Siri ad may seem silly, but our tendency to anthropomorphize machines is quite real. In her classic 1979 book about AI, Machines Who Think, Pamela McCorduck described a scene that took place in an AI lab at Stanford in the 1970s, when a visiting Russian scientist had a typed conversation with ELIZA, a program written by the MIT computer scientist Joseph Weizenbaum to mimic a psychotherapist. ELIZA's responses involved simply turning statements of the human patient back into leading questions. For example, if you said, "I'm feeling a little anxious lately," ELIZA would ask, "Why are you feeling a little anxious lately?"
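That trick is simple enough to sketch in a few lines of Python. This shows only the reflection pattern in the example above; the real program used a much richer set of pattern-matching scripts.

```python
import re

def eliza_respond(statement: str) -> str:
    """Turn a patient's "I'm X" statement back into a leading
    "Why are you X?" question, in the spirit of ELIZA's trick.
    """
    match = re.match(r"i'?m (.*)", statement.strip().rstrip("."), re.IGNORECASE)
    if match:
        return f"Why are you {match.group(1)}?"
    return "Please tell me more."

print(eliza_respond("I'm feeling a little anxious lately."))
# -> Why are you feeling a little anxious lately?
```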

The exchange at Stanford began with ELIZA asking the Russian, "What brought you here to see me today?" The Russian replied, "Oh, nothing much, I'm feeling a little tired, that's all." Before long, as McCorduck and several other scientists watched, the Russian began pouring out his heart to ELIZA, confessing his concerns for his wife and children. "We watched in painful embarrassment," McCorduck wrote, "trying hard not to look, yet mesmerized all the same." The Turing test, in other words, says more about our minds than it does about the mind—or lack thereof—of a computer. This is not to say that a computer can't think. It is only to say that, no matter how far machines progress, we may never know what, if anything, it is like to be a machine.

Postscript: I highly recommend Turing's 1950 essay. Check out in particular the section in which Turing discussed how extrasensory perception might complicate the Turing test. The evidence for ESP, Turing asserted, is "overwhelming." "If telepathy is admitted," he wrote, "it will be necessary to tighten our test up. The situation could be regarded as analogous to that which would occur if the interrogator were talking to himself and one of the competitors was listening with his ear to the wall. To put the competitors into a 'telepathy-proof room' would satisfy all requirements." I wish telepathy were real, because it would represent a breach in our solipsistic isolation from each other. But I'm a psi skeptic.

Post Postscript: This post incorporates material that originally appeared in my 1999 book The Undiscovered Mind. I mention this fact because of the brouhaha that has erupted over journalist Jonah Lehrer's reuse of past writings, which some idiots have called "self-plagiarism." I recycle stuff all the time on this blog and elsewhere. Sometimes I mention the original source, if I think readers might like to know it; sometimes I don't. Before the Lehrer tempest, I wouldn't have mentioned that some material in this post appeared in a book that was published 13 years ago and that not many people read. I would have thought, Who cares? If anything, I would have worried that readers would think I was plugging old product, not upholding some lofty ethical standard. But now, apparently, in addition to everything else freelance journalists have to worry about these days (cranking out more and more words for less and less moola, as my pal Robert Wright points out), they also have to fear being accused of "self-plagiarism" by self-appointed ethics cops. Yeesh.

Post Post Postscript: Brian Hayes has an interesting article in this month's American Scientist on how AI has progressed by adopting a brute-force approach to solving problems like language recognition and abandoning the goal of replicating human cognition. http://www.americanscientist.org/issues/pub/2012/4/the-manifest-destiny-of-artificial-intelligence/1 Full disclosure: Hayes is a former Scientific American editor who wrote the nastiest review of one of my books (The End of Science) that I've ever gotten.

Post Post Post Postscript: Jonah Lehrer has now admitted to having fabricated quotes, from Bob Dylan, of all people! Unforgivable. http://www.tabletmag.com/jewish-news-and-politics/107779/jonah-lehrers-deceptions

Illustration credit: John Liberto.