Cross-Check: Critical views of science in the news

Can the Turing Test Help Us Know Whether a Machine Is Really Thinking?

The views expressed are those of the author and are not necessarily those of Scientific American.


Chillin’ with my children in London recently, I kept a lookout for blog topics, and I found one: “Codebreaker: Alan Turing’s Life and Legacy,” an exhibit at the city’s Science Museum. The British mathematician Turing, born exactly a century ago, laid down the theoretical foundations of computer science and helped design one of the first computers, the Automatic Computing Engine, or ACE. During World War II, he helped crack the German Enigma code, a vitally important achievement for the Allied war effort. British authorities rewarded Turing by arresting him for homosexuality in 1952 and forcing him to undergo “chemical castration,” which involved injections of estrogen. In 1954 Turing killed himself by ingesting cyanide.

I want to focus not on Turing’s tragic demise but on one of his enduring contributions to philosophy. In his era, scientists and philosophers—as well as sci-fi writers—were already pondering whether computers were just calculating devices, like complicated abacuses, or could “think” more or less as we humans do. In a 1950 article, “Computing Machinery and Intelligence,” Turing proposed a simple empirical method—which he called “the imitation game” but is now known as “the Turing test”—for resolving the question. In one room is a human “interrogator,” and in other rooms are two “competitors,” one a human and the other a computer. The interrogator types out questions that are transmitted to the competitors. (Today, of course, voice-recognition technology has become good enough for questions to be submitted orally.) If the interrogator can’t tell which answers come from the human and which from the computer, then the computer must be thinking. Proponents of “strong AI” argue that such a computer isn’t just mindlessly, mechanically cranking out answers; it possesses subjective awareness, just as we do.
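The setup can be sketched in a few lines of Python. This is only a toy rendering of Turing’s protocol; the names here (imitation_game, the interrogate callback) are my own invention, not Turing’s.

```python
import random

def imitation_game(interrogate, human_answer, machine_answer, questions):
    """Return True if the interrogator fails to spot the machine."""
    # Randomly assign the two competitors to the anonymous labels A and B.
    pair = [("human", human_answer), ("machine", machine_answer)]
    random.shuffle(pair)
    labels = dict(zip("AB", pair))
    # The interrogator sees only labeled question-and-answer transcripts.
    transcript = {lbl: [(q, answer(q)) for q in questions]
                  for lbl, (kind, answer) in labels.items()}
    guess = interrogate(transcript)  # the interrogator names "A" or "B"
    return labels[guess][0] != "machine"

# A machine whose answers give it away is caught every time:
spot = lambda t: next(l for l, qa in t.items()
                      if any("beep" in a for _, a in qa))
print(imitation_game(spot, lambda q: "Blue, I suppose.",
                     lambda q: "beep boop",
                     ["What is your favorite color?"]))  # → False: not fooled
```

The machine “passes” only when the function returns True, that is, when nothing in its transcript lets the interrogator tell it apart from the human.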

The philosopher John Searle presented a famous challenge to the Turing test, called the Chinese room thought experiment, in 1980. Searle compared a computer undergoing a Turing test to an English-speaking man in a room who doesn’t understand Chinese but has a manual for converting Chinese questions or commands into appropriate Chinese responses. The man receives a string of Chinese characters that, unbeknownst to him, means, let’s say, “What is your favorite color?” His manual tells him that when he receives these symbols, he should respond with another string of symbols that, again unbeknownst to him, means “blue.” In the same way, Searle contended, computers mindlessly manipulate symbols without understanding their meaning; computers are not really thinking as we humans do.

To my mind, Searle has not rebutted the strong AI position with his thought experiment. Instead, he has merely pointed out, implicitly, how difficult it would be for a computer to pass the Turing test. A manual that listed all the possible questions that could be posed in Chinese, together with plausible-sounding responses, would be almost infinitely long. How could the man possibly respond to incoming questions fast enough to convince those outside the room that he actually understands Chinese? If he pulls off this feat—perhaps by tossing in a joke, like, “I’m a Chinese communist, so my favorite color is red!”—you might reasonably conclude that he does in fact understand Chinese, even if he insists he doesn’t. You might reasonably conclude the same thing about a computer, if it can answer all your questions as rapidly and quirkily as an intelligent human. (The speed issue can cut both ways. As Turing pointed out, one quick way to distinguish an ordinary human from a computer would be to ask the competitors to add 34,957 to 70,764.)
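Turing’s own sample dialogue in the 1950 paper dramatizes the speed point: asked to add 34,957 to 70,764, his imagined machine pauses about thirty seconds and then answers 105,621—deliberately wrong, to imitate human fallibility. A toy sketch (the function names are mine, not Turing’s):

```python
import time

def honest_machine(a, b):
    # A real computer sums instantly and exactly -- a giveaway.
    return a + b

def imitating_machine(a, b, pause=0.0):
    # Turing's machine waits and errs on purpose; the pause defaults to 0
    # so the sketch runs quickly (he suggested about 30 seconds).
    time.sleep(pause)
    return a + b - 100  # off by 100, matching Turing's sample answer

print(honest_machine(34957, 70764))     # → 105721
print(imitating_machine(34957, 70764))  # → 105621
```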

Here is the more basic flaw of Searle’s argument: it assumes that in some cases we just know whether another creature—like the man trying to decode Chinese—is really capable of the subjective state that we call “understanding.” But we never know for sure, because of the solipsism problem, which stems from the fact that no sentient entity has direct, first-hand access to, and hence knowledge of, the subjective state of any other sentient entity. As I wrote recently in a column on cats, each of us is sealed in the chamber of his subjective consciousness. I can’t be sure that you, reader, or any other human, let alone a bat or cat or iPhone or toaster oven, is truly conscious. All I can do is make reasonable assumptions based on the behavior of such entities. That is the whole point of the Turing test. To the extent that their behavior resembles mine, I grant that they’re probably conscious, because I know I’m conscious.

In his 1950 essay Turing acknowledged that, strictly speaking, the only way to be sure a machine thinks “is to be the machine and to feel oneself thinking. One could then describe these feelings to the world, but of course no one would be justified in taking any notice. Likewise according to this view the only way to know that a man thinks is to be that particular man. It is in fact the solipsist point of view. It may be the most logical view to hold but it makes communication of ideas difficult.”

Although I reject Searle’s objection to the Turing test, I have an objection—or reservation—of my own, which comes from my observation that we humans are awfully prone to anthropomorphism, the projection of human characteristics onto non-human and even inanimate things. This tendency stems from what psychologists call our theory-of-mind capacity, our innate ability—which manifests itself in most of us by the age of three or so—to intuit the states of mind of others. The theory of mind is vital for our social development; people with autism are believed to have impaired theory-of-mind capacities. But many of us have the opposite problem. Our theory-of-mind capacities are so strong that we impute human intelligence, intentions and emotions even to non-human things, like cats, cars and computers.

This phenomenon provides the subtext of the iPhone ads showing the actor John Malkovich flirting with Siri, the iPhone program. Malkovich clearly likes—I mean, really likes—Siri! He laughs at her joke! Tells her she’s funny! But she’s not real! She’s just a piece of software! Ha ha! (See this spoof of the iPhone Malkovich ad, in which Siri keeps telling more and more outrageous jokes to make the stone-faced Malkovich laugh.)

The Siri ad may seem silly, but our tendency to anthropomorphize machines is quite real. In her classic 1979 book about AI, Machines Who Think, Pamela McCorduck described a scene that took place in an AI lab at Stanford in the 1970s, when a visiting Russian scientist had a typed conversation with a computer program named ELIZA, which was designed to mimic a psychotherapist. ELIZA’s responses involved simply turning statements of the human patient back into leading questions. For example, if you said, “I’m feeling a little anxious lately,” ELIZA would ask, “Why are you feeling a little anxious lately?”
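The reflection trick is easy to caricature in code. What follows is my own bare-bones simplification of the rule described above, not Weizenbaum’s actual script machinery, which used a richer library of keyword-ranked patterns.

```python
import re

def eliza_respond(statement):
    """Turn a first-person statement back into a leading question."""
    s = statement.strip().rstrip(".!")
    m = re.match(r"(?i)i[' ]?a?m\s+(.*)", s)  # matches "I'm X" or "I am X"
    if m:
        return f"Why are you {m.group(1)}?"
    # Anything the script can't reflect gets a generic therapist's prompt.
    return "Can you tell me more about that?"

print(eliza_respond("I'm feeling a little anxious lately."))
# → Why are you feeling a little anxious lately?
```

A single pattern like this is enough to reproduce the exchange in the paragraph above, which is exactly the point: the program understands nothing, yet the responses feel attentive.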

The exchange at Stanford began with ELIZA asking the Russian, “What brought you here to see me today?” The Russian replied, “Oh, nothing much, I’m feeling a little tired, that’s all.” Before long, as McCorduck and several other scientists watched, the Russian began pouring out his heart to ELIZA, confessing his concerns for his wife and children. “We watched in painful embarrassment,” McCorduck wrote, “trying hard not to look, yet mesmerized all the same.” The Turing test, in other words, says more about our minds than it does about the mind—or lack thereof—of a computer. This is not to say that a computer can’t think. It is only to say that, no matter how far machines progress, we may never know what, if anything, it is like to be a machine.

Postscript: I highly recommend Turing’s 1950 essay. Check out in particular the section in which Turing discussed how extrasensory perception might complicate the Turing test.  The evidence for ESP, Turing asserted, is “overwhelming.” “If telepathy is admitted,” he wrote, “it will be necessary to tighten our test up. The situation could be regarded as analogous to that which would occur if the interrogator were talking to himself and one of the competitors was listening with his ear to the wall. To put the competitors into a ‘telepathy-proof room’ would satisfy all requirements.” I wish telepathy were real, because it would represent a breach in our solipsistic isolation from each other. But I’m a psi skeptic.

Post Postscript: This post incorporates material that originally appeared in my 1999 book The Undiscovered Mind. I mention this fact because of the brouhaha that has erupted over journalist Jonah Lehrer’s reuse of past writings, which some idiots have called “self-plagiarism.” I recycle stuff all the time on this blog and elsewhere. Sometimes I mention the original source, if I think readers might like to know it, sometimes I don’t. Before the Lehrer tempest, I wouldn’t have mentioned that some material in this post appeared in a book that was published 13 years ago and that not many people read. I would have thought, Who cares? If anything, I would have worried that readers would think I was plugging old product, not upholding some lofty ethical standard. But now, apparently, in addition to everything else freelance journalists have to worry about these days—cranking out more and more words for less and less moola, as my pal Robert Wright points out—they also have to fear being accused of “self-plagiarism” by self-appointed ethics cops. Yeesh.

Post Post Postscript: Brian Hayes has an interesting article in this month’s American Scientist on how AI has progressed by adopting a brute-force approach to solving problems like language recognition and abandoning the goal of replicating human cognition. Full disclosure: Hayes is a former Scientific American editor who wrote the nastiest review of one of my books (The End of Science) that I’ve ever gotten.

Post Post Post Postscript: Jonah Lehrer has now admitted to having fabricated quotes—from Bob Dylan, of all people! Unforgivable.

Illustration credit: John Liberto.

John Horgan About the Author: Every week, hockey-playing science writer John Horgan takes a puckish, provocative look at breaking science. A teacher at Stevens Institute of Technology, Horgan is the author of four books, including The End of Science (Addison Wesley, 1996) and The End of War (McSweeney's, 2012). Follow on Twitter @Horganism.



Comments (7)

  1. spcraft 7:41 am 07/13/2012

    While I agree that the Chinese Room argument isn’t a bulletproof rebuttal of strong AI, the Turing Test also isn’t a good test *for* strong AI. At its core, the Test is based on deception: a machine can be said to “think,” by Turing’s definition, if it’s programmed cleverly enough to fool the judge into believing that it, the machine, is thinking. And while you rightly claim that we can’t know the subjective consciousness of another sentient being, we *can* know the “mind” of a computer, in the same way that we can know the “mind” of a toaster: we can know, to a very high degree of precision, every single component of its thinking apparatus. We can track every logic gate, watch every flip-flop, and read every line of code. We know exactly what’s going on inside a computer, and we can say definitively — because we designed it — that it is not conscious.

    Of course, you could make a similar argument for the human mind. If we had a sufficiently detailed map of the mind, we could feasibly track every neuron, and maybe we could find the pattern of synaptic connections that corresponds to consciousness (or at least memory or knowledge or perception). But the human mind is orders of magnitude more complex than a CPU, vastly more subtle and nuanced. Consciousness arises *somewhere* in this complicated tangle of brain-stuff, and while we can’t (yet) say where that is, we *can* say that computers lack a similar complexity, and lack consciousness. By relying on a single, simple criterion — conversation — the Turing Test fails to address this underlying complexity.

    My own post-script: I completely agree that the self-plagiarism thing is absurd, but you mentioning the original source *does* make me want to go find a copy of The Undiscovered Mind. :)

  2. jimh 1:33 pm 07/13/2012

    I’ve never been too impressed by the Turing Test. Imagine a program capable of passing the test has been created. I sit in front of a computer running this software, I initiate a conversation, and I’m duly impressed by the system’s responses. I get up and leave the room. What is the machine doing now?

  3. rgcorrgk 2:38 am 07/14/2012

    A computing “intelligence”, a type of thinking, or problem solving (tree roots find water, ants solve problems, a group of humans finds the Higgs) are overlapping activities in many ways; and of course, they are what you define them to be. The question is where the line is drawn. We (and a number of living creatures) have something we call intelligence. On the other side, an abacus or simple computer is a tool—made by humans for problem solving—and is an extension of intelligence, not an example of intelligence. That some fancy high-powered computer could be operated so as to fool a human into thinking it is human is, excuse the bluntness, a stupid place to draw a line (also, there is no evidence for ESP or telepathy, sorry, both are pure nonsense).
    Some 40 or so years ago, after a bit of thought, it came to me where this line lies! It is a bright demarcation line.

    Richard Carlson

  4. DonPaul 10:14 pm 07/14/2012

    The Turing Test is no test at all, as it has no physical reference. Searle’s objections to thinking machines are no better, for the very same reasons. In fact, the whole enterprise of “AI”, strong or not, is bankrupt, since no understanding of what exactly thinking is has been tackled in the first place. If you would like a fresh take on this old conundrum, Mr. Horgan, you could read (carefully and thoughtfully) my book “Mind Made Real”. (P.S. You will find your text, The Undiscovered Mind, listed in the bibliography.)

  5. subterra 12:44 am 07/15/2012

    “Won’t you have a nice toasted muffin? You know, you really should try a muffin. How about a croissant?”

    “Are we having fun yet?”

  6. driwatson 9:59 pm 07/15/2012

    Good article. There’s another reason to reject Searle’s Chinese Room argument. In Turing’s 1950 paper “Computing Machinery and Intelligence” he calls his test for machine intelligence “The Imitation Game.” Subsequently this has become known as the Turing Test. I don’t believe Turing called it the Imitation Game by accident. It is a test to see if a computer can “imitate” intelligence. Turing never claims the computer will “be” intelligent, but rather that its behavior will seem intelligent to an observer.
    For some reason Searle, and to be fair the entire AI community, lost this distinction when they started calling it the Turing Test. There’s more about Turing’s remarkable legacy in my SciAm blog.

  7. Mark Pine 1:21 pm 07/20/2012

    Apparently, Horgan despairs of knowing the true nature of a machine’s or another person’s internal state, whether it is conscious or not. His quandary, however, results from a mistaken notion of what consciousness is. In his graphic phrase, the “sealed chamber of the subjective consciousness,” he goes a long way toward revealing his misconception.

    Like Tufts University philosopher Daniel Dennett, who explained consciousness in terms of the metaphor of “the movie in the brain,” Horgan seems to think of consciousness as a thing located inside a human brain or, possibly, a computer casing. However, consciousness is not that kind of thing.

    Imagine two people in deep conversation. They discuss an idea important to both of them. As they converse, each gains a better understanding of the other person’s point of view. Each forms a mental picture of what the other is experiencing consciously. As the dialogue progresses, as long as it continues, their mental pictures of the ideas and feelings in the other’s mind become the same picture. The consciousness of one comes to resemble that of the other, a single picture that they share. The deeper their understanding, the more their consciousnesses overlap and merge.

    The consciousnesses of two such people do not stay confined in the separate chambers of their individual brains but spread between them and join. I believe that consciousness behaves like wave functions in quantum mechanics. A person’s consciousness is associated with his human brain in the same way that a wave function is associated with a particle, as described in quantum mechanics. Consciousness, like a mathematical wave function, is a subjective and non-material thing. A brain, on the other hand, is material; it is made of particles of matter. I believe that the relationship, described in QM, of a wave function and its associated particle also describes the relationship of an individual consciousness and its associated brain.

    Machine consciousness, it seems to me, is not fundamentally different from human consciousness. Machines, like brains and everything else that exists, are associated with wave functions. At some future time, when machines become capable of carrying on extended, deep, and wide-ranging discussions with their human counterparts, skepticism about machine consciousness will fade. Computers, like human beings, will simply be part of the conversation.

