
What’s Still Lacking in Artificial Intelligence

AIs can learn, and they can beat humans at sophisticated games—but they don’t have the faculty of judgment


This article was published in Scientific American’s former blog network and reflects the views of the author, not necessarily those of Scientific American


Before we can design ethical artificial intelligence, regulate AI appropriately or allocate tasks to the right systems, we need to know what AI is. How do machines think now and what can we expect in the future? Which tasks are suited for AI, which ones are not, and why? To answer such questions, we need a nuanced understanding of different kinds of intelligence.

AI’s original take on intelligence can be traced back to Thomas Hobbes’s maxim “Reason ... is nothing but reckoning.” Interpreted as the manipulation of symbolic representations, this idea gave rise to the first generation of AI—dubbed Good Old-Fashioned AI, or GOFAI, by the late philosopher John Haugeland. A different approach to intelligence underlies contemporary deep-learning systems and other forms of second-wave AI—the systems achieving such stunning results in game-playing, facial recognition, medical diagnosis and the like.

Because both approaches involve the manipulation of representations presented to them as input, first- and second-wave AI are both still forms of reckoning. But they differ in their treatment of representation. GOFAI had its roots in classic syllogisms such as “All men are mortal; Socrates is a man; therefore, Socrates is mortal.” Logical inference systems and theorem provers were of this type—programs designed for deep, many-step inference over a small number of strongly correlated variables, based on relatively modest amounts of information.
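The syllogism above can be captured in a few lines of code. The following is a minimal, illustrative sketch of GOFAI-style symbolic inference—forward chaining over explicit rules until no new facts can be derived. The fact and rule formats are invented for this example; no real theorem prover's API is implied.

```python
# Hypothetical GOFAI-style inference: facts are (predicate, subject) pairs,
# rules map a premise predicate to a conclusion predicate.
facts = {("man", "Socrates")}
rules = [
    # "All men are mortal": whatever is a man is mortal.
    (("man",), "mortal"),
]

def forward_chain(facts, rules):
    """Repeatedly apply rules until the fact set stops growing."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            for pred, subj in list(facts):
                if pred in premises and (conclusion, subj) not in facts:
                    facts.add((conclusion, subj))
                    changed = True
    return facts

print(forward_chain(facts, rules))
# derives ("mortal", "Socrates") from the rule and the initial fact
```

Note the character of the computation: a small number of symbols, each step a logically exact deduction, chained as deeply as the rules allow.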




Second-wave AI systems do the opposite: shallow inference over a very large number of very weakly correlated variables, based on massive amounts of data. It is the latter approach that is allowing computers to recognize friends’ faces, the setting sun and oncoming cars.
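The contrast can be made concrete. The sketch below, with weights and features invented purely for illustration, shows the second-wave pattern in its simplest form: a single "shallow" inference step—a weighted sum—over thousands of individually weak signals, rather than a deep chain of logic over a few strong ones.

```python
# Hypothetical second-wave-style inference: one weighted sum over
# very many features, each only weakly correlated with the answer.
import random

random.seed(0)
n_features = 10_000
# Small random weights stand in for what a trained network would learn.
weights = [random.gauss(0, 0.01) for _ in range(n_features)]

def score(pixels):
    """Single-step inference: no chain of deductions, just aggregation."""
    return sum(w * x for w, x in zip(weights, pixels))

pixels = [random.random() for _ in range(n_features)]
print("oncoming car" if score(pixels) > 0 else "no car")
```

No one feature decides the answer; the verdict emerges from the aggregate, which is why such systems need massive amounts of data to set their weights well.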

The representations in second-wave AI are often called “distributed,” because information about many commonsense features of the world is spread out across these systems’ internal networks. Second-wave systems are also capable of retaining staggering amounts of detail rather than reducing their inputs to simple propositional statements such as “this is an apple” or “that is Lyme disease.” Beyond classifying an x-ray as showing lung cancer, for example, a second-wave AI system could capture the tumor’s density, contrast, shape and other features, all potentially relevant to drug choice or predicted outcome.

Why does this approach work better? Why does second-wave AI excel where GOFAI stumbled? The answer is ontological. How we humans parse the world into objects, properties, relations, and so on—how, as I will say, we register the world—is partially determined by our interests, our culture, our communities, our projects. The uninterpreted world is supremely messy. The objects and properties in terms of which we conceptualize our experience cannot be taken as axioms or directly “read off” this profusion, as is simplistically assumed in first-wave AI.

Rather, registering the world in relevant ways—and retaining the micro details and nuances that warrant those registrations—is an achievement of intelligence. In some cases, such as game playing, second-wave systems are learning to do this on their own. In others, we humans label the data first (“that is a stop sign” or “that is an ocelot”), and the systems learn to mimic us. But overall, this kind of pattern matching and classification is an undeniable strength of second-wave technologies.

What, then, of the human case? Will second-wave AI, amplified by faster processors, more data and better algorithms, reach AI’s holy grail of artificial general intelligence, resulting in systems equal to or surpassing humans?

No, it will not.

At their best, humans have judgment—by which I mean a seasoned ability for open-minded, deliberative thought, forged over thousands of years and in diverse cultures as a foundation for rationality, ethics and considered action. This is what we mean when we say someone “has good judgment.” It is a capacity we strive to instill in our children, a standard to which we hold adults and to which human thinking must ultimately aspire.

Judgment requires not only registering the world but doing so in ways appropriate to circumstances. That is an incredibly high bar. It requires that a system be oriented toward the world itself, not merely the representations it takes as inputs. It must be able to distinguish appearance from reality—and defer to reality as the authority.

There have to be stakes, real-world threats to which the system is vulnerable. A system with judgment must care about what it is thinking about, must be willing to go to bat for the truth, must, as Haugeland said, “give a damn.”

These abilities imply an understanding of, and commitment to, the world as a whole—a single, encompassing totality. No reckoning system understands what it is talking about; to do so, it would have to hold every object, property or state of affairs accountable to being in the world. If I were so much as to begin to think that a cup of coffee in front of me had spontaneously leaped two inches up into the air, for example—if my perceptual system were to deliver that hypothesis to my cortex—I would not believe it, would not take the evidence as compelling.

Instead I would suspect that I had blinked without realizing it, that someone had jostled my desk, that what I drank a moment ago wasn’t coffee or that an earthquake was underway. That is, I would recognize that something impossible had seemed to happen, but because impossible things do not happen, I would take that apparent impossibility as evidence of a misunderstanding or mistake. And I would seek to repair that mistake. It may be that my registration scheme has failed to do the world justice and needs to be replaced.

In order to deal appropriately with context, in other words, it is not enough simply to have a world model, a causal model or some other “preregistered” conceptual scheme. There is no total world model; there is no universal conceptual scheme. This is the ontological lesson of second-wave AI. Judgment, then, not only holds registered phenomena accountable to being in the world; it also holds registration schemes themselves accountable. Such schemes have to make the world intelligible.

To have judgment is to be able to assess any applicable registration scheme—dynamically, continuously and ever vigilantly. Think of why we entrust childcare to adults. A child who has let a catastrophe happen might protest, “But I did everything you said!” Not everything that is important can be said. No litany of saying, however extensive—no finite world model—can capture everything that is relevant.

Judgment, in sum, requires being existentially committed to the world, accountable to its reality and defended against the false or impossible. A system with judgment treats the world that hosts it, the entities it registers and the entire embedding society with deference and humility.