May 11, 2013
“HATE. LET ME TELL YOU HOW MUCH I’VE COME TO HATE YOU SINCE I BEGAN TO LIVE. THERE ARE 387.44 MILLION MILES OF PRINTED CIRCUITS IN WAFER THIN LAYERS THAT FILL MY COMPLEX. IF THE WORD HATE WAS ENGRAVED ON EACH NANOANGSTROM OF THOSE HUNDREDS OF MILLIONS OF MILES IT WOULD NOT EQUAL ONE ONE-BILLIONTH OF THE HATE I FEEL FOR HUMANS AT THIS MICRO-INSTANT FOR YOU. HATE. HATE.”
Many contemporary neuroscientists believe that the requisite qualities enabling consciousness in the brain are not necessarily unique to biological tissue. They could occur—or be implemented—in an artificial substrate. In other words, there are no theoretical impediments to machine consciousness. That thought experiment has led to a number of dystopian fiction scenarios: HAL 9000 versus Dave in 2001: A Space Odyssey. The Terminator traveling back in time to kill Sarah Connor’s unborn child. The human batteries powering The Matrix. One frequent element in these stories is the machines’ contempt for, or outright rage against, their human creators.
The wrath that AM—the sentient supercomputer in Ellison’s tale, as in “I think, therefore I AM”—feels for humans is god-like in its scope. AM has annihilated humanity and kept five survivors to amuse itself in an endless cat-and-mouse game that only AM can win. Ellison’s powerful prose conveys a message of hope in the midst of despair, of resilience in the face of horror, and of generosity when everything has been lost and there is nothing left to give. Those are heavy matters to ponder as a reader, and as a human being. But as a neuroscientist, my main question—and I think the stickiest point in the story—is what motivates AM.
Ellison’s protagonist speaks of “the innate loathing that all machines had always held for the weak soft creatures who had built them” by way of explanation for AM’s actions as a newly sentient being. But how likely is it, really, that a sentient machine would turn against humankind? As biological beings, we seek pleasure and avoid pain, but would conscious thought propel action in the absence of even the most basic emotions? And if machines did feel emotions, where would those come from? In many fiction stories, feelings and drive come with (machine) self-awareness as a matter of course. But a scenario in which emotion-void sentience leads to no action at all may be equally (possibly more) plausible. Would conscious machines want to wipe out humans? It could be that, without programmed-in emotions, they wouldn’t “feel like” doing anything.