
The Erosion of Reality

The most immediate AI threat may be the distortion of truth - something we, and other species, have been doing for a long time

This article was published in Scientific American's former blog network and reflects the views of the author, not necessarily those of Scientific American.


Let me say this upfront: I'm not convinced that 'superintelligent' AIs are the most pressing threat from coming generations of deep learning machines. Indeed, the entire notion of superintelligence may be nothing more than a philosophical 'what if' hypothesis. We simply do not know whether such a thing can in fact be made, developed, or evolved into existence - here on Earth or elsewhere in the cosmos.

Right now we don't even have a convincing quantitative theory of intelligence: one that tells us what we really mean by intelligence ('oh look, it can open a can of beans'), how intelligence actually scales with complexity, and whether or not there is a theoretical maximum.

It could be that intelligence follows an S-like curve of growth (a logistic function), like so many natural (and unnatural) phenomena. A logistic curve can start out with exponential growth, but then flattens out as things saturate. A simple example is idealized population growth, where a rapid increase in the number of organisms plays off against the availability of food or resources, ultimately leveling off.
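The shape described above is easy to see numerically. Here is a minimal sketch of a standard logistic function; the parameter names and values are illustrative, chosen only to show the early near-exponential regime and the later saturation:

```python
import math

def logistic(t, carrying_capacity=1.0, growth_rate=1.0, midpoint=0.0):
    """Logistic function: near-exponential growth at first,
    then saturation toward the carrying capacity."""
    return carrying_capacity / (1.0 + math.exp(-growth_rate * (t - midpoint)))

# Early on (far left of the midpoint), each step multiplies the value
# by roughly e**growth_rate - exponential-looking growth.
early = [logistic(t) for t in (-6, -5, -4)]

# Far along the curve, successive steps barely move the value:
# the curve flattens toward the carrying capacity.
late = [logistic(t) for t in (4, 5, 6)]
```

Whether intelligence obeys anything like this is, as the article argues, unknown; the sketch only illustrates why exponential-looking early growth is compatible with an eventual plateau.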

It's not hard to imagine that for intelligence there might indeed be a regime of exponential growth, but this is going to demand exponential growth in complexity, and perhaps most critically in the efficiency, connectivity, intercommunication, and data flow of the parts (be they biological or machine). So it's also not hard to see that intelligence as we presently know it could saturate.

But where is the exponential part of the curve? Is it in front of us, or behind us already? And will our machines, our AIs, saturate sooner than we might suppose? I think we're currently in a place of profound ignorance. For all we know, human intelligence is already close to the universal maximum possible. I suspect the answers can only come from experiment, or the development of that fundamental theory of intelligence.

Thus, any proposal that alien intelligences are 'superintelligent' agents should be taken as provocative guesswork at this stage. Indeed, my guess is that it's more likely that the most abundant types of intelligences in the universe are 'savant' machines - the cousins of our present specialists like Google's AlphaGo and its descendants.

For all of these reasons I'm not as worried about super smart AIs here on Earth as I am about comparatively dumb AIs whose purpose (or incidental ability) is to manipulate our relationship with information, with facts, and with reality as we perceive it. That could be the greatest threat.

Using techniques like adversarial learning we're already witnessing AI that can mimic our voices to near perfection. Similar approaches could presumably be applied to our style of writing, texting, and social media posting. Spoofing our appearance in photos, or generating video in which we seemingly do things we never actually did are also in the pipeline (like this extraordinary fake clip of President Obama). 

These systems are likely to go further (if they haven't already, it's hard to keep up with developments). Why not generate entire news stories or gossip columns with an AI? Hollywood tabloids hardly require facts anyway, and mainstream news outlets sometimes seem to follow suit.

The potential for misleading any of us is extraordinary: stealing our personal details by fooling us, or creating an alternate version of us that undertakes any manner of antisocial, even criminal acts, leaving us the target of retribution or the legal system. Or simply manipulating us to predispose us to wanting certain goods, or voting a certain way, or believing certain things. The first evangelical AI will easily outdo even the most outrageous human preachers.

And unlike hypothetical superintelligences (whose motivations are hard to imagine), using AIs to exploit people or societies follows an ancient pattern.

We humans have arguably been eroding our own reality from the moment our hominin ancestors and cousins started communicating and storytelling. A good fireside tale may help maintain a verbal history, or articulate moral and social rules, helping bring cohesion to our families and groups. But it can, perhaps inevitably, mislead, distort, and manipulate.

This behavior isn't even confined to our 'intelligent' species. Deception is splashed across the natural world with abandon. Animals camouflage themselves, or pretend to be things they aren't - from mimicking the looks of poisonous species to puffing up feathers, scales, or skin, or parasitically dumping offspring for other species to worry over. Males of many species adorn themselves or build alluring structures, and resort to utter subterfuge in the effort to propagate their genes. Cheating seems to be as much a part of Darwinian selection as honesty is. There is a measure of evolutionary fitness in the ability to mislead.

The odds don't seem high that machines will be any better, whether by design or through their own selective pressures - if winning is all that matters, well, watch out.

Of course, as the physicist Niels Bohr said, good predictions are awfully difficult to make, especially when they're about the future. But one thing is sure: we're going to learn a lot about the trajectory that intelligence may take anywhere in the universe - assuming we can see the truth at all.