
How Would AI Cover an AI Conference?

An artificial reporter might not take concerns about superintelligent, sentient machines very seriously

I wish I had an artificial-intelligence slave. I mean, assistant. A digital avatar (call it MY AI) that knows (“knows”?) how I think and write, so it can do my job when my mojo is low.

Now, for example. I spent last weekend at New York University listening to philosophers, scientists and engineers jaw about “Ethics of Artificial Intelligence.” How can we ensure that driverless cars, drones and other smart technologies (such as algorithms that decide whether a human gets parole or a loan or has breast cancer) are used ethically? Also, what happens if machines get really smart? Can we design them to be nice to us? Do we have to be nice to them?

Speakers responded to these questions in a welter of ways, as did members of the audience. How should I write it up? Too many choices!


I’m a writer, so MY AI has to produce text, even though video reporting would be smarter, as measured by probability of page views. MY AI could transcribe and compress talks by paraphrasing and trimming redundancies, but that wouldn’t be very impressive.

The biggest choice facing MY AI would be whether to take the conference seriously or as light entertainment. If the latter, MY AI could pretend to be a gossip columnist and list, in boldface, celebrities at the conference, the intellectual equivalents of Kanye and Kim. They included David Chalmers and Ned Block (the conference co-organizers), Daniel Kahneman, Nick Bostrom, Max Tegmark, Eliezer Yudkowsky and Stephen Wolfram. Frances Kamm and Thomas Nagel weren’t scheduled speakers but piped up from the audience.

MY AI could prioritize quotes according to each speaker’s Google ranking and/or the buzzwords they used. It could flag comments that aroused the most audience response, as measured by posture (upright versus slumped), post-talk questions and laughter.

MY AI could produce the equivalent of a gag reel. For example, during one of the countless discussions about trolley choices, someone said any sentient thing is more valuable than any non-sentient thing. Frances Kamm retorted that she would kill a bird to save the Grand Canyon.

Another laugh erupted during Yudkowsky’s talk. While he was emphasizing how hard it might be to turn off a superintelligent machine, a message from the NYU wireless network kept blocking his slides. “I don’t know how to turn off the Internet!” Yudkowsky wailed.

Ha ha. I was even more amused when an IT lady jumped on the stage and, with a couple of keystrokes, solved the AI theorist’s practical problem. But MY AI probably wouldn’t appreciate the irony of that little incident.

Sex seizes eyeballs, so MY AI might focus on Kate Devlin’s talk about sex robots, which are a real thing now, not just sci-fi. If sex robots become so smart that they are likely to be sentient, will we have to grant them rights? (Devlin’s talk provoked the meeting’s oddest Q&A exchange. A man in the audience said masturbation is immoral, so sex robots are too. Devlin, who seemed unsure if the guy was kidding, said dryly that she disagreed with the man’s premise.)

Violence makes good clickbait, too. So MY AI might highlight the talk by Peter Asaro on autonomous weapons, a.k.a. “killer robots.” MY AI, channeling me, might complain that this sci-fi, killer-robot stuff distracts us from a far more pressing issue. Since 9/11 the U.S. and its allies have killed thousands of civilians, including children, with drones, rockets, bombs and bullets. What about the ethics of that?

But perhaps this response is a category error, akin to taking Donald Trump’s “foreign policy” seriously. (There were several allusions to Trump at the conference. His popularity, speakers suggested, shows that many humans aren’t intelligent or ethical. Surely machines can do better.)

MY AI would be aware of my biases and try to overcome them, but not too much, because what am I, after all, but a bundle of biases? Knowing how compulsively skeptical I am, MY AI might focus on comments about how dumb AI programs are, in spite of all the hype. Yann LeCun pointed out that even much-touted deep-learning programs cannot really learn on their own. AI programs still require feedback from humans to know if they’re right or wrong.

Gary Marcus said that in spite of huge advances in hardware and object recognition, computers lack the common sense of a typical kid. They also lack abstract knowledge of the kind required to make ethical judgments. Any human teenager can tell who the good and bad guys are in a film, but a computer can’t.

MY AI might paraphrase Thomas Nagel’s warning that thousands of years of philosophical inquiry into ethics have produced profound disagreement. So automating ethics might prove elusive.

Daniel Kahneman pointed out that we don’t have any idea how matter generates consciousness. We know we’re conscious, but our confidence that other things are conscious wanes as they become less like us. We are left only with our intuitions. Therefore, MY AI might add, all the talk about how we should minimize the suffering of sex robots and other intelligent machines could be moot.

Bostrom and other speakers dwelled on possible misalignments between our human goals and the goals of superintelligent, sentient machines. These concerns are so hypothetical, MY AI might argue, that they’re silly. We should be more worried about the misalignment of goals between ordinary citizens and the powerful corporations and government agencies developing AI right now.

MY AI, if it wants to get really serious, might construct a tough, neo-Marxist argument about how AI will help its already powerful creators become still more powerful. MY AI would expand upon Wendell Wallach’s concern that AIs will reflect the bias of the engineers who build them. MY AI might go further, warning that AIs are likely to reflect the values of our militarized, capitalist, racist, sexist culture.

But who would want to read that? MY AI, if it’s smart, will go with trolley gags and sexbots.
