In 1997 IBM’s Deep Blue famously defeated chess Grandmaster Garry Kasparov after a titanic battle. It had actually lost to him the previous year, though he conceded that it seemed to possess “a weird kind of intelligence.” To play Kasparov, Deep Blue had been pre-programmed with intricate software, including an extensive playbook with moves for the opening, middle game and endgame.
Twenty years later, in 2017, Google unleashed AlphaGo Zero which, unlike Deep Blue, was entirely self-taught. It was given only the basic rules of the far more difficult game of Go, without any sample games to study, and worked out all its strategies from scratch by playing millions of times against itself. This freed it to think in its own way.
These are the two main sorts of AI around at present. Symbolic machines like Deep Blue are programmed to reason as humans do, working through a series of logical steps to solve specific problems. An example is a medical diagnosis system in which a machine deduces a patient’s illness from data by working through a decision tree of possibilities.
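The decision-tree approach can be sketched in a few lines of code. This is a toy illustration only: the symptoms and diagnoses below are hypothetical placeholders, not real medical rules, but the structure — a machine walking through hand-written logical branches — is the essence of the symbolic approach.

```python
# A toy symbolic diagnosis system: every rule is written in advance
# by a human, and the machine simply follows the branches.
# The symptoms and conditions here are hypothetical, for illustration.
def diagnose(symptoms: set) -> str:
    if "fever" in symptoms:
        if "rash" in symptoms:
            return "possible measles"
        return "possible flu"
    if "cough" in symptoms:
        return "possible cold"
    return "no diagnosis"

print(diagnose({"fever", "rash"}))  # → possible measles
```

Everything this program can ever conclude was put there by its programmer; it cannot discover a branch no one wrote.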
Artificial neural networks like AlphaGo Zero are loosely inspired by the wiring of the neurons in the human brain and need far less human input. Their forte is learning, which they do by analyzing huge amounts of input data, or rules such as those of chess or Go. They have had notable success in recognizing faces and patterns in data and also power driverless cars. The big problem is that scientists don’t yet know why they work as they do.
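The contrast with the decision tree is that nothing here is hand-written as a rule. As a minimal sketch of the principle — adjust weights to reduce error on examples — here is a single perceptron, the simplest ancestor of a neural network, learning the logical OR function from labeled data. (AlphaGo Zero’s networks are vastly larger, but the idea of learning from examples rather than being programmed is the same.)

```python
# A single perceptron learns OR from four labeled examples.
# No rule for OR is ever written down; the weights discover it.
def train_perceptron(examples, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in examples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out          # how wrong was the guess?
            w[0] += lr * err * x1       # nudge weights toward the answer
            w[1] += lr * err * x2
            b += lr * err
    return w, b

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
print([predict(x1, x2) for (x1, x2), _ in data])  # → [0, 1, 1, 1]
```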
But it’s the art, literature and music that the two systems create that really points up the difference between them. Symbolic machines can create highly interesting work, having been fed enormous amounts of material and programmed to do so. Far more exciting are artificial neural networks, which actually teach themselves and which can therefore be said to be more truly creative.
Symbolic AI produces art that is recognizable to the human eye as art, but it’s art that has been pre-programmed. There are no surprises. Harold Cohen’s AARON algorithm produces rather beautiful paintings using templates that have been programmed into it. Similarly, Simon Colton at Goldsmiths, University of London programs The Painting Fool to create a likeness of a sitter in a particular style. But neither of these ever leaps beyond its program.
Artificial neural networks are far more experimental and unpredictable. The work springs from the machine itself without any human intervention. Alexander Mordvintsev set the ball rolling with his Deep Dream, whose nightmare images, spawned from convolutional neural networks (ConvNets), seem almost to spring from the machine’s unconscious. Then there’s Ian Goodfellow’s GAN (Generative Adversarial Network), with the machine acting as the judge of its own creations, and Ahmed Elgammal’s CAN (Creative Adversarial Network), which creates styles of art never seen before. All of these generate far more challenging and difficult works—the machine’s idea of art, not ours. Rather than being a tool, the machine participates in the creation.
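The adversarial idea behind the GAN — a generator trying to fool a discriminator, which in turn learns to tell real from fake — can be shown at a deliberately tiny scale. The sketch below is not Goodfellow’s actual architecture: both “networks” are reduced to a single linear unit each, and the “art” is just numbers drawn from a bell curve. But the tug-of-war structure is the real one.

```python
import math
import random

random.seed(0)

def sigmoid(t):
    t = max(-30.0, min(30.0, t))  # clamp to avoid overflow
    return 1.0 / (1.0 + math.exp(-t))

# Toy GAN: real data is drawn from a Gaussian centered at 4.
# Generator: fake = g_w * z + g_b (z is random noise).
# Discriminator: D(x) = sigmoid(d_w * x + d_b), probability x is "real".
g_w, g_b = 1.0, 0.0
d_w, d_b = 0.1, 0.0
lr, n = 0.05, 16

for step in range(3000):
    zs = [random.gauss(0, 1) for _ in range(n)]
    fake = [g_w * z + g_b for z in zs]
    real = [random.gauss(4, 1) for _ in range(n)]

    # Discriminator step: label real samples 1, fakes 0 (log-loss gradient).
    for batch, label in ((real, 1.0), (fake, 0.0)):
        gw = gb = 0.0
        for x in batch:
            err = sigmoid(d_w * x + d_b) - label
            gw += err * x
            gb += err
        d_w -= lr * gw / n
        d_b -= lr * gb / n

    # Generator step: adjust G so the discriminator calls its fakes "real".
    gw = gb = 0.0
    for z, x in zip(zs, fake):
        err = (sigmoid(d_w * x + d_b) - 1.0) * d_w  # chain rule through D
        gw += err * z
        gb += err
    g_w -= lr * gw / n
    g_b -= lr * gb / n

# After training, the generator's output center (g_b) has been pulled
# toward the real data's mean of 4 — it was never told that target.
print(g_b)
```

The point of the sketch is that the generator never sees the real data directly; it improves only by reading the judge’s verdicts, which is what makes the results of full-scale GANs so unpredictable.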
In AI-created music the contrast is even starker. On the one hand, we have François Pachet’s Flow Machines, loaded with software to produce sumptuous original melodies, including a well-reviewed album. On the other, researchers at Google use artificial neural networks to produce music unaided. But at the moment their music tends to lose momentum after only a minute or so.
AI-created literature illustrates best of all the difference in what can be created by the two types of machines. Symbolic machines are loaded with software and rules for using it and programmed to generate material of a specific sort, such as Reuters’ news reports and weather reports. A symbolic machine equipped with a database of puns and jokes generates more of the same, giving us, for example, a corpus of machine-generated knock-knock jokes. But as with art, their literary products are in line with what we would expect.
Artificial neural networks have no such restrictions. Ross Goodwin, now at Google, trained an artificial neural network on a corpus of scripts from science fiction films, then instructed it to create sequences of words. The result was the fairly gnomic screenplay for his film Sunspring. With such a lack of constraints, artificial neural networks tend to produce work that seems obscure—or should we say “experimental”? This sort of machine ventures into territory beyond that of our present understanding of language and can open our minds to a realm often designated as nonsense. NYU’s Allison Parrish, a composer of computer poetry, explores the line between sense and nonsense. Thus, artificial neural networks can spark human ingenuity. They can introduce us to new ideas and boost our own creativity.
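Goodwin used a recurrent neural network for Sunspring; as a far simpler stand-in for “train on a corpus, then generate word sequences,” the sketch below uses a word-level Markov chain. It records which word tends to follow which in a (made-up) snippet of script-like text, then walks those statistics to produce a new sequence — often grammatical-ish, often gnomic, much like the machine-written screenplay.

```python
import collections
import random

# A hypothetical scrap of "screenplay" text stands in for Goodwin's
# corpus of science fiction scripts.
corpus = ("he looks at the screen and the screen looks back "
          "at him and he turns away from the screen").split()

# Learn the transitions: for each word, which words ever follow it?
chain = collections.defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    chain[a].append(b)

# Generate: start somewhere and follow the learned transitions.
random.seed(1)
word, out = "he", ["he"]
for _ in range(8):
    word = random.choice(chain.get(word, corpus))
    out.append(word)
print(" ".join(out))
```

Every generated word was seen in training, but the sequence as a whole may never have existed — a miniature version of the obscure-or-experimental quality described above.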
Proponents of symbolic machines argue that the human brain too is loaded with software, accumulated from the moment we are born, which means that symbolic machines can also lay claim to emulating the brain’s structure. Symbolic machines, however, are programmed to reason from the start.
Conversely, proponents of artificial neural networks argue that, like children, machines need first to learn before they can reason. Artificial neural networks learn from the data they’ve been trained on but are inflexible in that they can only work from the data that they have.
To put it simply, artificial neural networks are built to learn and symbolic machines to reason, but with the proper software each can do a little of the other. An artificial neural network powering a driverless car, for example, needs to have the data for every possible contingency programmed into it so that, when it sees a bright light ahead, it can tell a bright sky from a white vehicle and avoid a fatal accident.
What is needed is to develop a machine that includes the best features of both symbolic machines and artificial neural networks. Some computer scientists are currently moving in that direction, looking for options that offer a broader and more flexible intelligence than neural networks by combining them with the key features of symbolic machines.
At DeepMind in London, scientists are developing a new sort of artificial neural network that can learn to form relationships in raw input data and represent it in logical form as a decision tree, as in a symbolic machine. In other words, they’re trying to build in flexible reasoning. In a purely symbolic machine all this would have to be programmed in by hand, whereas the hybrid artificial neural network does it by itself.
In this way combining the two systems could lead to more intelligent solutions and also to forms of art, literature and music that are more accessible to human audiences while also being experimental, challenging, unpredictable and fun.