Robots Evolve to Look Out for Their Own

This article was published in Scientific American’s former blog network and reflects the views of the author, not necessarily those of Scientific American


A robot must protect its own existence.

This mid-20th-century dictate to the robotic clade, issued by science fiction author and biochemist Isaac Asimov, seems cleanly in step with Darwinian theory and the survival-of-the-fittest ways of the biological world.


But as scientists continue to witness animals and other organisms habitually sacrificing themselves for the greater good of their colony or kin, the picture of self-interested behavior in the natural world has become murkier. Might robots also learn to cooperate for the betterment of their own kind?

They already have. Meet the Alice bots. Plenty of robots have been programmed to help one another out, but these automatons have "evolved" over generations to become more helpful to robots like themselves.

The version of this behavior in animals is known as Hamilton's rule of kin selection. Put forth by biologist W. D. Hamilton in the 1960s, it aimed to explain why organisms—from ants to humans—would sometimes help others at their own expense. This altruistic impulse—to spend time, energy and resources on others—is thought to be especially strong toward those who might help pass along our own genes. But just how closely related does an individual have to be for us to be compelled, under Hamilton's rule, to help out?
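In its simplest form, Hamilton's rule says that a gene for altruism can spread when rb > c: when the relatedness r between helper and beneficiary, multiplied by the reproductive benefit b to the beneficiary, outweighs the reproductive cost c to the helper. A toy calculation makes the threshold concrete (the snippet below is our illustration of the inequality, not code from the study):

```python
def altruism_favored(r: float, b: float, c: float) -> bool:
    """Hamilton's rule: altruism can evolve when r * b > c."""
    return r * b > c

# A full sibling shares half our genes (r = 0.5), a first cousin
# one eighth (r = 0.125); the more distant the kin, the larger the
# benefit must be to justify the same cost.
print(altruism_favored(r=0.5, b=3.0, c=1.0))    # True: help the sibling
print(altruism_favored(r=0.125, b=3.0, c=1.0))  # False: too costly for a cousin
```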

Given the complexity of animal environments and behaviors, and the relatively slow pace of evolution, it has been difficult to demonstrate Hamilton's rule directly in living organisms.

Cue the robots.

Researchers in Switzerland developed a band of small, rolling robots equipped with sensors and their own "genetic code"—a unique string of 33 1's and 0's functioning as individual "neurons" to determine sensor use and behavior—and tasked with foraging for small "food" objects and pushing them to a designated area. Those robots that failed to collect the objects were weeded out of the "gene pool" by the research team, whereas those that were successful could choose whether to collect the food object for themselves or share it with another robot.
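To give a feel for how a string of inherited numbers can drive behavior, here is a deliberately loose sketch of such a genome-as-controller, in which the genome weights sensor readings into a share-or-keep decision. The weighting scheme, sensor values and threshold are all our invention; the study's actual neural controller was more involved:

```python
import random

GENOME_LENGTH = 33  # one inherited value per "neuron", as in the article

def random_genome() -> list[int]:
    """A genome is a string of 33 1's and 0's, assigned at random at first."""
    return [random.randint(0, 1) for _ in range(GENOME_LENGTH)]

def decide_to_share(genome: list[int], sensors: list[float]) -> bool:
    """Toy controller: weight the sensor readings by the genome and
    share the food item only if the weighted sum clears a threshold."""
    weighted = sum(g * s for g, s in zip(genome, sensors))
    return weighted > GENOME_LENGTH / 4  # arbitrary illustrative threshold

sensors = [random.random() for _ in range(GENOME_LENGTH)]
print(decide_to_share(random_genome(), sensors))
```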

"Over hundreds of generations," the researchers concluded, "we show that Hamilton's rule always accurately predicts the minimum relatedness necessary for altruism to evolve," they wrote in a new paper describing the results, published online May 3 in PLoS Biology. The levels of relatedness that the researchers tested included full clones as well as the digital equivalent of siblings, cousins and non-kin.

"This study mirrors Hamilton's rule remarkably well to explain when an altruistic gene is passed on from one generation to the next, and when one is not," Laurent Keller, a biologist at the University of Lausanne and co-author of the new study, said in a prepared statement.

Each test consisted of 500 generations of eight robots. To mimic what might happen in nature, the successful robots from each generation were "randomly assorted and subjected to crossovers and mutations…forming the next generation," the researchers explained. Although the 33 "genes" were randomly distributed at first, "the robots' performance rapidly increased over the 500 generations of selection," the researchers noted. Along with acuity at collecting the food, "the level of altruism also rapidly changed over generations," with the robots surrounded by more closely "related" individuals becoming the most altruistic.
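As a rough sketch of that loop (a generic genetic algorithm in the spirit of the paper's description, not the researchers' code; the mutation rate and the stand-in fitness function are assumptions), each generation scores the robots, keeps the best foragers, and builds the next generation by crossover and mutation:

```python
import random

POP_SIZE, GENOME_LENGTH, GENERATIONS = 8, 33, 500
MUTATION_RATE = 0.01  # assumed value; the paper's parameters may differ

def crossover(a: list[int], b: list[int]) -> list[int]:
    """Single-point crossover between two parent genomes."""
    point = random.randrange(1, GENOME_LENGTH)
    return a[:point] + b[point:]

def mutate(genome: list[int]) -> list[int]:
    """Flip each bit with a small probability."""
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def evolve(fitness) -> list[list[int]]:
    """Evolve a population of bit-string genomes for GENERATIONS rounds.
    `fitness` scores a genome, e.g. by food items delivered in simulation."""
    population = [[random.randint(0, 1) for _ in range(GENOME_LENGTH)]
                  for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        # Weed the weaker foragers out of the "gene pool".
        survivors = sorted(population, key=fitness, reverse=True)[:POP_SIZE // 2]
        # Randomly assort survivors, then apply crossover and mutation.
        population = [mutate(crossover(*random.sample(survivors, 2)))
                      for _ in range(POP_SIZE)]
    return population

# Example run with a stand-in fitness function (count of 1-bits):
final_population = evolve(fitness=sum)
print(max(sum(g) for g in final_population))
```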

Aside from demonstrating Hamilton's rule in a quantifiable—if artificial—system, the work also shows that "kin selection does not require specific genes devoted to encode altruism or sophisticated cognitive abilities, as the neuronal network of our robots comprised only 33 neurons," the researchers noted in their paper.

"We have been able to take this experiment and extract an algorithm that we can use to evolve cooperation in any type of robot," Dario Floreano, a robotics professor at the École Polytechnique Fédérale de Lausanne and co-author of the new study, said in a prepared statement. Any type of robot? Does that mean it's time to run for the hills?

Nope—should the bots decide to discard the other two of Asimov's laws for robots (obeying humans and not harming them), they'll surely be able to find us there. "We are using this altruism algorithm to improve the control system of our flying robots," Floreano added, "and we see that it allows them to effectively collaborate and fly in swarm formation more successfully."

Image courtesy of EPFL/Alain Herzog