"Happy" and "Sad" robots walking as part of a study to assess how receptive humans are to potential robot co-workers (Source: doi: 10.3389/fpsyg.2015.00204)

Each year it seems a little less like science fiction to ask your phone for advice about local Chinese food or to trust your car to get you to a new location. Maybe you even wish you had a robot that could clean your house or fix your electronics. With the popularity of programs like Siri and movies like Her, it is clear the scientific realm of Human-Robot Interaction (HRI) is moving more into the public eye.

But just as interacting with robots has started to feel mundane in some situations, almost everyone has also had experiences where trusting technology made them feel uncomfortable. This discomfort can range from the simple preference that certain decisions be made by humans rather than machines to the feeling that a robot has ventured a little too close to being human. Whatever the reason, scientists are working hard to understand why humans experience discomfort or distrust with robots. They know that robots will be playing ever-larger roles in our lives and workplaces, and paving the way for better human-robot interactions will be important for everyone.

At the simplest level there is the question of whether there are situations in which we trust the algorithms – the decision-making and problem-solving procedures – of machines more than those of humans. One group of researchers explored this question by giving people a series of experiences in which they compared human judgments with machine algorithms at predicting the future potential of business school applicants.

In the study the humans were consistently outperformed by the machine algorithms – the machines were better able to identify which applicants would become the highest-performing students over time – but the participants were more willing to trust the human decisions anyway. These researchers claim that people experience something called algorithm aversion: avoiding or distrusting a machine algorithm after seeing it make a mistake.

Now you might think that this sounds natural; of course you would avoid something that you know makes mistakes. But the people in the study saw both the humans and the machine algorithms make mistakes, and the human errors were consistently much larger than those made by the machines. It didn’t matter. While people in the study were relatively willing to overlook or justify sometimes large errors by human decision-makers, even small mistakes on the part of the machine algorithm made people unwilling to trust it again.

Example of humanoid robot that humans might find attractive, or even uncanny, depending on its behavior (Source: doi: 10.3389/fpsyg.2015.00204)

The researchers in this study provide an excellent example: finding a way through bad traffic. Imagine you are stuck in traffic and decide to try a new route that you think might be quicker. You find out afterwards that it took even longer and recognize that you made a mistake – but you probably wouldn’t conclude that your decisions could never be trusted again. Yet if a GPS suggested a route that ended up taking longer, many people would simply conclude that the GPS is not trustworthy.

More research will be necessary to tease out precisely why humans have such an aversion to decisions based on machine algorithms, but this study provides important insight into some of the reasons. Participants felt that the machine algorithms were better at avoiding obvious mistakes, comparing attributes, and being consistent, but that humans were better at improving with practice, learning from mistakes, and finding under-appreciated candidates. As machine learning advances, it will be important for us to stop expecting perfection from machines – and to stop losing all trust in them at the smallest mistake.

These ideas are of such significance that the recent World Economic Forum in Davos, Switzerland featured a session asking exactly this question: Human vs. Artificial Intelligence – will machines make better decisions than humans?

Even if people were to accept that robots may have access to better algorithms than humans, the next step would be moving from machines that simply make decisions to robots that interact with us in daily life. This is the shift from ATMs and assessing job applications to serving as co-workers or care providers in a variety of settings.

Part of this step depends on the kinds of jobs robots would even be capable of doing. Many of the robots we currently interact with depend on machine cues, like pushing a button. But what about settings in which the cues are significantly more complicated? Research is already being done with robots capable of distinguishing body positions and eye contact in complicated settings, like determining who is standing at a bar and who is interested in placing an order.

It may be exciting to imagine a robot server in a restaurant or greeting you as you enter a hotel, but now imagine that something about the robot made the encounter uncomfortable. Maybe the robot keeps telling you good morning even though you haven’t left the lobby. Maybe the robot has a disturbing way of moving around. Maybe the robot looks a little too much like your mother, or your cat.

Representation of the "Uncanny Valley" as presented in Destephe et al (Source: doi: 10.3389/fpsyg.2015.00204)

In the world of Human-Robot Interaction, a hypothesis dating to the 1970s describes this drop in comfort as “the Uncanny Valley.” Humans feel more familiar and comfortable with robots the more human-like they are – but only up to a threshold. When a robot is almost, but not quite, human, that familiarity turns “uncanny” and our comfort plummets. Only when a robot becomes nearly indistinguishable from a human does our comfort climb back out of the uncanny and into the familiar.

Researchers are trying to understand the causes and potential solutions for this discomfort in many areas, including the idea of giving a robot a complex occupation instead of a particular task. Would you feel uncomfortable about a robot collecting trash? What about driving the truck? Monitoring an assembly line for defective parts? What about determining the shifts of the people working on that line? Would you trust a robot in emergency response, as part of an ambulance team? What about as a police officer?

One recent study asked participants to watch videos of a humanoid robot walking around and mimicking a number of human emotions. The participants then answered a series of questions about how they felt about the robot and whether they could imagine that robot with an occupation.

Comparing whether people felt it would be acceptable for a robot to have an occupation with whether they found that robot to be human-like, eerie, or attractive (Source: doi: 10.3389/fpsyg.2015.00204)

Of all the qualities that the researchers tracked, the one most strongly correlated with whether the participants could imagine the humanoid robot having an occupation was attractiveness. Given that participants also rated the robot’s eeriness, this seems surprising: participants who found the robot more attractive could more easily imagine it with an occupation, regardless of how “eerie,” “uncanny,” or “freaky” they found it.
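The kind of relationship the researchers measured – how strongly one rating tracks another across participants – is a correlation coefficient. As a rough sketch, here is how such a comparison could be computed; the ratings below are invented for illustration and are not data from the study:

```python
# Toy illustration of correlating participants' ratings of a robot.
# All numbers are hypothetical, not taken from Destephe et al.
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length rating lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical per-participant ratings on a 1-7 scale.
attractiveness = [2, 3, 5, 6, 4, 7, 1, 5]
occupation_ok  = [1, 3, 4, 6, 4, 7, 2, 5]
eeriness       = [5, 4, 6, 3, 5, 2, 6, 4]

print("attractiveness vs. occupation:", round(pearson_r(attractiveness, occupation_ok), 2))
print("eeriness vs. occupation:      ", round(pearson_r(eeriness, occupation_ok), 2))
```

A value near +1 means the two ratings rise and fall together; near 0 means no linear relationship. The study's surprising result, in these terms, is that attractiveness showed the strongest such relationship with occupation acceptability.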

While researchers will need to look into this more deeply, the authors of this most recent study suggest it is interesting to consider the result in the context of attractiveness research in humans. Some studies have found that when people assess one another, they are more likely to rate others as capable if those people are attractive. This may be something to take into account when designing robots that will be working side-by-side with humans in the future.

The next time you interact with a piece of technology, maybe ask yourself how much you trust it and why. Is it because it is attractive? Because it has never let you down? With more research, science may be able to provide a more complete answer.


Dietvorst BJ, Simmons JP and Massey C (2014) Algorithm aversion: people erroneously avoid algorithms after seeing them err. J. Exp. Psychol. Gen. 144:114. doi: 10.1037/xge0000033


Loth S, Huth K and De Ruiter JP (2013) Automatic detection of service initiation signals used in bars. Front. Psychol.4:557. doi: 10.3389/fpsyg.2013.00557

Destephe M, Brandao M, Kishi T, Zecca M, Hashimoto K and Takanishi A (2015) Walking in the uncanny valley: importance of the attractiveness on the acceptance of a robot as a working partner. Front. Psychol. 6:204. doi: 10.3389/fpsyg.2015.00204