The Robot Will See You Now


Editor's note: When we think of empathy, we don't usually think of machines. But computer scientist Louis-Philippe Morency and others are getting closer to building machines that can interpret not just the words people say but also the emotion that informs their meaning. Oliver Cann at the World Economic Forum caught up with him ahead of the Annual Meeting of the New Champions, taking place in Dalian, China, this week, to talk about artificial intelligence and healthcare. Morency, an assistant professor at the Language Technologies Institute in Carnegie Mellon University's School of Computer Science in Pittsburgh, is also a WEF Young Scientist.

Here are excerpts:

What is going on at the intersection of artificial intelligence and healthcare?
It’s very exciting right now. The area I’m most interested in is how we can use computer science to help clinicians and healthcare providers recognize mental health conditions such as depression, anxiety or post-traumatic stress disorder through a patient’s nonverbal communication. We call this multimodal machine learning.


How close are we to reaching this goal?
Clinicians have long assessed patients’ nonverbal behavior subjectively, but now we are offering ways to do it objectively. We’re beginning to develop algorithms that recognize communications such as facial expressions, posture, gestures and what is called paralanguage (the emphasis and vocal quality of what people say) with a high degree of accuracy. This is still a long-term goal, but in the past five to 10 years we’ve gone from talking about the concept to actually showing concrete examples. That was the hard part; now we have the attention of the medical community and are taking the next baby steps toward actually applying the science to mental health assessment and treatment.
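To make “multimodal” concrete, here is a minimal sketch of one common approach, feature-level fusion: per-session visual features (for example, averaged facial-action-unit intensities) and audio paralanguage features are concatenated and fed to a standard classifier. Everything below is illustrative, the data are synthetic, and the feature names and dimensions are assumptions rather than Morency’s actual pipeline.

    # Minimal sketch of feature-level multimodal fusion. The data are
    # synthetic and the feature dimensions are illustrative assumptions.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_patients = 200

    # Hypothetical per-patient summaries: 17 facial-action-unit
    # intensities averaged over a session, plus 8 paralanguage statistics
    # (pitch, energy and speaking-rate descriptors).
    visual = rng.normal(size=(n_patients, 17))
    audio = rng.normal(size=(n_patients, 8))
    labels = rng.integers(0, 2, size=n_patients)  # 1 = screened positive

    # Feature-level ("early") fusion: concatenate the modalities, then
    # train and evaluate an ordinary classifier on the fused features.
    fused = np.hstack([visual, audio])
    clf = LogisticRegression(max_iter=1000)
    scores = cross_val_score(clf, fused, labels, cv=5)
    print(f"cross-validated accuracy: {scores.mean():.2f}")

With real data, the visual and audio arrays would come from feature extractors (for instance, a facial-expression tracker and a prosody analyzer); the fusion and classification steps would stay the same.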
 
Are there any other applications of this technology?
Mental health is very much a long-term goal, but I believe there will be other killer apps in the short term. We’re already working with Internet companies to assess user engagement and sentiment when people interact with online adverts and videos; I think this sector is really going to take off. It’s also going to do well in business, where we can identify nonverbal behaviors correlated with successful negotiations by looking at what we call human communication dynamics. And I believe this technology will be central to the development of the new generation of Massive Open Online Courses.

Aside from mental health, how else can your work be put to good use?
We’re only at the beginning of the telemedicine revolution, not just in terms of its use in remote areas and developing countries but also in terms of enabling specialists to consult more effectively in the developed world. It’s quite feasible that, in the future, multimodal machine learning could be used to help assess disorders and even diseases remotely, perhaps even in pandemic situations such as the recent Ebola outbreak. But I don’t think computers will ever replace doctors.

Aside from your area of expertise, what other areas of science (or recent breakthroughs) are particularly interesting to you right now?
I am particularly interested in recent progress in neuroimaging, where technologies such as functional MRI are giving us a better understanding of the neural activity underlying perception and cognition. The work of Uri Hasson at Princeton University is a great example of this direction.