
Studies Find Much to Measure in Dog Faces

Dog facial expressions can be measured, and there's more than one way to do it

What's on this dog's face?

This article was published in Scientific American’s former blog network and reflects the views of the author, not necessarily those of Scientific American.


To Dr. Lucy Asher, the image below is a dog face.

Of course, she can see dog faces the same as you or I do; she just chooses to see them differently. Asher, a senior research fellow and group leader of the Asher Behaviour Lab (@asherblab) at Newcastle University, is starting a research program to objectively investigate dog facial expressions. Her research questions are on the minds of many dog lovers: "Can facial expressions give us an insight into an animal’s mind or their feelings? Are facial expressions consistent between individuals? Are they used for communication?" Dots and arrows offer a way into these questions.




The dot-and-arrow image above is actually a vector map of the dog face below. Using facial recognition software—similar to what’s used at airport security—Asher and her team track key dog facial features, she explains over email. This machine vision technique produces vector maps, allowing the researchers to visualize "how much and in which direction parts of the face move within a given time."
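
The article doesn't detail the software Asher's lab uses, but the underlying idea (keypoints as the dots, frame-to-frame displacements as the arrows) can be sketched with off-the-shelf tools. Here is a minimal Python sketch using OpenCV's Lucas-Kanade optical flow as a stand-in for the lab's tracker; the frame filenames are hypothetical.

```python
import cv2
import numpy as np

# Two consecutive grayscale video frames of a dog's face
# (hypothetical filenames; any short clip of a face will do).
prev = cv2.imread("dog_frame_000.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("dog_frame_001.png", cv2.IMREAD_GRAYSCALE)

# The "dots": detect trackable keypoints on the face in the first frame.
pts = cv2.goodFeaturesToTrack(prev, maxCorners=200, qualityLevel=0.01, minDistance=7)

# The "arrows": track each point into the next frame with Lucas-Kanade optical flow.
new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev, curr, pts, None)

# Each successfully tracked point yields a motion vector: its magnitude says
# how much that part of the face moved, its angle says in which direction.
for p0, p1 in zip(pts[status == 1], new_pts[status == 1]):
    dx, dy = p1 - p0
    print(f"point ({p0[0]:.0f}, {p0[1]:.0f}) moved {np.hypot(dx, dy):.1f} px "
          f"at {np.degrees(np.arctan2(dy, dx)):.0f} degrees")
```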

The vector maps are not intended to make abstract art for the science-y dog lover, although I imagine that a side project selling dog face vector maps would be profitable. 'Dog Abstracted,' if you will. Instead, Asher and her team aim to compare vector maps across different dogs to see which facial movements are shared. 

"If different dog faces make the same sorts of coordinated movements, we can start to define these facial expressions." Asher suggests. "Facial expressions, we hypothesise, are either used directly for communication and/or are associated with different 'emotional' states. By examining the contexts in which different expressions occur, we can start to recognise how facial expressions are used by dogs. If facial expressions are associated with different emotional states, then we can use them as a window into a dog’s state."

Asher’s outside-in approach to studying emotions has a long history, particularly in humans. In 1969, anatomist Carl-Herman Hjortsjö suggested that contractions of facial muscle groups underlie emotional expression, identifying 23 facial motion units in the human face.

More recently, Paul Ekman and Wallace Friesen expanded on Hjortsjö's work to develop what’s now known as the Facial Action Coding System (FACS), where dynamic muscle movements in the face correspond to displayed emotion. Even TV shows have capitalized on the idea that the face holds all types of tells. Lie to Me, starring the wonderful Tim Roth as a deception researcher, pulled from Ekman’s research, with Ekman serving as Scientific Advisor. And if you're looking for a time suck, there's a surprisingly addictive FACS visual guidebook of the main action units in the human face as well as the action units contributing to different emotions. My favorite action unit combination is 9 + 15 + 16.

Human Facial Action Coding was just the beginning. The University of Portsmouth Psychology Department lists DogFACS, CatFACS, OrangFACS, GibbonFACS, EquiFACS, MaqFACS, and ChimpFACS, all based on the facial anatomy of the species in question. FACS doesn't "detect emotions per se," explains the DogFACS website. Instead, users of each FACS must study and learn the action units corresponding to muscle movements in the face. Once users display proficiency and are certified, the system "can be applied to investigate communication and emotion in dogs through the analyses of dog's facial behaviour."

FACS increasingly appears in studies of dog and cat behavior, cognition, and welfare. Using DogFACS, Bridget Waller from the University of Portsmouth and colleagues suggest that the more a dog at a shelter shows the facial movement inner brow raise—AU101 from the coding system—the less time the dog will stay at the shelter. This particular facial movement could be perceived as "cute" or "sad" and increase the desire to care for the dog displaying it. My translation: Puppy dog eyes = Melting human heart.

It's all in the inner brow raise, on the right (B). Credit: Waller et al. 2013 Figure 1

A recent study from Valerie Bennett and colleagues using CatFACS found that the blinking and half-blinking facial action units were associated with fear. Another set of facial actions was associated with frustration, including "hissing, nose-licking, dropping of the jaw, the raising of the upper lip, nose wrinkling, lower lip depression, parting of the lips, mouth stretching, vocalisation and showing of the tongue." The takeaway: it's time to stop joking that cats lack facial expressions. They've got 'em, so let's pay attention. 

Asher argues we shouldn't put all our chips on FACS. Developing the FACS tool for a new species is a lengthy and painstaking process, she offers, and learning to code according to each species' FACS system is also time-consuming. We could also be limited by human perception: "As humans, we are biased towards recognising facial movements that are similar to our own, and thus it is more likely that a human studying faces will miss species-specific facial movements (if these exist). Us humans might also miss more subtle signs of facial movements, such as asymmetry."

Which brings us back to what I like to call the dot-and-arrow technique. "Using sensor technology and mathematical models, we can measure movements of the face directly," offers Asher. "This means we aren't reliant on humans to detect facial expressions. Both subtle and more extreme facial movements can be detected, and the context in which these appear can be noted." She envisions generating algorithms that provide insight into whether a dog is afraid, in pain, or even likely to display aggression.
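
The article doesn't say what form those algorithms take, but the pipeline Asher describes (facial movement features in, a state label out) is a standard supervised-learning setup. Here is a minimal sketch with scikit-learn, where the feature and label files are hypothetical stand-ins for clips labelled by the context in which they were filmed.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical dataset: one row of facial-motion features per clip, plus a
# context label recorded at filming (e.g. 0 = relaxed, 1 = fearful, 2 = pain).
X = np.load("face_motion_features.npy")   # shape: (n_clips, n_features)
y = np.load("context_labels.npy")         # shape: (n_clips,)

# A classifier maps movement patterns to labelled contexts; cross-validated
# accuracy estimates how much the face alone tells us about the dog's state.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```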

FACS and the dot-and-arrow technique undoubtedly complement one another. "Since DogFACS already exists, we can compare our findings with the FACS approach and check to see what extra our vector mapping approach gives us," Asher adds.

I'm particularly interested to see how sensor technology and mathematical models deal with the funky faces that are dog faces. Thanks to human meddling and artificial selection, dog faces are anything but homogeneous, and studies to date seem to focus on dog faces that are incredibly flexible and expressive, with highly visible features. For example, a recent study on dog facial expressions used images of a five-year-old Belgian Malinois named Mal. Would differences in dog cranial and muzzle length affect vector mapping or FACS coding? Can the same information be gleaned from a dog face layered in massive skin folds as from a dog face that's, by comparison, clean and clear?

The study of dog facial expressions is an ongoing process, but we aren't entirely in the dark when it comes to what's on a dog's face. Applied animal behavior professionals are attentive to the nuances of dog expressions, in the face and overall body. People wanting to learn how to objectively observe dog behavior will enjoy ASPCApro resources like Canine Body Language from Heather Mohan-Gibbons, ASPCA Director of Applied Research and Behavior. The webinar includes a useful Canine Body Language handout and Canine Communication slides.

References

Bennett V, Gourkow N, Mills DS. (2017) Facial Correlates of Emotional Behaviour in the Domestic Cat (Felis catus). Behavioural Processes, 141, 342-350.

Bloom T, Friedman H. (2013) Classifying Dogs' (Canis familiaris) Facial Expressions From Photographs. Behavioural Processes, 96, 1-10.

Waller BM, Peirce K, Caeiro CC, Scheider L, Burrows AM, McCune S, et al. (2013) Paedomorphic Facial Expressions Give Dogs a Selective Advantage. PLoS ONE, 8(12), e82686. https://doi.org/10.1371/journal.pone.0082686