Unmasking the Truth in Caricature

This article was published in Scientific American’s former blog network and reflects the views of the author, not necessarily those of Scientific American.


I had an interesting experience with Facebook's face-recognition system for auto-tagging photos recently. Essentially, it misidentified a person in my photos. I didn't catch the error until I posted the photos and, of course, Facebook had already helpfully notified the person that he had been tagged. What followed was damage control for the 21st century: de-tagging, emailed explanations (“Mea culpa. Please disregard.”), and re-tagging. Why re-tagging? Because that was the whole point of the exercise, obviously.

The auto-tagging feature isn't something I typically use, but I figured that a good use of not-feeling-well-curled-up-on-the-couch-time was uploading photos, and perhaps I wasn't paying as close attention as I should have been. In any event, facial recognition software seems like something that should be relatively easy—you can change your hair or add glasses if you’re trying to be incognito, but you can’t hide the underlying structure of your face, right? And that should be something fairly straightforward for a computer program to match. But as a recent Wired article discusses, there are nuances to our faces that technology just can’t account for—we code faces very much the way caricaturists do.

Essentially, we formulate a general image of what a face looks like within our cultural parameters (i.e., a variation of “two eyes above a nose that’s above a mouth”). We distinguish between individuals by identifying traits that differentiate their faces from the norm we’ve established. For example, we focus on a broad forehead, a receding hairline, a button nose, or a wide mouth, and we exaggerate that trait, creating a link between the person and the trait; we caricature the individual.
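To make that concrete, here is a minimal sketch of caricature as arithmetic, assuming (purely for illustration) that a face can be reduced to a vector of measurements and that we have a reference “norm” face to compare against. The measurements, the norm, and the amplification factor k are all made up; this is the idea, not anyone's production algorithm.

```python
import numpy as np

def caricature(face, norm_face, k=1.5):
    """Exaggerate a face's deviations from a reference 'norm' face.

    face, norm_face: arrays of facial measurements (a made-up
    representation); k > 1 amplifies whatever departs from the norm,
    which is roughly the caricaturist's move done with arithmetic.
    """
    deviation = face - norm_face       # what makes this face distinctive
    return norm_face + k * deviation   # push those traits past reality

# Toy example: [mouth width, forehead height], in arbitrary units.
norm = np.array([10.0, 20.0])
face = np.array([11.0, 23.0])          # slightly wide mouth, broad forehead
print(caricature(face, norm))          # -> [11.5 24.5]: both traits exaggerated
```

The k > 1 multiplier is the whole trick: whatever already differs from the norm gets pushed even further from it, which is exactly the link between person and trait described above.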


The article discusses the ways in which caricature creates a “truer” picture of an individual, or rather, a more recognizable one. If we’re all operating on the same general norm, then we’re more likely to identify the same differentiating features when we create our own individual caricatures of the person. That is, though you and I may have a different overall image in mind of a person, we have the same general sense of what features identify that person as that person.

Facial recognition software sort of gets this. It works to identify general traits that match a norm, and it can target obvious, standard anomalies, but it can’t pick up on the subtleties that our brains can. The exaggerations our brains produce work as identifiers in almost any context, unless the subject is a master/mistress of deception and adept at hiding identifying features. In the case of my Facebook experience, there are definitely certain general features that the mis-tagged person shares with the pictured person: a broad forehead, round cheeks, a button nose. Given that we mutter anxiously about AI overlords, these sorts of mistakes reflect the limitations of perception; the questions are whose perception, and whether it matters.
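For what it's worth, the mechanics of a mis-tag are easy to sketch. Below is a minimal, assumption-laden version: faces reduced to feature vectors, a gallery of known people, nearest-neighbor matching with a distance threshold. The names, vectors, and threshold are all hypothetical, but the failure mode is real: share enough broad strokes with someone and you land inside their neighborhood.

```python
import numpy as np

def closest_match(query, gallery, threshold=0.8):
    """Tag a face with the name of the nearest known face, if close enough.

    query: feature vector of the face to tag; gallery: {name: vector}.
    Everything here (names, vectors, threshold) is invented for
    illustration; real systems use learned embeddings, not two numbers.
    """
    best_name, best_dist = None, float("inf")
    for name, vec in gallery.items():
        dist = np.linalg.norm(query - vec)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist < threshold else None

# Two people who share the broad strokes (forehead, cheeks, nose) sit
# close together in feature space, so the wrong one can win the match.
gallery = {"alice": np.array([0.90, 0.30]), "bob": np.array([0.85, 0.35])}
print(closest_match(np.array([0.86, 0.33]), gallery))  # -> 'bob', rightly or not
```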

If caricature could be technologically harnessed, it would be a powerful tool in digital security and safety. But how would our own biases color the perceptions generated by the software?

Caricature emphasizes the things that differ from the norms we create for appearance. It can reveal a great deal about how we see people of different races and ethnicities, and it can hamper our ability to see those people as individuals. The idea is that when we encounter groups of people who don’t match our established norm, we lump them together under the same exaggerations and stop seeing them as individuals. We don’t work as hard to pick out the nuances that distinguish them, which gives rise to the idea that members of other races and ethnic groups all look alike. We otherize them and reinforce the stereotypes linked to them.
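The “they all look alike” effect even falls out of the toy arithmetic above. Here is an illustration, again with invented numbers: measure two distinct faces against a norm built from a different group, and the deviations that would serve as their “identifying” traits point in nearly the same direction; measure them against their own group’s norm, and they separate cleanly.

```python
import numpy as np

def identifying_features(face, norm):
    """The direction of a face's deviation from the norm: the traits an
    observer (or caricaturist) would seize on. All numbers are invented."""
    d = np.asarray(face, dtype=float) - np.asarray(norm, dtype=float)
    return d / np.linalg.norm(d)

in_group_norm = np.array([10.0, 20.0])   # the norm the observer grew up with
face_a = np.array([16.0, 26.0])          # two distinct out-group faces
face_b = np.array([15.0, 27.0])

# Against a mismatched norm, the "identifying" deviations point almost
# the same way, so both people get tagged with the same exaggerations.
print(identifying_features(face_a, in_group_norm))   # ~[0.71 0.71]
print(identifying_features(face_b, in_group_norm))   # ~[0.58 0.81]

# Against their own group's norm, the same two faces separate cleanly.
out_group_norm = np.array([15.5, 26.5])
print(identifying_features(face_a, out_group_norm))  # ~[ 0.71 -0.71]
print(identifying_features(face_b, out_group_norm))  # ~[-0.71  0.71]
```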

If we’re coding software to see people as we see them, then will the representations that we generate reflect this tendency to otherize? And if they do, how does that help us in terms of safety and security, which are primary uses for facial recognition software (the importance of Facebook photo tagging aside)?

On the other hand, human bias is, well, human bias. The “faults” inherent to software (weaknesses that we are already trying to minimize with research emphasizing the specific traits that may be important to identification) may very well circumvent these tendencies. The research being done seems to rely less on comparison to a socially and culturally constructed norm and more on individual traits: the area of your forehead, the gap between your eyes, the length of your nose. These, as we mused at the beginning, seem to be the fundamental means of identification.
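In code, that shift looks like measuring each face on its own terms instead of against a norm. A minimal sketch, with hypothetical landmark names and a ratio-based signature that I’m assuming purely for illustration:

```python
import numpy as np

def trait_vector(lm):
    """Build an identity signature from individual measurements.

    lm: dict mapping landmark names to (x, y) points; the keys here are
    hypothetical stand-ins for whatever a face detector actually outputs.
    """
    eye_gap = np.linalg.norm(np.subtract(lm["left_eye"], lm["right_eye"]))
    nose_len = np.linalg.norm(np.subtract(lm["nose_bridge"], lm["nose_tip"]))
    forehead = np.linalg.norm(np.subtract(lm["hairline"], lm["brow"]))
    # Dividing by the eye gap makes the signature scale-invariant, so the
    # comparison depends on the face itself, not on how the photo was shot.
    return np.array([nose_len / eye_gap, forehead / eye_gap])

lm = {"left_eye": (30, 40), "right_eye": (70, 40),
      "nose_bridge": (50, 45), "nose_tip": (50, 65),
      "hairline": (50, 5), "brow": (50, 30)}
print(trait_vector(lm))  # -> [0.5 0.625]: nose and forehead, relative to eye gap
```

Comparing two such signatures trait by trait requires no reference population at all, which is precisely what makes it a plausible route around the norm problem.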

As we move increasingly toward digital means of identification, facial recognition could supplant our passwords. And in the far-off realm of the future, our faces might be our primary form of ID: imagine a cop scanning your face to write you a ticket. Or, one supposes, you could wind up with a ticket for someone else. How imperative is it that we harness the nuances of facial recognition? And can we escape the biases that we generate from these images?

 

Image Credit: Max Beerbohm, from Caricatures of Twenty-five Gentlemen (1896). Public Domain.