
Even Kids Can Understand That Algorithms Can Be Biased

Alexandria Ocasio-Cortez is right: machines can lead to racist outcomes

Three children working in front of a computer. Credit: Getty Images

This article was published in Scientific American’s former blog network and reflects the views of the author, not necessarily those of Scientific American


Earlier this month, Alexandria Ocasio-Cortez suggested that algorithms, because they are designed by humans, can perpetuate human biases. While some people are still resistant to the idea, it has been widely accepted for some time by experts in the field. There is an entire Machine Bias beat at ProPublica, as science journalist Maggie Koerth-Baker pointed out. I wrote the following article about algorithmic bias, and some of the ways computer scientists are trying to combat it, for the November/December 2017 issue of Muse magazine, whose audience is kids ages 9 to 14. I have added some links to further information in this version.

Say you want to apply for a new job. You might start by writing a resume. That’s a document listing your name, education, and qualifications. 

But studies show that people evaluate the same resume differently if the name at the top is Jennifer instead of John, or Lakisha instead of Laurie. Whether we intend to or not, humans have biases: likes and dislikes that make it hard to see our world accurately and fairly. For many jobs, people are more likely to want to hire John than Jennifer, or Laurie than Lakisha. The people making those calls would not describe themselves as sexist or racist, but they unconsciously favor some groups of people over others.


Biases can be bad for everyone. If people making hiring decisions allow their biases to influence them, they obviously hurt the people they discriminate against. But they also hurt themselves. Their company will miss out on good employees because of human bias. And companies with more diverse employees often perform better than companies with employees who are very similar to one another.

So why not get computers to hire people instead? A pile of metal, or the zeroes and ones that make up its programming, can’t be racist or sexist, right? Problem solved!

Writing the Recipes

Not so fast. Computers don’t have free will or feelings, but a growing number of computer scientists, data scientists, and other researchers are drawing attention to the fact that algorithms can reinforce biases in society even without a programmer inserting any clearly racist or sexist rules. “A lot of people think that because algorithms are mathematical they’re automatically fair,” says data scientist Cathy O’Neil. “That’s just not true.”

An algorithm is kind of like a recipe. It’s a set of instructions that tells a computer how to answer a question or make a decision. (Sadly, the result will not be as tasty.) Artificial intelligence is one specific type of computer algorithm. “Anything that tries to make computers act like humans” is artificial intelligence, says Suresh Venkatasubramanian. He is a computer scientist at the University of Utah. Computer scientists, software engineers, and programmers work on different aspects of computing, including artificial intelligence, machine learning, and algorithms. Computer scientists generally concentrate on the more theoretical aspects of the field, while software engineers design programs for computers to run. Programmers do the hands-on work of creating algorithms. If algorithms are recipes, programmers are the cookbook authors who write them down.
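
For readers who want to peek under the hood, here is a tiny sketch in Python of what a hand-written screening “recipe” might look like. The function name, the two rules, and the numbers are all invented for illustration; they are not taken from any real hiring system.

```python
# A toy "recipe" for screening resumes: a fixed list of steps the computer
# follows exactly as written. The criteria are made up for illustration.

def screen_resume(years_experience, has_degree):
    """Return True if a resume passes this (made-up) first screen."""
    if not has_degree:
        return False  # step 1: require a degree
    if years_experience < 2:
        return False  # step 2: require at least two years of experience
    return True       # step 3: otherwise, pass the resume along

print(screen_resume(years_experience=3, has_degree=True))   # True
print(screen_resume(years_experience=1, has_degree=True))   # False
```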

One popular artificial intelligence technique is called machine learning. “It’s an algorithm for making algorithms,” says Venkatasubramanian. A machine learning algorithm looks at data about how decisions were made in the past and uses it to make future decisions.
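
To see the difference between a hand-written rule and a learned one, here is a minimal sketch, again with invented data: instead of a programmer writing the screening rule, a tiny “learning” step derives it from records of past decisions.

```python
# A minimal sketch of "an algorithm for making algorithms": the screening
# rule below is derived from past decisions rather than written by hand.
# The records and the learning step are invented for illustration.

past_decisions = [
    # (years_of_experience, was_hired)
    (1, False), (2, False), (3, True), (4, True), (5, True),
]

# "Learning" step: find the smallest experience level that got hired before.
threshold = min(years for years, hired in past_decisions if hired)

# The learned rule is itself a little algorithm, applied to new applicants.
def learned_screen(years_experience):
    return years_experience >= threshold

print(learned_screen(2))  # False: it mimics what past decision-makers did
print(learned_screen(4))  # True
```

Whatever patterns sit in those past decisions, fair or not, end up baked into the learned rule.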

For example, when you go to Amazon or YouTube and browse for the next book you want to read or video you want to watch, you see a list of recommendations. Those recommendations are the result of a machine-learning algorithm that has looked at millions, if not billions, of clicks and figured out what books or videos people with your preferences tend to choose.
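
Here is a rough sketch of one way click data can drive recommendations: count which videos were watched by people who also watched the one you just finished. The viewing histories are made up, and real systems are far more elaborate, but the basic idea is the same.

```python
from collections import Counter

# Invented viewing histories, one set of videos per person.
histories = [
    {"space documentary", "rocket launch", "math puzzles"},
    {"space documentary", "rocket launch"},
    {"space documentary", "cat videos"},
]

def recommend(just_watched, how_many=2):
    counts = Counter()
    for history in histories:
        if just_watched in history:
            # Tally everything else this person watched.
            counts.update(history - {just_watched})
    # Suggest whatever most often kept company with the video you just saw.
    return [video for video, _ in counts.most_common(how_many)]

print(recommend("space documentary"))  # ['rocket launch', ...]
```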

That sets the stage for potential problems. “It imitates whatever used to happen,” says O’Neil. “Until we have a perfect society, we might want to be careful about that.” With Amazon or YouTube recommendations, these biases seem fairly harmless. You might get book recommendations that don’t appeal to you or miss out on a video you’d love. But biased algorithms can have more clearly negative consequences as well.

Skewed Inputs

In the early 1980s, a medical school in London, England, started using an algorithm to do the first round of screening for admissions. The school had trained the algorithm on data from the previous decade. During that time, the people making admission decisions had discriminated against women and non-European people. The algorithm, which was designed to mimic the humans’ choices, preferred men to women and people whose names sounded European to those with non-European names. “Much more subtle things are happening nowadays, but they’re still happening,” says O’Neil.

As a more modern example, some facial recognition programs are worse at identifying the faces of dark-skinned people than those of light-skinned people. In that case, the problem lay in the data used to train them. Probably without realizing it, programmers had trained the algorithms using mostly light-skinned people’s faces. The algorithms then stumbled when presented with people with a wider range of skin tones. In one case, an image recognition program labeled a picture of two people as “gorillas,” an offensive racial slur. The computer program didn’t know that, though. Embarrassed engineers quickly fixed the error. Even without any ill intent on the part of the programmers or the algorithms themselves, people were and are hurt by biased algorithms.

There are several ways programmers who write algorithms can try to combat bias. In the case of facial recognition algorithms, the programmers could have used a broader range of faces to train the algorithm. More generally, algorithm designers can try to make sure their training data truly represents the population it will be used on. If an algorithm is going to classify pictures of people, training data that includes only light-skinned faces is no good.
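
One simple check designers can run, sketched below with invented numbers, is to count how many training examples each group contributes before the algorithm ever sees them.

```python
from collections import Counter

# An invented, badly skewed training set: 900 light-skinned faces, 100 dark.
training_faces = ["light-skinned"] * 900 + ["dark-skinned"] * 100

counts = Counter(training_faces)
total = len(training_faces)

for group, n in counts.items():
    share = n / total
    print(f"{group}: {n} faces ({share:.0%} of the training set)")
    if share < 0.3:  # an arbitrary warning threshold for this sketch
        print(f"  warning: {group} faces may be badly underrepresented")
```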

The medical school would need a different fix. The problem there was not that the algorithm had too little data but that the data it had reflected historic biases. One option would be to tell the algorithm to ignore gender or last name, but even that might not solve the problem. The algorithm could pick up on other factors that correlate with gender or national origin. For example, it might start discriminating against people who went to a school with more women than men, or against those who live in neighborhoods with a large proportion of immigrants.
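
The sketch below, with invented applicant records, shows why simply deleting the gender column may not be enough: a leftover field can quietly stand in for it.

```python
# Invented records: (gender, attended_majority_women_school, hired_in_past)
applicants = [
    ("woman", True,  False),
    ("woman", True,  False),
    ("woman", False, True),
    ("man",   False, True),
    ("man",   False, True),
    ("man",   True,  False),
]

# The naive "fix": drop the gender column before training.
without_gender = [(school, hired) for _, school, hired in applicants]

# An algorithm trained on what is left can still learn the rule "reject
# people from the majority-women school," which mostly rejects women anyway.
rejections_from_school = sum(
    1 for school, hired in without_gender if school and not hired
)
print(rejections_from_school, "of the past rejections involved that school")
```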

Toward Fairness

Venkatasubramanian has a slightly surprising suggestion as well: tell the algorithm to be more random. “If you’re random all the time, you might make bad decisions,” he says. “But every now and then you should do something extremely random.” He and his colleagues have recently done research showing that machine learning algorithms that incorporate a small amount of randomness can be more fair than algorithms that try to mimic their training data strictly. In a sense, he says, “You want the algorithm to recognize that it can’t be certain.” [To read a paper from Venkatasubramanian and coauthors about techniques that can combat algorithmic bias, click here.]
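
Here is one way the randomness idea could look in code. The 10 percent figure and the stand-in rule are placeholders chosen for this sketch, not details from Venkatasubramanian’s paper.

```python
import random

def learned_rule(applicant_score):
    # Stand-in for a rule learned from past data.
    return applicant_score >= 50

def sometimes_random_decision(applicant_score, randomness=0.1):
    if random.random() < randomness:
        # Every now and then, make a random call instead of trusting
        # the learned rule as if it were certain.
        return random.choice([True, False])
    return learned_rule(applicant_score)

print(sometimes_random_decision(42))
print(sometimes_random_decision(73))
```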

Venkatasubramanian says the takeaway is that we should not be overly confident in algorithms. We need both awareness of potential biases and skepticism about the fairness of the algorithms we use. “Algorithms aren’t all bad. The idea that we can transcend our human subjectivity is a good thing,” he says. But, he adds, “To put all our faith in machines won’t work. With the appropriate amount of doubt, we might be able to do better.” Instead of reinforcing society’s existing biases, algorithms that incorporate his and others’ suggestions could actually help us make decisions more fairly. Just like we thought they would in the beginning.