Artificial Intelligence Is Racist Because Its Teachers Are Humans

Silicon Valley would like to believe that algorithms and artificial intelligence are fundamentally objective. But whether it’s Microsoft’s chatbot becoming a foaming racist after coming in contact with the internet or legal algorithms showing racist bias despite enormous efforts to leave race out of the equation, it’s been shown that our horrible attitudes easily transfer to machines. The good news? We now know why our AI is so biased. The bad news? It means there’s a long road to truly objective AI.

Princeton researchers theorized that the problem was that computers were surfacing unconscious bias. We all have unconscious bias, but, unlike computers, we also have self-awareness. We can stop and think about what we’re going to say and do and develop an understanding of how others might take our actions. Computers don’t have those brakes, but do they need them?

To see if that was a problem, the Princeton team adapted a psychological test called the Implicit Association Test, or IAT, to text found online. The IAT is fairly simple: people are asked to sort words into categories as quickly as possible, and the longer a word takes to sort, the less strongly they associate it with that category. The adapted version swaps reaction time for proximity in text: the closer two words sit together, the more tightly they are associated, and a consistent pattern of such associations reads as a social bias. The researchers also made sure to look at concepts rather than exact phrases; e.g., if somebody thinks cats are inherently lazy, the algorithm would count “lazy kitty” and “useless cat” as more or less the same idea. To get a more hands-on perspective, you can try several versions of the IAT here.
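To make that proximity idea concrete, here is a minimal sketch in Python of how closeness between word vectors can stand in for reaction time. The tiny vectors below are invented for illustration, and cosine similarity is a standard choice of measure, not necessarily the exact one the researchers used.

```python
# A toy illustration of the proximity idea. The 3-dimensional vectors below
# are made up for this example; real embeddings such as word2vec or GloVe
# use hundreds of dimensions learned from billions of words of text.
import numpy as np

toy_vectors = {
    "cat":      np.array([0.9, 0.1, 0.3]),
    "kitty":    np.array([0.8, 0.2, 0.3]),   # deliberately close to "cat"
    "lazy":     np.array([0.7, 0.3, 0.1]),
    "diligent": np.array([0.1, 0.9, 0.6]),
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: values near 1.0 mean the
    words sit close together, which stands in for the strong associations
    the human IAT detects through fast reaction times."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# With these toy numbers, "cat" ends up nearer to "lazy" than to "diligent".
print(cosine_similarity(toy_vectors["cat"], toy_vectors["lazy"]))
print(cosine_similarity(toy_vectors["cat"], toy_vectors["diligent"]))
```

The point is simply that once words become vectors, “close together” has a precise numeric meaning that a computer can check at scale.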

The revised test, called the Word-Embedding Association Test, or WEAT, appears to work because it reproduces clear, well-documented human associations; for example, in the WEAT, words for women sit close to concepts of childbirth. That means if it unearths a bias in text, that bias is probably held by the humans who wrote the text. In other words, as we feed data to robots, and as they learn to grasp concepts, they pick up our biases. The Princeton team believes they might be able to track down biases we don’t even realize we have.
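For the curious, here is a rough sketch of how a WEAT-style score can be computed in Python. It assumes an `embedding` dictionary mapping words to vectors (loaded, for instance, from pre-trained GloVe), and the word lists in the usage note are placeholders; it illustrates the published idea rather than reproducing the team’s own code.

```python
# A sketch of a WEAT-style effect size, assuming `embedding` is a dictionary
# mapping words to vectors (for example, loaded from pre-trained GloVe).
# The word lists in the usage note are placeholders for illustration.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def association(word_vec, attrs_a, attrs_b):
    """How much closer one word sits to attribute set A than to set B."""
    return (np.mean([cosine(word_vec, a) for a in attrs_a])
            - np.mean([cosine(word_vec, b) for b in attrs_b]))

def weat_effect_size(targets_x, targets_y, attrs_a, attrs_b, embedding):
    """Standardized difference in association between two groups of target
    words. A large positive value means the X words (say, female terms) lean
    toward attribute set A (say, family words) more than the Y words do."""
    A = [embedding[w] for w in attrs_a]
    B = [embedding[w] for w in attrs_b]
    s_x = [association(embedding[w], A, B) for w in targets_x]
    s_y = [association(embedding[w], A, B) for w in targets_y]
    return (np.mean(s_x) - np.mean(s_y)) / np.std(s_x + s_y)

# Hypothetical usage, once an embedding has been loaded:
# weat_effect_size(["woman", "mother"], ["man", "father"],
#                  ["birth", "family"], ["career", "office"], embedding)
```

A score near zero would mean the two groups of words are treated alike; a large score would flag exactly the kind of lopsided association the researchers were hunting for.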

So, what’s the way forward? Much like you have to take Grandpa aside and politely explain that nobody finds that joke about Caitlyn Jenner funny, humans will need to step in, check AI’s work for bias, and point the problem out so the system can learn. In other words, we’ll have to confront our own biases, deal with them, and then teach computers to do the same. There would be a certain poetry to it if the problem being revealed weren’t so stark.
