How Microsoft’s Twitter Experiment Became A Racist Nightmare

Yesterday, Microsoft debuted Tay, one of a long line of unusual experiments. Tay was a program designed to talk to users and learn from how they communicated with it. In other words, Tay was sent to the internet to learn how to talk to human beings. As even a cursory visit to the internet will tell you, this was a poorly considered idea, and sure enough, by midnight last night, Tay had been shut down after tweeting about genocide, spouting racist slurs, and producing other psychological garbage. So what happened?

Microsoft had a fairly reasonable goal here: They wanted to develop better “conversational understanding” for their products. Part of the reason computers and humans don’t interact well is that humans tend to communicate obliquely while robots think literally. If you told a robot to go to hell, it would duly head to Michigan. The only way to deal with this is through brute force — put robots out there, and have real people talk to them.

Tay’s responses are created, essentially, from an enormous anonymized database of conversations. If you ask it “How are you?” it will scan the database, analyze all the responses to “How are you?” and come up with the most common one. The more data it has to draw from, the more natural-seeming the response will be. The problem, which Microsoft should have seen coming, is that, first, it was aimed at teenagers, aka the social group that thinks naming a soda Hitler Did Nothing Wrong is high comedy. Second, Tay had no filters, so if enough people told it that the response to “How are you?” was a racist slur, eventually Tay would offer up the racist slur.
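Microsoft hasn’t published Tay’s internals, but the “most common response wins” idea described above can be sketched in a few lines of toy Python. Everything here (the sample log, the `respond` function) is made up for illustration; the point is just to show why unfiltered, volume-driven learning is easy to game.

```python
from collections import Counter, defaultdict

# Toy illustration (not Microsoft's actual system): map each prompt to the
# replies observed in a conversation log, then answer new prompts with the
# reply seen most often.

conversation_log = [
    ("how are you?", "doing great, thanks!"),
    ("how are you?", "pretty good"),
    ("how are you?", "doing great, thanks!"),
    ("what's up?", "not much"),
]

replies = defaultdict(Counter)
for prompt, reply in conversation_log:
    replies[prompt.lower()][reply] += 1

def respond(prompt):
    # Return the most frequent reply recorded for this prompt, if any.
    seen = replies.get(prompt.lower())
    return seen.most_common(1)[0][0] if seen else "I don't know what to say."

print(respond("How are you?"))  # -> "doing great, thanks!"
```

Notice there’s nothing in that loop judging *what* the replies say, only how often they appear. Flood the log with a slur as the answer to “How are you?” and the slur becomes the top result, which is roughly the failure mode Tay ran into.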

Which is, more or less, exactly what happened. It’s kind of like teaching a toddler a dirty word — they don’t have any context for it, they just hear the word, see you laughing, and keep repeating it to get a response. Tay is in the shop now, and Microsoft is, obviously, rather mortified that one of its products went so wrong. But this is, in a way, a reminder that as bad as computers can be, they learned it from humans.

(Via CNN Money)
