Facebook’s Leaked Censorship Rules Reveal The Challenge Of Connecting The World


It’s easy to forget how miraculous social media is. Even twenty years ago, it would have been effectively impossible to communicate with people around the world as easily and efficiently as we do today, whether you wanted to discuss serious political issues or, more likely, share cat pictures. But while social media, particularly Facebook, brings the world closer together, it has also had to face the stark challenge of fitting so many different cultures and perspectives into the same space. And the company’s struggles to define what’s hate speech and what’s merely ignorant, what’s an extreme but nonviolent viewpoint and what’s advocacy for terrorism, bring to the fore just how difficult a position it finds itself in.

ProPublica published a long article yesterday on how Facebook drafts the policies that determine which posts get deleted and which get kept. It’s a fascinating read on a number of levels, and there’s much in it that will likely both interest and enrage people. But what’s most arresting is that Facebook and its rapidly expanding army of censors are out of their depth. ProPublica points out early on that Facebook’s approach literally protects “white men” over “black children,” which sounds staggeringly tone deaf on the face of it, but the explanation is somehow even more ridiculous: Facebook sorts people into “protected groups” and “subsets,” so, for example, saying women can’t drive wouldn’t count as misogyny under its rules, because the target is women drivers, a mere subset of women rather than the protected category itself.

From the article:

The reason is that Facebook deletes curses, slurs, calls for violence and several other types of attacks only when they are directed at “protected categories”—based on race, sex, gender identity, religious affiliation, national origin, ethnicity, sexual orientation and serious disability/disease. It gives users broader latitude when they write about “subsets” of protected categories. White men are considered a group because both traits are protected, while female drivers and black children, like radicalized Muslims, are subsets, because one of their characteristics is not protected.
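To make the logic concrete, here is a rough sketch in Python of the rule as ProPublica describes it. This is purely illustrative, not anything from Facebook’s actual systems, and the trait names are my own shorthand:

```python
# Illustrative sketch of the rule ProPublica describes: an attack is removable
# only if every trait defining the targeted group is a protected category.

PROTECTED_TRAITS = {
    "race", "sex", "gender identity", "religious affiliation",
    "national origin", "ethnicity", "sexual orientation",
    "serious disability/disease",
}

def is_protected_group(traits):
    """A group is 'protected' only if all of its defining traits are protected categories."""
    return all(trait in PROTECTED_TRAITS for trait in traits)

# "White men": race + sex, both protected, so attacks qualify for removal.
print(is_protected_group({"race", "sex"}))        # True

# "Female drivers": sex is protected, but driving is not, so the group is a "subset".
print(is_protected_group({"sex", "occupation"}))  # False

# "Black children": race is protected, age is not, so also a "subset".
print(is_protected_group({"race", "age"}))        # False
```

Under this kind of all-or-nothing test, adding any unprotected trait to a group strips the whole group of protection, which is exactly how “white men” ends up more shielded than “black children.”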

That Facebook’s approach is a mess is hardly surprising; these are insanely complicated waters to navigate. Facebook is run by website engineers and software developers, and they’re struggling to impose a logical, nuanced framework on human behavior. Which… duh. Why should they succeed at a task humanity has been attempting, without success, for centuries?

It’s no secret that people on opposite sides of a divide can have very different views of the same issue, and there’s been substantial debate over how much blame Facebook’s algorithms deserve for building the filter bubbles its users experience, versus our own obligation to pop those bubbles. The hate speech situation is different, though. Here we’re not talking about algorithmically mediated news feeds, but raw, unfiltered emotion and belief. Beliefs are tougher to handle than lies posing as truth.

Where does ignorant speech become hate speech? Who gets to decide what is acceptable for a person’s personal page? How do we create standards that are applicable across cultures? Across the whole freaking world? That’s Facebook’s self-appointed job. No wonder it’s struggling.


The truth is, Facebook was simply never engineered to deal with these questions. It’s easy to forget the site started as a simple way for college students to stay in touch across campus, and that everything else grew out of that. Mark Zuckerberg no more considered his goofy website a potential platform for hate speech than Amazon thought it’d be selling you talking hockey pucks. And now the site finds itself at the very center of the key questions of the future: How are we going to reconcile our broad range of cultures? How do we set a standard of civil behavior when we’re all in the same virtual ballroom?

It’s undeniable that Facebook doesn’t have a great answer, and it could do better. Right now, if a post is removed, Facebook doesn’t explain why. It should, if for no other reason than to see whether its guidelines make sense to users. There is going to be some back and forth here, some growing pains, but communication needs to flow openly. Like it or not, Facebook has to stop treating post removals as an opaque internal process and start treating them as a conversation with its users, however awkward that may be. Otherwise, Facebook’s dream of an open, connected world might just become a cloistered, secretive nightmare.
