By Expecting Users To Spot Fake News, Is Facebook Passing The Buck?

Senior Contributor
04.10.17


Facebook is very concerned about fake news. And to be fair, it’s taking steps, like cutting off ad revenue, to combat the problem. But we’ve heard this before, and Facebook’s latest method appears to be educating users and putting some of the onus on them.

The latest initiative puts a box at the top of your feed offering instructions on spotting fake news. Click on it, and you’ll be whisked to Facebook’s Help Center, which will run you through the basics of spotting intellectual garbage. And that’s a useful service, as far as it goes. But while the user does indeed have some responsibility, Facebook also shares the burden and can take far more decisive action than its users.

Facebook could solve this problem itself relatively easily. Fake news, despite its ties to the election, is a problem across the political spectrum, and there’s a large gap between sites that share alternate points of view and sites that deliberately fabricate hoaxes or push hate speech in order to drive advertising clicks. Facebook could filter out these sites with an opt-out function for those who want to see them, restrict how and where these sites can share content, or even outright block them. Facebook is a private company under no obligation to give anyone a platform, but it seems unable, or unwilling, to acknowledge that simple fact.

The fundamental problem of social media is the idea that “the company” exists in a vacuum and isn’t responsible for the actions of people who use the products it builds. For years, we’ve heard that it’s up to users to police themselves and that in any social network, the cream rises to the top.

But, again and again, we’ve seen on social media that toxic users, unchecked, can cause enormous damage. Reddit is notorious for this and is still clinging to the idea that its users can stop its overwhelming trolling problem, but that’s become a pitched battle. Twitter’s toxicity is so infamous that Microsoft’s chatbot became a foaming racist with a few hours’ exposure. And well before either of these sites was coded, MySpace saw a user driven to suicide.

The problem of toxic users has been an issue for as long as social media has existed, and again and again, we see that the hands-off approach doesn’t work. And this toxicity can all too easily jump into the real world. Evidence is emerging that sites like these are even being used in attempts to promote genocide. Without swift action from Facebook and other companies, there’s an excellent chance fake news will translate into real violence.

Facebook can argue, fairly to some degree, that users bear a measure of responsibility for how they use Facebook. But at the same time, Facebook can’t give irresponsible, even hateful, people a bullhorn and then throw up its hands and ask what it’s expected to do. The fake news problem, simply put, would not exist at this scale without Facebook. If the company doesn’t take more responsibility, and action, it will be dealing with fake news for a long, long time.
