In the wake of the election, Facebook has been under the microscope for its fake news problem. While there’s only so much Facebook can really be held accountable for, it does have some responsibility, and it’s just announced a four-point plan to deal with fake news. But will it be enough?
The plan is a mix of Facebook mining data to spot hoaxes and users reporting fake news where they find it. Facebook’s data mining includes a rather smart signal: evaluating the veracity of an article based on reading time versus sharing. In other words, what matters is whether people click through, read the article, and then share it, rather than just mindlessly sharing something because of the headline. Which, admit it, we’re all guilty of (although Facebook says this will only flag extreme outliers). It’s also dropping the hammer on sites that will basically say anything for money. The company will analyze sites for content problems and will work to prevent domain spoofing so spammers can’t sneak onto the site.
For users, the job will be to report hoaxes and fake news where we spot them. As data comes in on an article, Facebook will run it by independent fact-checkers. If an article is, in their words, “disputed,” you can still share it, but it will generate a little prompt that tells you the facts of the story may not line up, hopefully getting people to check themselves before they crash into your Wall like an angry political Kool-Aid Man. Most importantly, disputed stories can’t be promoted or turned into an ad.
Does this go far enough? It’s certainly stringent, but the fact that it relies on users is a point of concern. Facebook’s reporting tools have been used to suppress news outlets in the past, and while it’s unlikely The New York Times or The National Review will be slapped with a repeated disputed label, smaller outlets may have less success fighting trolling attempts. Even so, a plan to deal with the problem is better than no plan at all.