In the fight to curb misleading and controversial content, Facebook has resorted to rating its users’ trustworthiness on a scale from zero to one, under a system it has developed over the past year. Beyond that, however, details of exactly how the company evaluates user credibility remain murky.
Facebook first gave users the option to flag posts as false or misleading in 2015, when “fake news” was becoming a major problem, alongside existing reporting categories such as hate speech, pornography, and violence. People soon began flagging content as false simply because they disagreed with it.
It’s “not uncommon for people to tell us something is false simply because they disagree with the premise of a story or they’re intentionally trying to target a particular publisher,” Tessa Lyons, the Facebook product manager in charge of fighting misinformation, told the Washington Post. Still, the irony of Facebook judging its users’ credibility is not lost on observers.
“Not knowing how [Facebook is] judging us is what makes us uncomfortable,” said Claire Wardle, director of First Draft, a research lab within Harvard’s Kennedy School that studies the impact of misinformation and is a fact-checking partner of Facebook. “But the irony is that they can’t tell us how they are judging us — because if they do, the algorithms that they built will be gamed.”
The score is reportedly just one among thousands of behavioral clues that Facebook now takes into account as it assesses risk. As part of the effort, the company is also monitoring which users have a tendency to flag content as problematic and which publishers are generally considered trustworthy by users.
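To make the idea concrete, here is a minimal, purely illustrative sketch of how flag accuracy could feed a zero-to-one trust score, assuming a simple track-record model in which flags later upheld by fact-checkers raise a user’s score and dismissed flags lower it. Every name and formula below is a hypothetical stand-in; Facebook has not disclosed how its actual system works.

```python
# Hypothetical sketch only: nothing here reflects Facebook's real system.
# The reporting says only that scores run from zero to one and draw on
# behavioral signals such as a user's history of flagging content.

from dataclasses import dataclass


@dataclass
class Flagger:
    """Tracks one user's record of flagging posts as false."""
    confirmed: int = 0   # flags later upheld by fact-checkers
    rejected: int = 0    # flags later dismissed as bad-faith or wrong

    @property
    def trust(self) -> float:
        """Score on a 0-1 scale; the +1/+2 prior keeps new users at 0.5."""
        return (self.confirmed + 1) / (self.confirmed + self.rejected + 2)


def flag_weight(user: Flagger) -> float:
    """Weight a new flag by the flagger's track record, so that
    serial false-flaggers count for less when queuing posts for review."""
    return user.trust


# Example: a user whose flags were upheld 8 of 10 times
u = Flagger(confirmed=8, rejected=2)
print(f"trust = {u.trust:.2f}")  # trust = 0.75
```

In a model like this, disagreeing with a story is not enough to bury it: a flag from someone who routinely reports accurate articles as false simply carries little weight.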
(Via Washington Post)