That Facebook has a “fake news” problem is surely the understatement of the 2016 presidential election. In the two years spanning that hotly contested race and Donald Trump’s first year in office, the social media giant (along with Twitter, Google, and others) has fought a losing battle against Russian hackers, trolls, and others hell-bent on undermining the democratic process. That is why, in a series of blog posts published on Monday, Facebook officials warned users that the company could not guarantee social media’s positive effects on democracy at large.
According to Reuters, Facebook product manager Samidh Chakrabarti put it succinctly when he wrote, “I wish I could guarantee that the positives are destined to outweigh the negatives, but I can’t.” Even so, he noted that the company has a “moral duty to understand how these technologies are being used and what can be done to make communities like Facebook as representative, civil and trustworthy as possible.”
Katie Harbath, who serves as a best practices liaison between Facebook and governments or public officials, echoed Chakrabarti’s sentiments in another post:
“From the Arab Spring to robust elections around the globe, social media seemed like a positive. The last U.S. presidential campaign changed that, with foreign interference that Facebook should have been quicker to identify, to the rise of ‘fake news’ and echo chambers.”
Recently, the social media platform began asking its users to rate their preferred news sources based on “whether they trust” them or not. These latest efforts to combat “fake news” follow a massive series of changes Facebook made to the news feed, which now highlights posts by a user’s friends and family members instead of those published by pages maintained by media outlets.