Here’s How Facebook Plans to Crack Down on Fake News
Facebook has figured out how to stop fake news on its site. It will first ask journalists whether a story is fake. If it turns out to be fake, Facebook will discourage users from sharing it. The plan is clean, simple and human.
Earlier reports suggested that Facebook was investing in AI to crack down on fake news. According to recent developments, however, the social media giant is partnering with other media companies to verify the authenticity of news posted on the site. These partners are mostly news organizations and fact-checking groups that specialize in flagging fake news.
Here’s how it will work
Zuckerberg laid out a plan last month. Here is that plan, simplified:
- Facebook will first ask users to report fake news by clicking a button above any post they find dubious. It will also use software to look for signs of fake news (this is probably where its AI will come in, too).
- If a story is flagged as fake by the software or by users, the social media giant will send it to a consortium of journalists for a fact-check.
- If the journalists find the story to be bogus, Facebook will flag it as “disputed by third-party fact-checkers.”
- A “disputed” banner will be attached to the story when it appears in your news feed, and Facebook will tune its algorithm so that such stories surface less often.
- If a user tries to share such a story, Facebook will prompt them to confirm that they really want to share it.
- Finally, Facebook wants to make it harder for publishers to profit from publishing fake news.
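The steps above can be sketched as a small decision flow. This is purely an illustration of the plan as described, assuming invented names and thresholds; Facebook has not published its actual logic:

```python
# Hypothetical sketch of the moderation flow described above. Every name
# and threshold here is invented for illustration, not Facebook's real code.

def review_story(story, user_reports, software_score, fact_checkers):
    """Walk a story through the flagging pipeline outlined in the plan."""
    # Step 1: flag if enough users report it, or if software detects
    # signals of fake news (an assumed score in the range 0..1).
    flagged = user_reports >= 3 or software_score > 0.8
    if not flagged:
        return {"status": "ok", "demoted": False, "share_warning": False}

    # Step 2: send the flagged story to third-party fact-checkers.
    verdicts = [checker(story) for checker in fact_checkers]

    # Step 3: if any checker calls it bogus, mark it as disputed,
    # demote it in the news feed, and warn users before they share it.
    if any(v == "bogus" for v in verdicts):
        return {"status": "disputed by third-party fact-checkers",
                "demoted": True, "share_warning": True}
    return {"status": "ok", "demoted": False, "share_warning": False}

# Example: two of four (stand-in) checkers call the story bogus.
checkers = [lambda s: "bogus", lambda s: "ok", lambda s: "bogus", lambda s: "ok"]
result = review_story("Miracle cure found!", user_reports=5,
                      software_score=0.2, fact_checkers=checkers)
print(result["status"])  # disputed by third-party fact-checkers
```

Note that in this sketch either signal (user reports or software) is enough to trigger the fact-check, matching the plan's wording that stories flagged "by the software or users" go to the journalists.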
The last point is a bit vague, since Facebook has not said exactly what it will do. The company is working with four news organizations and fact-checking groups (ABC News, PolitiFact, FactCheck.org and Snopes) that have agreed to vet potentially fake stories Facebook sends them and publish their findings.