Meta's second-in-command, Joel Kaplan, all but confirmed today that the company is ending its Third-Party Fact-Checking Program globally, affecting Facebook, Instagram, and Threads. The announcement comes after CEO Mark Zuckerberg justified ending the program in the US on January 7, claiming that “fact-checkers have simply been too politically biased and have destroyed more trust than they created, especially in the US”, something that isn't true.
Since the program's inception in 2016, Meta has repeatedly explained why fact-checking works. But before that, Facebook had operated for 12 years without collaborating with fact-checkers, and it only changed course in 2016 when the immense problems caused by disinformation on the platform, and their consequences, such as the manipulation aimed at influencing the 2016 US presidential election and other scandals, became impossible to ignore.
What forced Facebook to create the independent fact-checking program
On December 15, 2016, Facebook announced a series of measures against hoaxes and manipulation, including collaboration with external fact-checkers. The platform's issues with disinformation, fake profiles, and algorithmic recommendation of misleading content had been reported before, but it was the 2016 US presidential election that solidified concerns about the effect of social media, and the disinformation circulating on it, on democracy.
Various studies suggest that Facebook redirected voters to disinformation websites and that, in the three months before the election, fake articles generated more engagement on the social network than articles from The New York Times, The Washington Post, Huffington Post, NBC News, and others. Other research revealed that Facebook and Twitter were the primary sources of traffic to fake articles, with Facebook responsible for 99% of that traffic. Russian interference in the election also took advantage of the social network: ads purchased by Russian agents reached 10 million users.
These problems were further confirmed two years later by the Facebook-Cambridge Analytica scandal. A British company hired by Trump's campaign had used Facebook user data to create political ads during the presidential election and target them at particularly impressionable users. This didn't only happen in the US: the company was also accused of interfering in the 2016 Brexit referendum, using Facebook data to serve propaganda, often laced with disinformation.
Facebook and fact-checkers
In light of all this, on November 17, 2016, the International Fact-Checking Network (IFCN) sent a letter to Zuckerberg proposing a collaboration with fact-checkers: "We believe that Facebook should start an open conversation on the principles that could underpin a more accurate news ecosystem on its News Feed. The global fact-checking community is eager to take part in this conversation."
By then, disinformation circulating on Facebook had already become a global problem. The IFCN's letter warned: “Popular posts carrying fake health claims have served to peddle bogus medical cures and undermine public health campaigns around the world. False claims carried online have been used to incite violence in countries such as Nepal and Nigeria. Spurious allegations on Facebook led to a woman being beaten to death in Brazil.”
Almost a month later, Facebook announced a collaboration with fact-checkers recognized by the IFCN, such as Snopes, PolitiFact, The Associated Press, FactCheck.org, and ABC News, who would review content reported by users as false. This marked the birth of the independent fact-checking program we know today.
As Maldita.es has explained, fact-checkers have never been able to delete content from Facebook, and we believe that deleting misinformation from social media is, in most cases, a mistake. Instead, what fact-checkers like Maldita propose is that platforms use labels or warnings such as "This content has been rated false by an independent fact-checker," with a link to the evidence explaining why, so that users have more information and can make their own decisions. We believe that freedom of expression also includes being allowed to lie, and accepting that your lie can be refuted. In the end, fact-checkers provide information; they don't take it away.
The independent fact-checking program protects people and democracy
It is in this 2016 context that Meta's independent fact-checking program emerged, providing users with information about the veracity of the posts they encounter. Today, it operates in over 100 countries, with fact-checkers analyzing and labeling potentially false or misleading content on Facebook, Instagram, and Threads based on its level of falsity.
The results of the program are positive, according to Meta itself. “Between July and December 2023, for example, over 68 million pieces of content viewed in the EU on Facebook and Instagram had fact checking labels. When a fact-checked label is placed on a post, 95% of people don’t click through to view it”, Meta highlighted during the 2024 European Parliament elections.
Despite this, Zuckerberg insists that fact-checkers have "destroyed trust." "After Trump was elected for the first time in 2016, the mainstream media wrote non-stop about how disinformation was a threat to democracy. We tried in good faith to address those concerns without becoming arbiters of the truth," the CEO stated in his announcement.
But it wasn't just the media that raised alarms. Academic researchers and authorities also thoroughly investigated the connection between Facebook and the 2016 election results. In fact, the US Federal Trade Commission fined Facebook $5 billion for its role in the Cambridge Analytica scandal. The threat disinformation poses to democracy is not just a notion of journalists or the media; it is a reality.
“With several European countries heading to the polls in 2025, platforms retracting from the fight against mis- and disinformation allows and potentially even invites election interference, especially from foreign actors”, responded the European Fact-Checking Standards Network (EFCSN) to Zuckerberg’s statements.
Considering what Facebook was like before the independent fact-checking program, the title of the video posted by the CEO, "It's time to get back to our roots around free expression", takes on another meaning. Those roots, at least the 2016 ones, allowed the social network to be used to decide an election (if not several). How the Community Notes system that will replace the program is developed, and whether any collaboration with fact-checkers remains, will be crucial in determining whether or not we go back to 2016.
Fact-checkers and Community Notes
At Maldita.es, we believe that fact-checking and Community Notes are not mutually exclusive; they should complement each other. It's true that the system as it exists on X is not perfect: for example, more than 90% of the tweets with falsehoods about the DANA (the storms behind the deadly October 2024 floods in Spain) that Maldita debunked carried no Community Note. But that doesn't always mean users aren't proposing notes; it often means that X's algorithm isn't showing them. Why?
Proposed notes are rated as helpful or not by users participating in Community Notes, but the algorithm only makes a note visible when users who typically disagree with each other in their ratings agree that this particular note is helpful. Essentially, it looks for consensus among people with "different political ideologies". On issues that generate more polarization, this emphasis on consensus all but ensures that many useful Community Notes with reliable information are never shown.
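To make that mechanism concrete, below is a toy sketch of the kind of "bridging" scoring X has publicly documented for Community Notes: a matrix factorization in which each rating is explained by the note's intrinsic helpfulness plus a user-viewpoint term, and only the viewpoint-independent part counts toward publication. The data, hyperparameters, and threshold here are illustrative assumptions, not X's production values.

```python
# Toy sketch of bridging-based note scoring (illustrative, not X's code).
# Each observed rating is modeled as:
#   rating ~ mu + user_bias + note_bias + user_vec . note_vec
# where note_bias is the viewpoint-independent "helpfulness" of the note
# and user_vec / note_vec capture a latent viewpoint dimension.
import numpy as np

rng = np.random.default_rng(0)

# ratings[u, n]: 1.0 = "helpful", 0.0 = "not helpful", NaN = not rated.
ratings = np.array([
    [1.0, 1.0, np.nan, 0.0],
    [1.0, np.nan, 1.0, 0.0],
    [np.nan, 0.0, 1.0, 1.0],
    [1.0, 0.0, np.nan, 1.0],
])
n_users, n_notes = ratings.shape

mu = 0.0
user_bias = np.zeros(n_users)
note_bias = np.zeros(n_notes)                # the score that decides visibility
user_vec = rng.normal(0, 0.1, (n_users, 1))  # one latent viewpoint axis
note_vec = rng.normal(0, 0.1, (n_notes, 1))

# Factors are regularized much harder than biases, so agreement can only
# be credited to note_bias when it cuts across viewpoints.
lr, reg_bias, reg_vec = 0.05, 0.03, 0.15

observed = [(u, n) for u in range(n_users) for n in range(n_notes)
            if not np.isnan(ratings[u, n])]

for _ in range(2000):  # plain SGD is enough for a toy example
    for u, n in observed:
        pred = mu + user_bias[u] + note_bias[n] + user_vec[u] @ note_vec[n]
        err = ratings[u, n] - pred
        mu += lr * err
        user_bias[u] += lr * (err - reg_bias * user_bias[u])
        note_bias[n] += lr * (err - reg_bias * note_bias[n])
        u_old = user_vec[u].copy()
        user_vec[u] += lr * (err * note_vec[n] - reg_vec * user_vec[u])
        note_vec[n] += lr * (err * u_old - reg_vec * note_vec[n])

# A note ships only if its viewpoint-independent score clears a threshold
# (0.4 here, in the spirit of the publicly documented cutoff).
for n in range(n_notes):
    status = "SHOW" if note_bias[n] > 0.4 else "NEEDS MORE RATINGS"
    print(f"note {n}: helpfulness = {note_bias[n]:+.2f} -> {status}")
```

The key design choice is the heavier penalty on the viewpoint factors: agreement that cuts across the latent axis gets credited to the note itself, while ratings that split cleanly along it are absorbed by the viewpoint term, leaving the note's score low. That is exactly why, on polarized topics, a note can be rated helpful by many users and still never be shown.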
Still, at Maldita.es, we believe Community Notes can be a good tool against disinformation. Last year, over 850 notes citing Maldita.es articles were proposed on X, more than for any other fact-checker in the European Union. What is needed is for the platforms using them to guarantee a few things:
That notes with quality sources and expert knowledge are favored over the "consensus" among users who often disagree.
That notes appear faster on the most dangerous and viral disinformation.
That organized groups or users with multiple accounts are prevented from manipulating the system.
That repeatedly lying and receiving notes has consequences for the user, such as losing their blue verification check or the ability to monetize.
That there are guarantees that platforms enabling them cannot interfere in the process and remove notes due to external pressures.
In this article, we tell you more about our stance on Community Notes.