Mark Zuckerberg has announced that Meta is going to “get rid of fact-checkers and replace them with Community Notes similar to X”. At the Maldita.es Foundation we believe that the work of fact-checkers and the involvement of communities against disinformation are not contradictory; both are essential and complement each other. We have also been studying Community Notes since before Twitter was X, before Elon Musk owned it, and when the program was still called Birdwatch, so we know where its mistakes lie and how it could work better against disinformation.
Fact-checkers🤝❤️Community: we cannot work without them
At Maldita.es we investigate and verify hoaxes every day, and we learn about a large share of them thanks to our community. We don't have enough eyes to see everything, so ‘los malditos’ put us on the trail whenever they find something that sounds strange to them and send it our way asking if it's true. On our WhatsApp chatbot (+34 644 22 93 19) alone, we receive hundreds of queries a day. That community is essential to us, and that is why we believe community notes can help a lot against misinformation if the system works correctly.
If Zuckerberg follows X's model, as he has said he will, he should be aware of its strengths and weaknesses. On X, notes are proposed by users when they come across a post that they think needs context, which is a great idea because it helps identify misinformation. Then an algorithm decides whether a note is displayed next to the original post, based on consensus about its “helpfulness” according to the ratings of different users, and this part has worked much worse. What is needed for Community Notes to be effective?
Notes with quality sources and expert knowledge to be favored over “consensus” among users who usually disagree
Notes to appear faster on the most dangerous and most viral misinformation
Safeguards preventing organized groups, or users with multiple accounts, from manipulating the system
Consequences for users who repeatedly lie and receive notes, such as losing the blue verification check or the ability to monetize
Guarantees that the platforms hosting the notes cannot interfere in the process and withdraw notes due to pressures beyond users' control
Why doesn't X show many community notes that would debunk viral disinformation?
One of the big problems with Community Notes on X is that many proposed notes that debunk viral disinformation are never shown to users. For example:
Of the posts on X containing misinformation about the Valencia floods that Maldita.es had debunked, more than 90% did not have a visible note, and together they accumulated more than 50 million impressions.
In the 2024 European Parliament elections, only 15% of the posts already debunked by European fact-checkers had a note, even though some of that content received more than 1.5 million views.
In many cases the community has in fact proposed notes that debunk those hoaxes, but they are never publicly displayed alongside the tweet. A study by the Center for Countering Digital Hate found that notes about the 2024 US elections were not visible to users 74% of the time, even though they had been suggested. Why? Proposed notes are rated as helpful or not by users who participate in Community Notes, but the algorithm only makes a note visible when users who usually disagree with each other when voting agree that it is helpful. Essentially, it evaluates the consensus of people with “different political ideologies”.
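The consensus mechanism described above can be sketched in a few lines. This is not X's actual algorithm (the real system uses matrix factorization over the full rating history); it is a simplified toy model with hypothetical raters, showing why a note endorsed by only one "camp" stays hidden:

```python
# Toy sketch of the "bridging" idea behind Community Notes visibility:
# a note is shown only when raters who usually DISAGREE with each other
# both rate it helpful. Rater names and rating data are invented.
from itertools import combinations

# Hypothetical rating history: rater -> {note_id: 1 (helpful) / 0 (not)}
history = {
    "ana":   {"n1": 1, "n2": 0, "n3": 1},
    "bruno": {"n1": 0, "n2": 1, "n3": 0},
    "carla": {"n1": 1, "n2": 0, "n3": 1},
    "diego": {"n1": 0, "n2": 1, "n3": 0},
}

def agreement(a, b):
    """Fraction of shared notes that two raters rated the same way."""
    shared = set(history[a]) & set(history[b])
    if not shared:
        return 0.5
    return sum(history[a][n] == history[b][n] for n in shared) / len(shared)

def split_into_camps(raters):
    """Seed two camps with the most-disagreeing pair, then assign each
    remaining rater to the camp they agree with more."""
    seed_a, seed_b = min(combinations(raters, 2), key=lambda p: agreement(*p))
    camps = {seed_a: "A", seed_b: "B"}
    for r in raters:
        if r not in camps:
            camps[r] = "A" if agreement(r, seed_a) >= agreement(r, seed_b) else "B"
    return camps

def note_is_shown(ratings, threshold=0.6):
    """A new note is shown only if BOTH camps rate it helpful on average."""
    camps = split_into_camps(list(history))
    for camp in ("A", "B"):
        votes = [v for r, v in ratings.items() if camps[r] == camp]
        if not votes or sum(votes) / len(votes) < threshold:
            return False
    return True

# A note rated helpful by only one camp stays hidden...
print(note_is_shown({"ana": 1, "carla": 1, "bruno": 0, "diego": 0}))  # False
# ...while cross-camp agreement makes it visible.
print(note_is_shown({"ana": 1, "carla": 1, "bruno": 1, "diego": 1}))  # True
```

The design trade-off the article criticizes is visible here: on polarizing topics, one camp routinely votes down accurate notes, so the cross-camp requirement alone is enough to keep them hidden.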
On the most polarizing issues, that emphasis on consensus practically ensures that many Community Notes with reliable information are never shown to users. Also, according to some research, the display of the notes arrives “too late to effectively reduce interaction with disinformation in the early phase of dissemination”. During times of special public relevance such as natural disasters, terrorist attacks or electoral campaigns, this is particularly worrying.
Is X analyzing users' political views to apply Community Notes?
Regarding the requirement for consensus before a note becomes visible on a tweet, X's owner, Elon Musk, has stated that "many people with diverse viewpoints must agree that it is needed and correct." He also added that the platform's Community Notes participants are "almost evenly balanced across the political spectrum." This has led some users to question whether X is monitoring the political opinions of its users to ensure these "diverse viewpoints".
In X's own explanation about Community Notes, the platform clarifies that its algorithm assesses "different perspectives" based on how users have rated notes in the past. Users who "tend to rate notes differently are likely to have different viewpoints." However, X also acknowledges that it works to "specifically understand the usefulness of notes for people from different political perspectives," which suggests some level of user classification based on ideology.
The European General Data Protection Regulation (GDPR) prohibits the processing of personal data that reveals an individual’s "political opinions," unless explicit consent is provided or the data has been "manifestly made public" by the individual.
Risks of manipulation and falsehoods in X’s Community Notes
To propose Community Notes, X users only need an account older than six months, a confirmed phone number, and no recent platform violations. There is no requirement to demonstrate expertise in any subject, nor are there safeguards against users who operate multiple accounts to coordinate and amplify their notes.
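The three public requirements listed above amount to a very thin filter, which can be made concrete in a few lines. This is a hypothetical check, not X's code, and it illustrates the article's point: nothing here measures expertise or detects coordinated multi-account behavior:

```python
# Hypothetical eligibility check mirroring the three public requirements
# X lists for Community Notes contributors (this is NOT X's code).
from datetime import date, timedelta

def can_propose_notes(account_created: date, phone_verified: bool,
                      recent_violations: int) -> bool:
    """Account older than ~6 months, verified phone, no recent violations."""
    six_months = timedelta(days=182)
    return (
        date.today() - account_created >= six_months
        and phone_verified
        and recent_violations == 0
    )

print(can_propose_notes(date(2020, 1, 1), True, 0))  # True
print(can_propose_notes(date.today(), True, 0))      # False: account too new
```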
Studies indicate that partisan bias among note contributors is significant, with users often targeting tweets from those with opposing political views. Unlike professional fact-checkers, many contributors use notes to refute opinions or claims they simply disagree with. Additionally, organized groups of users attempt to manipulate the algorithm to make their notes visible.
Another major issue is the lack of rules regarding the quality of the sources cited in notes. According to one study, the most commonly used source is another tweet, followed by Wikipedia articles. In some cases, “debunks” rely solely on statements by the subject of the claim denying its validity. There are even instances where the notes themselves contain misinformation.
No matter how many Community Notes you get, there are no consequences
Even when accounts accumulate dozens or hundreds of Community Notes, X does not take action against many of these users, who retain their blue checkmarks. These blue badges signify verified, paid accounts whose content the platform boosts. In fact, the same Maldita.es study that found that more than 90% of posts spreading false claims about the Valencia floods lacked a visible Community Note also found that 45% of the accounts spreading those falsehoods had blue checkmarks.
Even repeat offenders who receive multiple Community Notes for falsehoods retain their badges, perpetuating a misleading sense of legitimacy and benefiting from the platform's algorithmic prioritization.
Fact-checkers can fix many of the problems with Community Notes
Although Mark Zuckerberg speaks of Community Notes as a way to “replace” fact-checkers, the truth is that collaboration between the users who propose Community Notes and fact-checkers working with a professional methodology would solve many of the program’s shortcomings: combining users’ ability to detect dangerous disinformation with fact-checkers’ ability to verify that sources are reliable, to add context when necessary, and to help get notes displayed as quickly as possible.
At Maldita.es, we find the idea of opposing "fact-checkers" and "community" to be counterproductive. Community engagement has always been central to our work. Over the past year, X users have proposed more than 850 Community Notes citing Maldita.es articles as evidence, with around 100 deemed "helpful" and displayed on tweets. We are the most referenced verification outlet in Community Notes in the European Union and rank as the fifteenth most cited source in notes in Spanish.