Fact-Checking in the Age of Community Notes
What Meta’s Move to Crowdsourcing Reveals About Misinformation and Truth

In one of the first major announcements of 2025, Meta declared it would end its third-party fact-checking program across Facebook, Instagram, and Threads (Kaplan, 2025). Instead, the company is adopting a crowd-sourced fact-checking system, Community Notes, modeled after the one implemented on Twitter/X. While Meta will still address “illegal and high-severity violations,” the company is stepping away from professional moderation in favor of user-contributed notes and voting.
The response was immediate and divided along ideological lines. Critics labeled it a dangerous retreat from truth, with The Guardian warning that Meta was ushering in a “world without facts” (Stafford, 2025). Supporters, however, praised the move, touting it as a “huge win for free speech” (Miller, 2025). But behind this polarized debate lies an essential question: how effective is either of these systems—professional fact-checking or crowd-sourced approaches—at helping us navigate the misinformation landscape?
To answer this question, we must go beyond simply comparing the pros and cons of these methods. As Smets (2025) pointed out, both professional and crowd-sourced fact-checking face the same fundamental challenge: the biases we bring to interpreting and presenting information. While fact-checking aims to verify factual claims, it often gets entangled with subjective judgments, particularly when dealing with complex or value-laden issues. This entanglement, coupled with our tendency toward belief-based evidence rather than evidence-based beliefs, complicates any effort to create a perfect system for combating misinformation.
In this post, I’ll revisit arguments I made back in 2022 on fact-checking, integrate some of the new insights from recent debates, and explore what Meta’s decision reveals about the limitations—and possibilities—of both professional and crowd-sourced systems.
The Problem with Fact-Checking
In my original post on fact-checking, I focused on the challenges inherent in how facts are identified, checked, and communicated—not on abstract questions of truth. A fact, by definition, should not be capable of bias—it is “something that is known to have happened or to exist.”
As I argued, many fact-check articles stray from their core task of verifying specific factual claims, veering into dissecting the strength and cogency of arguments, which introduces interpretation and subjectivity. This can transform what should be a straightforward fact-check into a dissection of what those facts (or inaccuracies, in some cases) are used to argue or conclude by the individual being fact-checked. What inevitably results are rebuttals or critiques that rely on inductive arguments rather than staying grounded in factual verification¹.
The further fact-checkers move from their primary purpose—checking facts—the greater the likelihood that biases will creep in (AllSides, 2023). And as Smets (2025) points out, this can lead to a blurring of the lines between fact and the reasonableness (or truth) of the inferences that come from those facts.
But even when fact-checkers stay focused on checking the facts, bias is still often present, because someone still has to decide whom to fact-check. Hopkins (2024) showed that, in one instance during the 2024 presidential election, 30 news sources fact-checked a faulty claim made by Trump, while only 2 news sources fact-checked a similarly faulty claim made by Harris two days later.
The tendency for bias to creep in is one reason why many people distrust fact-checking, perceiving it as partisan or agenda-driven. This distrust has created an opening for alternative approaches, such as reliance on a Community Notes system. But does this crowd-sourced approach truly address the limitations of traditional fact-checking—or does it merely replace one set of challenges with another?
Crowdsourcing Truthfulness
Meta’s decision to introduce Community Notes marks a significant shift in how misinformation is addressed on its platforms. Modeled after the approach implemented on Twitter/X, Community Notes allows users to collaboratively provide and vote on notes that add context to potentially misleading posts. The system, which relies on consensus from a diverse range of perspectives, promises to democratize fact-checking. But does crowdsourcing truthfulness actually work—and what are the risks?
At its core, Community Notes operates on the idea that collective intelligence can help identify and correct misinformation. Users submit notes to flag misleading claims, and others vote on their accuracy and helpfulness. When a note reaches a consensus threshold—ideally from users with a range of perspectives—it becomes visible to the broader platform audience. While addressing factual errors is part of its function, Community Notes often focuses on providing additional context or supplementary information that may influence how users interpret a claim. In this way, the system aims to promote a more balanced understanding of the claim and its implications rather than solely verifying or refuting factual accuracy.
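To make that mechanism concrete, here is a minimal toy sketch of the “bridging” idea behind this kind of system: a note becomes visible only when raters from otherwise disagreeing viewpoint groups both rate it helpful. This is not Meta’s or X’s actual implementation; the cluster labels, thresholds, and sample ratings below are illustrative assumptions.

```python
from collections import defaultdict

# Toy ratings: (rater_viewpoint_cluster, note_id, found_helpful)
# Cluster labels, note IDs, and thresholds are illustrative, not real data.
ratings = [
    ("cluster_a", "note_1", True),
    ("cluster_b", "note_1", True),
    ("cluster_a", "note_2", True),
    ("cluster_a", "note_2", True),
    ("cluster_b", "note_2", False),
]

MIN_HELPFUL_SHARE = 0.5   # assumed per-cluster agreement threshold
MIN_CLUSTERS = 2          # a note must "bridge" at least two viewpoint clusters

def visible_notes(ratings):
    """Return note IDs rated helpful by a majority of raters in at least two clusters."""
    # note -> cluster -> [helpful_count, total_count]
    tallies = defaultdict(lambda: defaultdict(lambda: [0, 0]))
    for cluster, note, helpful in ratings:
        tallies[note][cluster][0] += int(helpful)
        tallies[note][cluster][1] += 1

    shown = []
    for note, clusters in tallies.items():
        supportive_clusters = sum(
            1 for helpful, total in clusters.values()
            if total and helpful / total >= MIN_HELPFUL_SHARE
        )
        if supportive_clusters >= MIN_CLUSTERS:
            shown.append(note)
    return shown

print(visible_notes(ratings))  # expected: ['note_1']; note_2 lacks cross-cluster support
```

The production systems reportedly use more elaborate scoring models, but the design choice this sketch illustrates is the same: agreement within a single viewpoint cluster is not, by itself, enough to surface a note.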
Stafford (2025) provided research suggesting that Community Notes can achieve a high degree of accuracy. For instance, a study analyzing Community Notes on Twitter/X found that 97% of notes on COVID-19 misinformation were rated as accurate by medical professionals. Additionally, notes appended to misleading tweets have been shown to reduce resharing and even increase the likelihood of the original post being deleted. These findings demonstrate that crowd-sourced fact-checking can, in some instances, be an effective tool against misinformation.
However, Stafford also identifies key limitations of the system. One major issue is speed: Community Notes often take hours or even days to reach consensus, while misinformation typically spreads within the first few hours of being posted. This delay means that by the time corrections are visible, much of the damage may already be done. Additionally, most misinformation never receives a Community Note, and even when notes are created, few meet the thresholds needed for visibility. For instance, during the 2024 U.S. election, fewer than 6% of notes were deemed helpful and shown to other users, raising questions about the system’s scalability and reach.
Finally, the potential for bias in crowd-sourced systems cannot be overlooked. While the Community Notes algorithm attempts to mitigate polarization by accounting for voting patterns across diverse perspectives, it is not without limitations. As Smets (2025) highlights, achieving consensus does not guarantee accuracy. Balancing contradictory views may help surface diverse perspectives, but it does not address the deeper issue of whether the resulting consensus reflects objective truth. This limitation underscores the challenges of relying on collective agreement as a substitute for more rigorous forms of verification.
Can We Trust Crowdsourced Truth?
Community Notes represents a significant departure from traditional fact-checking models, and while it has its flaws, the alarm surrounding its adoption may be overstated. Yes, there are legitimate concerns about the system’s scalability, speed, and vulnerability to biases. However, these limitations do not mean that Community Notes is doomed to fail. In fact, the research suggests that when it works, it works quite well—providing accurate, actionable context that reduces the spread of misinformation.
One advantage of Community Notes over professional fact-checking lies in its transparency. Traditional fact-checking organizations often face criticism for their lack of clarity regarding how they select topics or sources to evaluate. In contrast, Community Notes relies on user-contributed content and votes, which are openly visible. This level of openness allows for greater scrutiny and accountability, helping to build trust in the process—even if the system is imperfect. That said, the transparency of Community Notes should not be overstated; while the user contributions are visible, the algorithms used to determine which notes gain visibility remain opaque and could themselves introduce biases. Addressing this issue could further strengthen the platform’s credibility.
The broader question, then, is whether the departure from professional fact-checking is as catastrophic as some critics claim. As Williams (2024) argued, much of the panic over misinformation reflects deeper societal anxieties rather than the actual prevalence or impact of false information online. Clear-cut misinformation, like fabricated news stories, makes up a tiny fraction of the media people consume. Moreover, engagement with such content tends to be concentrated among small, ideologically motivated groups who are already predisposed to believe it. This suggests that the root causes of misinformation lie not in the content itself but in the societal divisions that fuel its spread. Neither professional fact-checking nor Community Notes alone can resolve these underlying issues, which are deeply tied to polarization and distrust.
Similarly, Smets (2025) highlights that truth itself is rarely as straightforward as many fact-checking initiatives assume. While empirical facts—like the result of a simple arithmetic problem—are relatively easy to verify, much of what fact-checkers address involves complex, value-laden issues where consensus, not objective truth, dominates. For instance, debates over whether a particular policy is (in)effective often depend on subjective judgments about what outcomes are most important, such as economic growth versus environmental preservation, not to mention which trade-offs are deemed acceptable. In such cases, both professional and crowd-sourced systems grapple with the same fundamental limitation: the ambiguity of truth in a world where beliefs and values shape what we accept as fact.
While professional fact-checking attempts to address this ambiguity through expert analysis, critics argue that the process can introduce its own biases, as the selection of which experts to consult is itself a subjective choice. And as Williams (2024) pointed out, “the most dangerous misinformation comes from the powerful…who have every incentive to paint themselves as objective truth-tellers…” Community Notes, on the other hand, seeks to balance diverse perspectives but risks producing consensus that reflects popular beliefs rather than accuracy.
Given these realities, perhaps the transition to Community Notes should be seen not as a retreat but as an acknowledgment of the complexities involved in both verifying facts and using those facts to make cogent arguments. Crowdsourcing isn’t perfect, but it democratizes the process of contextualizing information, creating space for a range of perspectives to contribute while also increasing transparency. While it won’t solve the underlying societal issues that fuel misinformation, it may help mitigate some of their impact by encouraging users to engage critically with the content they encounter. In this sense, the primary strength of Community Notes may not lie in its ability to produce definitive truth, but in fostering a culture of scrutiny and dialogue around contested claims.
Ultimately, the success of Community Notes—and any fact-checking system—depends less on whether it perfectly polices misinformation and more on whether it helps people develop a more nuanced, critical approach to the information they consume. Instead of seeing the decline of professional fact-checking as a crisis, perhaps we should view it as an opportunity to rethink how we pursue truth in the digital age.
¹ Actual fact-checking is a deductive argumentation process: either facts can be verified or they cannot. What those facts mean or don’t mean is a matter of inductive argument, and such arguments inevitably introduce subjectivity.