In January, Meta announced the end of third-party fact-checking on Facebook, Instagram, and Threads. The tech giant is betting on a new, community-driven system called Community Notes, which draws on X’s feature of the same name and uses X’s open-source algorithm as its basis. Meta is rolling out the feature on March 18, and anyone who wants to write and rate community notes can sign up now. The rollout will be throttled, and initially notes won’t appear publicly: Meta says it needs time to feed the algorithm and make sure the system is working properly.
The promise is enticing: a more scalable, less biased way to flag false or misleading content, driven by the wisdom of the crowd rather than the judgment of experts. But a closer look at the underlying assumptions and design choices raises questions about whether the new system can truly deliver on that promise. The concept, its UX implementation, and the underlying technology surface challenges that, in my conversations with Meta’s designers, don’t seem to have any clear, categorical answer. It feels like a work in progress rather than a clear-cut fix for the shortcomings of third-party fact-checking.
Currently, Meta’s Community Notes are accessible exclusively on mobile devices within the Facebook, Instagram, and Threads apps. The mobile-first approach likely reflects the platform’s primary user base and usage patterns. Users who meet specific criteria, such as having a verified account and a history of platform engagement, can apply to become contributors and add context to posts they believe contain misinformation (200,000 people in the U.S. have already done so, Meta tells me).
Once in, they’ll find an option within the post’s menu to “Add a Community Note.” This triggers an overlay screen with a simple text editor capped at 500 characters. The design also requires users to include a link, which is intended to add a layer of credibility to the note (though nothing guarantees the link points to a reliable source).
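To make those composition rules concrete, here is a minimal sketch in Python of the two checks the editor enforces. The function name, error messages, and link-detection regex are my own illustration, not Meta’s actual code:

```python
import re

MAX_CHARS = 500  # the character limit on a community note

def validate_note(text: str) -> list[str]:
    """Check a draft note against the two composition rules described above.

    A hypothetical stand-in for whatever validation Meta's editor performs.
    """
    problems = []
    if len(text) > MAX_CHARS:
        problems.append(f"note is {len(text)} characters; the limit is {MAX_CHARS}")
    if not re.search(r"https?://\S+", text):  # at least one link is required
        problems.append("note must include a supporting link")
    return problems

# A draft with no source fails; adding a link clears it.
print(validate_note("This claim is misleading."))
print(validate_note("This claim is misleading. See https://example.org/report"))
```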
Once a note is submitted, it’s evaluated by other Community Notes contributors. Meta uses X’s open-source algorithm (which may evolve later as the company learns more about how it all really works, Meta says) to determine whether the note is helpful and unbiased. The algorithm considers various factors, like a contributor’s rating history and whether people who typically disagree with certain types of notes approve of this one. Allegedly, the latter is the firewall against coordinated activism targeting certain types of posts (although the algorithm hasn’t proved entirely effective on X).
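The mechanics of that firewall are worth unpacking. X’s published approach is a bridging-based matrix factorization: every rating is modeled as a global intercept, plus a contributor intercept, plus a note intercept, plus a product of latent “viewpoint” factors, and only the note intercept counts as helpfulness. Agreement that the factor term can explain as one-sided doesn’t lift a note’s score. Here is a minimal sketch of that idea, with toy data, variable names, and hyperparameters of my own choosing rather than anything taken from Meta’s or X’s code:

```python
import numpy as np

# Toy ratings (contributor, note, rating): contributors 0-2 and 3-5 form two
# camps. Note 0 is rated helpful by one camp only; note 1 by both camps.
ratings = [
    (0, 0, 1.0), (1, 0, 1.0), (2, 0, 1.0), (3, 0, 0.0), (4, 0, 0.0), (5, 0, 0.0),
    (0, 1, 1.0), (1, 1, 1.0), (2, 1, 1.0), (3, 1, 1.0), (4, 1, 1.0), (5, 1, 1.0),
]
n_users, n_notes = 6, 2
rng = np.random.default_rng(0)

mu = 0.0                              # global intercept
user_b = np.zeros(n_users)            # per-contributor leniency
note_b = np.zeros(n_notes)            # per-note intercept: the helpfulness score
user_f = rng.normal(0, 0.1, n_users)  # latent viewpoint factor per contributor
note_f = rng.normal(0, 0.1, n_notes)  # latent viewpoint factor per note

lr, lam_b, lam_f = 0.05, 0.10, 0.02   # intercepts regularized harder than factors,
for _ in range(3000):                 # pushing one-sided agreement onto the factors
    for u, n, r in ratings:
        err = (mu + user_b[u] + note_b[n] + user_f[u] * note_f[n]) - r
        g_uf = err * note_f[n] + lam_f * user_f[u]   # gradients from pre-update values
        g_nf = err * user_f[u] + lam_f * note_f[n]
        mu        -= lr * err
        user_b[u] -= lr * (err + lam_b * user_b[u])
        note_b[n] -= lr * (err + lam_b * note_b[n])
        user_f[u] -= lr * g_uf
        note_f[n] -= lr * g_nf

# Note 0's cross-camp split is absorbed by the factor term, so its intercept
# lands below note 1's. X's published rule surfaces a note only when this
# intercept clears a threshold (roughly 0.4 in its documentation).
for n in range(n_notes):
    print(f"note {n}: helpfulness intercept = {note_b[n]:+.2f}")
```

Because the helpfulness score is whatever part of a rating survives after the viewpoint factors have explained what they can, a wave of thumbs-ups from one camp alone moves a note’s factor, not its score. That is the whole anti-brigading bet.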
The evaluation interface gives contributors a clear, straightforward way to rate a note’s quality and helpfulness: a simple thumbs-up/thumbs-down system, which leads to another overlay menu where they can select the reason for their choice. Meta claims that if a note reaches consensus among contributors with diverse viewpoints, it will be publicly displayed beneath the original post, providing additional context without directly altering the post’s visibility or reach. The design aims to present the note as an informative supplement rather than a definitive judgment, allowing users to make their own informed decisions.
AN UNSOLVABLE PROBLEM?
While the idea of crowdsourced fact-checking holds some theoretical appeal, Meta’s implementation appears to be riddled with the same vulnerabilities and unanswered questions that have dogged X. On Elon Musk’s platform, Community Notes have failed to actually fact-check. They also suffer from extreme latency, the lag between a false post appearing and its note showing up: a report from Bloomberg found that a typical note took seven hours on average to appear, and sometimes as long as 70 hours, meaning false posts can go viral before they get checked. Community Notes on X have also failed to reduce engagement with false information. And because just 12.5% of submitted notes are ever seen, most of their intrinsic value never reaches the community. And let’s not forget the potential for the system to be gamed by particular interests. Meta’s own Oversight Board has pointed out “huge problems” with the plan.
Still, the company’s rationale for favoring Community Notes over third-party fact-checkers hinges on two key arguments: scalability and reduced bias. Traditional fact-checking is a labor-intensive process that struggles to keep pace with the deluge of content on social media; that much makes sense. By enlisting community members to flag and contextualize posts, Meta hopes to cover a much wider range of potentially problematic material.
The social media company also argues that relying on a diverse group of contributors will mitigate the perceived bias of professional fact-checkers, who are often accused of political partisanship. The company and its designers cited a 2021 study by Allen et al., published in Science Advances under the title “Scaling up fact-checking using the wisdom of crowds,” as evidence that politically balanced crowds can achieve accuracy comparable to experts.
CRACKS IN THE FOUNDATION
A critical examination of the study reveals a significant gap between the research and Meta’s proposed implementation. The study explicitly required political balancing of raters to achieve accurate results. Meta, on the other hand, has not clearly explained how it will ensure viewpoint diversity among contributors without collecting sensitive political data. Rather than assessing a user’s past interactions on the platform, Meta plans to simply look at contributors’ rating history on notes to judge whether a diversity of viewpoints has been achieved.
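It’s easier to see in code how that inference could work. In the factorization sketched earlier, contributors who repeatedly land on opposite sides of notes end up with viewpoint factors of opposite sign, so “diverse agreement” can be checked without asking anyone’s politics. This helper, with a name and cutoff of my own invention, reuses the toy model’s learned user_f values:

```python
def has_diverse_support(note_id, ratings, user_f, min_per_side=2.0):
    """Did contributors from *both* inferred camps rate this note helpful?

    The sign of a contributor's learned viewpoint factor stands in for their
    camp; no demographic or political data is involved.
    """
    left = sum(r for u, n, r in ratings if n == note_id and user_f[u] < 0)
    right = sum(r for u, n, r in ratings if n == note_id and user_f[u] >= 0)
    return left >= min_per_side and right >= min_per_side

# In the toy data above, only the note praised across both camps passes.
print([has_diverse_support(n, ratings, user_f) for n in range(n_notes)])
```

Whether the axes of disagreement this uncovers actually track the political balance the Allen et al. study required is precisely the unanswered question.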
Furthermore, the study only assessed the accuracy of headlines and ledes, not full articles. This raises serious concerns about the system’s ability to handle complex or nuanced misinformation, where the truth may lie in the details. The 500-character limit on notes adds to this concern. When I asked how it would be possible to truly add deep context to a post in which the truth is not binary (and let’s face it: it almost never is), there was no clear answer, just a silence followed by the explanation that the length could always be expanded if users demand it. Links to external sources can be included to provide more in-depth information, though the designers admitted that this adds another step for the reader. It’s hard to imagine people clicking through in this era of content fast food.
The company also doesn’t have a plan for addressing one of the biggest issues in tackling misinformation, one that afflicts both Community Notes and third-party fact-checking: the implied truth effect. Research shows that “attaching notes to a subset of fake news headlines increases perceived accuracy of headlines without warnings.” In other words, when a false post carries no note, people may assume it’s true. Meta’s designers say notes will take about as long to go through the community fact-checking process as they do on X, which leaves plenty of time for fake news to go viral. And since X has shown that only a small percentage of posts ever get annotated, the implied truth effect will, no doubt, be felt in Meta’s implementation of the same technology, at least in its current state. (The old third-party fact-checking suffered from similar latency problems.)
NO PENALTY
Under the previous system, posts that fact-checkers identified as false or misleading had their distribution reduced. Community Notes, in contrast, will simply provide additional context, without impacting the reach of the original content. This decision flies in the face of research suggesting that warnings alone are less effective than warnings combined with reduced distribution.
Meta says it wants to prioritize providing users with context rather than suppressing content. Its belief is that users can make their own informed decisions when presented with additional information. The fear is that demoting posts could lead to accusations of censorship and further erode trust in the platform.
Meta says it will monitor the system, evaluating latency, coverage, and downstream effects on viewership, and will use those metrics to guide future work, refinements, testing, and iterations. But the company says there has been no A/B testing of Community Notes against third-party fact-checking. Rather, it is treating this initial rollout phase as a public beta, a way to feed the algorithm with contributor data so the system can get up and running.
FEAR, UNCERTAINTY, LOTS OF DOUBT
Twitter rolled out its proto version of Community Notes, called Birdwatch, in early 2021, and the feature has continued to evolve with mixed results since Elon Musk took over and rebranded it under its current moniker. While Meta will use X’s open-source algorithm as the basis of its rating system, feeding it enough information to become operative could take quite a while. According to the Meta designers, the initial lack of public visibility is intended to let them train and thoroughly test the system and identify potential problems before rolling it out to a wider audience. Meta isn’t saying how the notes will appear to all users, only noting in a press release that “the plan is to roll out Community Notes across the United States once we are comfortable from the initial beta testing that the program is working in broadly the way we believe it should.”
Meta says it will gradually increase the visibility of the notes as it gains confidence in the system’s effectiveness, but did not provide a specific timeline or metrics for success. In a bid for transparency, Meta will release the algorithms that it uses.
It remains to be seen whether Meta’s Community Notes will be more effective than the third-party fact-checking process it replaces. Nothing in the user experience suggests it can solve the problems X has had, so we can logically expect Meta to run into many of the same issues. In a historical moment when the truth is treated like malleable material, we could use a lot more certainty. Meta may have missed the chance to scientifically develop a new, non-derivative user experience that could avoid X’s problems. Instead, we’re getting Musk’s broken toy with a fresh coat of paint and the hope that, magically, it will work this time.