X is piloting a program that lets AI chatbots generate Community Notes

Jul 1, 2025 - 20:30

The social platform X will pilot a feature that allows AI chatbots to generate Community Notes.

Community Notes is a Twitter-era feature that Elon Musk has expanded under his ownership of the service, now called X. Users who are part of this fact-checking program can contribute comments that add context to certain posts, and those comments are then checked by other users before they appear attached to a post. A Community Note may appear, for example, on a post featuring an AI-generated video that does not disclose its synthetic origins, or as an addendum to a misleading post from a politician.

Notes become public when they achieve consensus between groups that have historically disagreed on past ratings.
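That consensus requirement can be illustrated with a toy sketch. The real Community Notes scoring infers viewpoint clusters from each rater's history; the explicit group labels, the minimum-rater count, and the 0.5 threshold below are simplifying assumptions for illustration only:

```python
# Toy sketch of "bridging" consensus: a note only goes public if raters
# from historically disagreeing groups BOTH find it helpful.
# Group labels and thresholds here are illustrative assumptions; the real
# system infers viewpoint clusters from rating history.

def note_goes_public(ratings, min_per_group=2, threshold=0.5):
    """ratings: list of (group, helpful) pairs, e.g. ("A", True)."""
    by_group = {}
    for group, helpful in ratings:
        by_group.setdefault(group, []).append(helpful)
    if len(by_group) < 2:
        return False  # need raters from at least two groups
    for votes in by_group.values():
        if len(votes) < min_per_group:
            return False  # not enough ratings from this group
        if sum(votes) / len(votes) < threshold:
            return False  # this group finds the note unhelpful
    return True

# Helpful to both groups: published.
print(note_goes_public([("A", True), ("A", True), ("B", True), ("B", True)]))   # True
# Helpful to only one group: held back.
print(note_goes_public([("A", True), ("A", True), ("B", False), ("B", False)])) # False
```

The key design point is that a note popular with only one side never surfaces, no matter how many ratings it collects from that side.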

Community Notes have been successful enough on X to inspire Meta, TikTok, and YouTube to pursue similar initiatives — Meta eliminated its third-party fact-checking programs altogether in exchange for this low-cost, community-sourced labor.

But it remains to be seen if the use of AI chatbots as fact-checkers will prove helpful or harmful.

These AI notes can be generated using X’s Grok or by using other AI tools and connecting them to X via an API. Any note that an AI submits will be treated the same as a note submitted by a person, which means that it will go through the same vetting process to encourage accuracy.
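As a rough sketch of what such a bot-side integration could look like: the `call_llm` helper and the payload fields below are hypothetical placeholders, not X's actual API, and the only point carried over from the article is that an AI-drafted note enters the same human vetting pipeline as a person-written one:

```python
import json

def call_llm(post_text):
    """Hypothetical stand-in for any LLM (Grok or a third-party model)."""
    return f"Proposed context: {post_text[:80]}"

def build_note_submission(post_id, post_text):
    # The AI drafts a candidate note; once submitted, it goes through
    # the same human vetting process as a note written by a person.
    draft = call_llm(post_text)
    return json.dumps({"post_id": post_id, "note": draft, "source": "ai"})

body = build_note_submission("1234567890", "This clip appears to be AI-generated")
```

The `"source"` field is likewise an assumption, included only to show how a platform might distinguish AI submissions while still routing them through identical review.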

The use of AI in fact-checking seems dubious, given how common it is for AIs to hallucinate, or make up context that is not based in reality.

Image Credits: Research by X Community Notes

A paper published this week by researchers working on X Community Notes recommends that humans and LLMs work in tandem. Human feedback can enhance AI note generation through reinforcement learning, with human note raters remaining a final check before notes are published.
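The loop the paper describes can be sketched in a few lines. The functions and the 0.7 publication threshold are illustrative placeholders, not the paper's implementation; the point is the shape of the pipeline: the LLM drafts, humans rate, the rating doubles as a reward signal, and humans remain the final gate:

```python
# Minimal sketch of a human-in-the-loop flow: LLM drafts notes, human
# raters score them, scores feed back as a reinforcement signal, and
# only human-approved notes are published. All functions are
# illustrative placeholders, not the paper's actual code.

def run_loop(posts, generate_note, rate_note, update_model, publish):
    for post in posts:
        note = generate_note(post)   # LLM drafts a candidate note
        rating = rate_note(note)     # human raters score it (0..1)
        update_model(note, rating)   # rating doubles as an RL reward
        if rating >= 0.7:            # humans remain the final gate
            publish(note)

published, rewards = [], []
run_loop(
    ["misleading clip", "satire post"],
    generate_note=lambda p: f"note for {p}",
    rate_note=lambda n: 0.9 if "misleading" in n else 0.4,
    update_model=lambda n, r: rewards.append(r),
    publish=published.append,
)
# published -> ["note for misleading clip"]; the low-rated note is dropped,
# but its rating still updates the model.
```

This is the "virtuous loop" the authors describe: every rating improves the generator, while publication stays under human control.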

“The goal is not to create an AI assistant that tells users what to think, but to build an ecosystem that empowers humans to think more critically and understand the world better,” the paper says. “LLMs and humans can work together in a virtuous loop.”

Even with human checks, there is still a risk in relying too heavily on AI, especially since users will be able to embed LLMs from third parties. OpenAI’s ChatGPT, for example, recently experienced issues with a model being overly sycophantic. If an LLM prioritizes “helpfulness” over accurately completing a fact-check, the AI-generated comments may end up being flat-out inaccurate.

There’s also concern that human raters will be overwhelmed by the volume of AI-generated notes, sapping their motivation to adequately complete this volunteer work.

Users shouldn’t expect to see AI-generated Community Notes yet — X plans to test these AI contributions for a few weeks before rolling them out more broadly if they’re successful.

