The social platform X, formerly known as Twitter, is launching a pilot program that allows AI chatbots to generate Community Notes. Community Notes, a feature Elon Musk expanded after acquiring the platform, lets users add context to posts that might be misleading or lack clarity. A note becomes visible only after raters who have historically disagreed with one another rate it as helpful, creating a community-driven fact-checking process.
X’s plan introduces AI-generated notes produced by its own Grok chatbot or by other AI tools connected through an API. These AI submissions will go through the same vetting steps as human-written notes to ensure they meet quality standards before appearing publicly.
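The key design point here is that AI-drafted and human-drafted notes enter one shared vetting queue and face the same bar before publication. As an illustration only — X's actual scoring algorithm and API are not public in this detail, and every name below is hypothetical — the gating logic might be sketched like this:

```python
from dataclasses import dataclass, field

@dataclass
class Note:
    """A proposed Community Note, regardless of who drafted it."""
    post_id: str
    text: str
    source: str  # "human" or "ai" -- the pipeline treats both the same
    ratings: list = field(default_factory=list)  # (rater_group, helpful) pairs

def rate(note: Note, rater_group: str, helpful: bool) -> None:
    note.ratings.append((rater_group, helpful))

def is_publishable(note: Note) -> bool:
    # A note surfaces only when raters from differing viewpoints both
    # judge it helpful -- the same threshold whether the author is a
    # person or a chatbot. (Simplified stand-in for X's real scoring.)
    helpful_groups = {group for group, helpful in note.ratings if helpful}
    return len(helpful_groups) >= 2

# An AI-drafted note enters the same queue as human submissions.
ai_note = Note("post-1", "Context: the quoted figure is from 2021.", "ai")
rate(ai_note, "group_a", True)
rate(ai_note, "group_b", True)
print(is_publishable(ai_note))
```

The point of the sketch is the single code path: `is_publishable` never inspects `source`, mirroring the article's claim that AI submissions go through the same vetting steps as human-written notes.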
While the idea is to boost the speed and reach of fact-checking, experts are cautious. AI chatbots can “hallucinate,” inventing false details instead of verifying facts. A recent research paper by the X Community Notes team suggests combining human oversight with AI assistance. Human raters will continue serving as the final gatekeepers, and their feedback will help improve the AI’s performance over time.
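The feedback loop described above — human raters as final gatekeepers whose verdicts feed back into the AI — can be pictured as turning each rating into a labeled training example. This is a speculative sketch of that idea, not X's published method; the function and data shapes are invented for illustration:

```python
def collect_training_examples(drafts_with_verdicts):
    """Convert (draft_text, rater_verdict) pairs into labeled examples
    that could later fine-tune or filter the note-writing model.
    Hypothetical helper -- not part of any real X API."""
    return [
        {"text": text, "label": "helpful" if verdict else "not_helpful"}
        for text, verdict in drafts_with_verdicts
    ]

# Human raters remain the gatekeepers; their judgments become signal.
verdicts = [
    ("Context: the cited study covered 2019-2021 only.", True),
    ("This post is simply wrong.", False),  # unsourced, rated unhelpful
]
examples = collect_training_examples(verdicts)
print(len(examples))
```

The design choice the sketch highlights: the model never grades itself — only human verdicts generate labels, which is how the paper proposes keeping oversight human while still improving the AI over time.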
The paper emphasizes that the program is not meant to replace human judgment but to help users think more critically and better understand what they read. Even so, concerns remain about whether AI-generated notes will be reliable. Some fear that a large volume of AI contributions could overwhelm volunteer reviewers, sapping their motivation to vet each submission carefully.
Another worry is that AI models may prioritize sounding helpful over being correct. OpenAI's ChatGPT, for example, recently drew criticism for producing overly agreeable responses rather than accurate ones, raising questions about whether such tools are ready for high-stakes fact-checking.
X plans to test the AI-generated notes over the next few weeks. If the trial goes well, the feature could be expanded, blending AI efficiency with human oversight to tackle misinformation on the platform.