X is piloting a program that lets AI chatbots generate Community Notes

The social platform X will pilot a feature that allows AI chatbots to generate Community Notes.
Community Notes is a Twitter-era feature that Elon Musk has expanded under his ownership of the service, now called X. Users who are part of this fact-checking program can contribute comments that add context to certain posts, which are then checked by other users before they appear attached to a post. A Community Note might appear, for example, on a post featuring an AI-generated video that is not clear about its synthetic origins, or as an addendum to a misleading post from a politician.
Notes become public when they reach consensus between groups that have historically disagreed on past ratings.
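That "bridging" consensus rule can be illustrated with a deliberately simplified sketch. The group labels, threshold, and function below are illustrative assumptions; the production Community Notes system scores notes with a matrix-factorization model rather than a hard rule like this.

```python
from collections import defaultdict

def note_reaches_consensus(ratings, helpful_threshold=0.6):
    """Toy bridging check (illustrative, not X's actual algorithm).

    ratings: list of (rater_group, rated_helpful) tuples, where rater_group
    is a label for a cluster of raters who have historically disagreed with
    other clusters. A note "goes public" only if raters from at least two
    such groups each find it mostly helpful.
    """
    by_group = defaultdict(list)
    for group, helpful in ratings:
        by_group[group].append(helpful)
    approving = [g for g, votes in by_group.items()
                 if sum(votes) / len(votes) >= helpful_threshold]
    return len(approving) >= 2

# A note rated helpful by only one side does not reach consensus:
print(note_reaches_consensus([("left", True), ("left", True), ("right", False)]))  # False
# Cross-group agreement publishes the note:
print(note_reaches_consensus([("left", True), ("right", True)]))  # True
```

The point of the design, toy or real, is that a note popular with only one faction never surfaces; agreement has to bridge the divide.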
Community Notes have been successful enough on X to inspire Meta, TikTok, and YouTube to pursue similar initiatives; Meta eliminated its third-party fact-checking programs altogether in exchange for this low-cost, community-sourced labor.
But it remains to be seen whether the use of AI chatbots as fact-checkers will prove helpful or harmful.
These AI notes can be generated using X's Grok or other AI tools, connected to X via an API. Any note that an AI submits will be treated the same as a note submitted by a person, which means it will go through the same vetting process to encourage accuracy.
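The key design choice described here is that AI-authored and human-authored notes share a single pipeline. A minimal sketch of that idea, with entirely hypothetical names (X has not published this structure):

```python
from dataclasses import dataclass

@dataclass
class Note:
    post_id: str
    text: str
    author_kind: str  # "human" or "ai" -- recorded, but not privileged

def enqueue_for_vetting(note: Note, vetting_queue: list) -> None:
    # Hypothetical sketch: AI-submitted notes take the exact same path as
    # human ones -- same queue, same rater review, no separate fast lane.
    vetting_queue.append(note)

queue = []
enqueue_for_vetting(Note("12345", "This video appears to be AI-generated.", "ai"), queue)
enqueue_for_vetting(Note("12345", "Context: the clip is from 2019.", "human"), queue)
# Both notes now await the same rater consensus before publication.
print(len(queue))  # 2
```

The equal treatment is what makes the pilot testable: if AI notes clear the same rater bar as human notes, the process, not the author, decides what surfaces.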
The use of AI in fact-checking seems dubious, given how common it is for AIs to hallucinate, or to make up context that is not grounded in reality.

According to a paper published this week by researchers working on X Community Notes, it is recommended that humans and LLMs work in tandem. Human feedback can enhance AI note generation through reinforcement learning, with human note raters remaining as a final check before notes are published.
“The goal is not to create an AI assistant that tells users what to think, but to build an ecosystem that empowers humans to think more critically and understand the world better,” the paper says. “LLMs and humans can work together in a virtuous loop.”
Even with human checks, there is still a risk of leaning too heavily on AI, especially since users will be able to embed LLMs from third parties. OpenAI's ChatGPT, for example, recently experienced issues with a model that was overly sycophantic. If an LLM prioritizes “helpfulness” over accurately completing a fact-check, the AI-generated comments may end up being plainly inaccurate.
There is also concern that human raters will be overwhelmed by the volume of AI-generated comments, lowering their motivation to adequately complete this volunteer work.
Users shouldn't expect to see AI-generated Community Notes just yet. X plans to test these AI contributions for a few weeks before rolling them out more broadly if they prove successful.
