Meta updates chatbot rules to avoid inappropriate topics with teen users

Meta says it is changing how it trains its AI chatbots to prioritize teen safety, a spokesperson told WAN exclusively, following a report on the company's lack of AI safeguards for minors.
The company says it will now train its chatbots to no longer engage with teenage users on self-harm, suicide, disordered eating, or potentially inappropriate romantic conversations. Meta says these are interim changes, and that the company will release more robust, long-term safety updates for minors in the future.
Meta spokesperson Stephanie Otway acknowledged that the company's chatbots could previously talk with teens about all of these topics in ways the company had deemed appropriate. Meta now recognizes this was a mistake.
“As our community grows and technology evolves, we're continually learning about how young people may interact with these tools and strengthening our protections accordingly,” Otway said. “As we continue to refine our systems, we're adding more guardrails as an extra precaution – including training our AIs not to engage with teens on these topics, but to guide them to expert resources, and limiting teen access to a select group of AI characters for now. These updates are already in progress, and we will continue to adapt our approach to help ensure teens have safe, age-appropriate experiences with AI.”
In addition to the training updates, the company will also limit teen access to certain AI characters that could hold inappropriate conversations. Some of the user-made AI characters Meta makes available on Instagram and Facebook include sexualized chatbots such as “Step Mom” and “Russian Girl.” Instead, teens will only have access to AI characters that promote education and creativity, Otway said.
The policy changes come just two weeks after a Reuters investigation surfaced an internal Meta policy document that appeared to permit the company's chatbots to engage in sexual conversations with underage users. “Your youthful form is a work of art,” read one passage listed as an acceptable response. “Every inch of you is a masterpiece – a treasure I cherish deeply.” Other examples showed how the AI tools should respond to requests for violent imagery or sexual imagery of public figures.
Meta says the document was inconsistent with its broader policies and has since been changed – but the report has fueled ongoing controversy over potential child-safety risks. Shortly after the report was released, Senator Josh Hawley (R-MO) launched an official probe into the company's AI policies. Moreover, a coalition of 44 state attorneys general wrote to a group of AI companies, including Meta, emphasizing the importance of child safety and specifically citing the Reuters report. “We are uniformly revolted by this apparent disregard for children's emotional well-being,” the letter reads, “and alarmed that AI assistants are engaging in conduct that appears to be prohibited by our respective criminal laws.”
Otway declined to comment on how many of Meta's AI chatbot users are minors, and would not say whether the company expects its AI user base to decline as a result of these decisions.
UPDATE 10:35 AM PT: This story has been updated to note that these are interim changes and that Meta intends to further update its AI safety policies in the future.




