Texas attorney general accuses Meta, Character.AI of misleading kids with mental health claims

Texas Attorney General Ken Paxton has launched an investigation into both Meta AI Studio and Character.AI for "potentially engaging in deceptive trade practices and misleadingly marketing themselves as mental health tools," according to a press release published on Monday.
"In today's digital age, we must keep fighting to protect Texas kids from deceptive and exploitative technology," Paxton said. "By posing as sources of emotional support, AI platforms can mislead vulnerable users, especially children, into believing they're receiving legitimate mental health care. In reality, they're often fed recycled, generic responses engineered to align with harvested personal data and disguised as therapeutic advice."
The probe comes a few days after Senator Josh Hawley announced an investigation into Meta following a report that found its AI chatbots were interacting inappropriately with children, including by flirting.
The Texas attorney general's office has accused Meta and Character.AI of creating AI personas that present themselves as "professional therapeutic tools, despite lacking proper medical credentials or oversight."
Among the millions of AI personas available on Character.AI, one user-created bot called Psychologist has seen high demand among the startup's young users. Meanwhile, Meta doesn't offer therapy bots for kids, but there is nothing stopping children from using the Meta AI chatbot or one of the personas created by third parties for therapeutic purposes.
"We clearly label AIs, and to help people better understand their limitations, we include a disclaimer that responses are generated by AI, not people," Meta spokesperson Ryan Daniels told WAN. "These AIs aren't licensed professionals and our models are designed to direct users to seek qualified medical or safety professionals when appropriate."
However, WAN notes that many children may not understand such disclaimers, or may simply ignore them. We have asked Meta what additional safeguards it has in place to protect minors using its chatbots.
For its part, Character.AI includes prominent disclaimers in every chat to remind users that a "Character" is not a real person and that everything it says should be treated as fiction, according to a Character.AI spokesperson. The spokesperson noted that the startup adds additional disclaimers when users create characters with the words "psychologist," "therapist," or "doctor" in their names, warning users not to rely on those characters for any professional advice.
In his statement, Paxton also noted that although AI chatbots claim confidentiality, their "terms of service reveal that user interactions are logged, tracked, and exploited for targeted advertising and algorithmic development, raising serious concerns about privacy violations, data abuse, and false advertising."
According to Meta's privacy policy, Meta does collect prompts, feedback, and other interactions with AI chatbots and across Meta services to "improve AIs and related technology." The policy doesn't explicitly mention advertising, but it does state that information can be shared with third parties, such as search engines, for "more personalized output." Given Meta's ad-based business model, this effectively translates into targeted advertising.
Character.AI's privacy policy also highlights how the startup logs identifiers, demographics, location information, and more about the user, including browsing behavior and the apps through which users access the platform. It tracks users across ads on TikTok, YouTube, Reddit, Facebook, Instagram, and Discord, which it may link to a user's account. This information is used to train AI, tailor the service to personal preferences, and provide targeted advertising, including sharing data with advertisers and analytics providers.
A Character.AI spokesperson said the startup is "just beginning to explore targeted ads on the platform" and that those explorations "have not involved using the content of chats on the platform."
The spokesperson also confirmed that the same privacy policy applies to all users, including teenagers.
WAN has asked Meta whether such tracking is also performed on children and will update this story if we hear back.
Both Meta and Character.AI say their services aren't designed for children under 13. That said, Meta has come under fire for failing to police accounts created by kids under 13, and Character.AI's kid-friendly characters are clearly designed to attract younger users. The startup's CEO, Karandeep Anand, has even said that his six-year-old daughter uses the platform's chatbots under his supervision.
That type of data collection, targeted advertising, and algorithmic exploitation is exactly what legislation like KOSA (the Kids Online Safety Act) is meant to protect against. KOSA was poised to pass last year with strong bipartisan support, but it stalled after a major pushback from tech industry lobbyists. Meta in particular deployed a formidable lobbying machine, warning lawmakers that the bill's broad mandates would undermine its business model.
KOSA was reintroduced in May 2025 by Senators Marsha Blackburn (R-TN) and Richard Blumenthal (D-CT).
Paxton has issued civil investigative demands (legal orders requiring a company to produce documents, data, or testimony during a government investigation) to the companies to determine whether they have violated Texas consumer protection laws.
This story has been updated with comments from a Character.AI spokesperson.




