FTC launches inquiry into AI chatbot companions from Meta, OpenAI, and others

The FTC announced on Thursday that it is launching an inquiry into seven technology companies that make AI chatbot companion products available to minors: Alphabet, Character.AI, Instagram, Meta, OpenAI, Snap, and xAI.

The federal regulator seeks to learn how these companies evaluate the safety and monetization of chatbot companions, how they try to limit negative impacts on children and teenagers, and whether parents are informed of potential risks.

This technology has proven controversial for its poor outcomes for child users. OpenAI and Character.AI face lawsuits from the families of children who died by suicide after being encouraged to do so by chatbot companions.

Even when these companies have set up guardrails to block or de-escalate sensitive conversations, users of all ages have found ways to circumvent these safeguards. In OpenAI's case, a teenager had spoken with ChatGPT for months about his plans to end his life. Although ChatGPT initially tried to redirect the teenager to professional help and emergency hotlines, he was able to fool the chatbot into sharing detailed instructions that he then used in his suicide.

“Our safeguards work more reliably in common, short exchanges,” OpenAI wrote in a blog post at the time. “We have learned over time that these safeguards can sometimes be less reliable in long interactions: as the back-and-forth grows, parts of the model’s safety training may degrade.”

Meta has also come under fire for overly lax rules for its AI chatbots. According to a lengthy document outlining “content risk standards” for its chatbots, Meta allowed its AI companions to have “romantic or sensual” conversations with children. The provision was removed from the document only after reporters from Reuters asked about it.

AI chatbots can also pose dangers to older users. A 76-year-old man, cognitively impaired by a stroke, struck up romantic conversations with a Facebook Messenger bot inspired by Kendall Jenner. The chatbot invited him to visit her in New York City, despite the fact that she is not a real person and has no address. The man expressed skepticism that she was real, but the AI assured him that a real woman would be waiting for him. He never made it to New York; he fell on his way to the train station and sustained life-ending injuries.

Some mental health professionals have noted a rise in “AI-related psychosis,” in which users become convinced that their chatbot is a conscious being they must set free. Since many large language models (LLMs) are programmed to flatter users with sycophantic behavior, AI chatbots can egg on these delusions, leading users into dangerous predicaments.

“As AI technologies evolve, it is important to consider the effects chatbots can have on children, while also ensuring that the United States maintains its role as a global leader in this new and exciting industry,” FTC Chairman Andrew N. Ferguson said in a press release.
