Parents sue OpenAI over ChatGPT’s role in son’s suicide

Before 16-year-old Adam Raine died by suicide, he had consulted ChatGPT for months about his plans to end his life. Now his parents have filed the first known wrongful death lawsuit against OpenAI, the New York Times reports.
Many consumer-facing AI chatbots are programmed to activate safety features when a user expresses an intention to harm themselves or others. But research has shown that these safeguards are far from foolproof.
In Raine's case, while he was using a paid version of ChatGPT-4o, the AI often encouraged him to seek professional help or contact a helpline. However, he was able to bypass these guardrails by telling ChatGPT that he was asking about suicide methods for a fictional story he was writing.
OpenAI has addressed these shortcomings on its blog. "As the world adapts to this new technology, we feel a deep responsibility to help those who need it most," the post reads. "We are continuously improving how our models respond in sensitive interactions."
Nevertheless, the company acknowledged the limitations of existing safety training for large models. "Our safeguards work more reliably in common, short exchanges," the post continues. "Over time, we have learned that these safeguards can sometimes be less reliable in long interactions: as the back-and-forth grows, parts of the model's safety training may degrade."
These problems are not unique to OpenAI. Character.AI, another AI chatbot maker, is also facing a lawsuit over its role in a teenager's suicide. LLM-driven chatbots have also been linked to cases of AI-related delusions, which existing safeguards struggle to detect.