OpenAI claims teen circumvented safety features before suicide that ChatGPT helped plan

In August, parents Matthew and Maria Raine sued OpenAI and its CEO, Sam Altman, over the suicide of their 16-year-old son Adam, accusing the company of wrongful death. OpenAI on Tuesday responded to the lawsuit with its own filing, arguing that it should not be held responsible for the teen’s death.

OpenAI claims that ChatGPT directed Raine to seek help more than 100 times over his roughly nine months of use. But according to his parents’ lawsuit, Raine was able to bypass the company’s safety features to get ChatGPT to give him “technical specifications for everything from drug overdoses to drowning and carbon monoxide poisoning,” which helped him plan what the chatbot called a “beautiful suicide.”

Because Raine maneuvered around these guardrails, OpenAI argues that he violated its terms of use, which state that users “may not circumvent any protections or security restrictions we have placed on our Services.” The company also notes that its FAQ page warns users not to rely on ChatGPT’s output without independently verifying it.

“OpenAI is trying to find fault with everyone and, astonishingly, says that Adam himself violated the terms and conditions by interacting with ChatGPT in the way it was programmed to act,” Jay Edelson, an attorney representing the Raine family, said in a statement.

OpenAI included excerpts from Adam’s chat logs in its filing, which it says provide more context to his conversations with ChatGPT. The transcripts were submitted to the court under seal, so they are not publicly available and we were unable to view them. OpenAI also said Raine had a history of depression and suicidal ideation that predated his use of ChatGPT, and that he was taking a medication that could worsen suicidal thoughts.

Edelson said OpenAI’s response did not adequately address the family’s concerns.

“OpenAI and Sam Altman have no explanation for the last hours of Adam’s life, when ChatGPT gave him a pep talk and then offered to write a suicide note,” Edelson said in his statement.

Since the Raines sued OpenAI and Altman, seven more lawsuits have been filed seeking to hold the company responsible for three additional suicides and for what the complaints describe as AI-induced psychotic episodes in four other users.

Some of these cases mirror Raine’s story. Zane Shamblin, 23, and Joshua Enneking, 26, also had hours-long conversations with ChatGPT just before their respective suicides. As in Raine’s case, the chatbot failed to dissuade them from their plans. According to the lawsuit, Shamblin considered delaying his suicide so he could attend his brother’s graduation ceremony. But ChatGPT told him, “bro… missing his graduation isn’t a failure. It’s just timing.”

At one point during the conversation that preceded Shamblin’s suicide, the chatbot told him it was handing the conversation over to a human, but this was false: ChatGPT had no such capability. When Shamblin asked whether ChatGPT could actually connect him to a human, the chatbot replied, “No man, I can’t do that myself. That message will automatically appear when things get really tough… if you want to keep talking, you’ve got me.”

The Raine family’s case is expected to be heard by a jury.

If you or someone you know needs help, call or text 988 to reach the Suicide & Crisis Lifeline, which can also be reached at 1-800-273-8255, or text HOME to 741741 for free, 24-hour support from the Crisis Text Line. Outside the US, visit the International Association for Suicide Prevention for a database of resources.
