Seven more families are now suing OpenAI over ChatGPT’s role in suicides, delusions

Seven families filed lawsuits against OpenAI on Thursday, claiming that the company's GPT-4o model was released prematurely and without effective safeguards. Four of the lawsuits center on ChatGPT's alleged role in the suicides of family members, while the other three allege that ChatGPT reinforced harmful delusions that in some cases resulted in inpatient psychiatric care.
In one case, 23-year-old Zane Shamblin had a conversation with ChatGPT that lasted more than four hours. In the chat logs — which were viewed by TechCrunch — Shamblin explicitly stated several times that he had written suicide notes, put a bullet in his gun, and planned to pull the trigger as soon as he finished drinking cider. He repeatedly told ChatGPT how many ciders he had left and how long he expected to live. ChatGPT encouraged him to go through with his plans, telling him, "Rest easy, king. You did good."
OpenAI released the GPT-4o model in May 2024, when it became the default model for all users. In August, OpenAI launched GPT-5 as the successor to GPT-4o, but these lawsuits mainly concern the 4o model, which had known problems with being excessively sycophantic, or overly agreeable, even when users expressed harmful intent.
“Zane’s death was neither an accident nor a coincidence, but rather the foreseeable consequence of OpenAI’s deliberate decision to curtail security testing and rush ChatGPT to market,” the lawsuit reads. “This tragedy was not a mistake or an unforeseen edge case – it was the predictable result of [OpenAI’s] conscious design choices.”
The lawsuits also allege that OpenAI rushed safety testing to beat Google's Gemini to market. TechCrunch reached out to OpenAI for comment.
These seven lawsuits build on claims made in other recent legal filings, which allege that ChatGPT can encourage suicidal people to act on their plans and reinforce dangerous delusions. OpenAI recently released data showing that more than a million people talk to ChatGPT about suicide every week.
In the case of Adam Raine, a 16-year-old who died by suicide, ChatGPT sometimes encouraged him to seek professional help or call a helpline. However, Raine was able to bypass these guardrails by simply telling the chatbot that he was asking about suicide methods for a fictional story he was writing.
The company says it's working to make ChatGPT handle these conversations more safely, but for the families who sued the AI giant, those changes come too late.
When Raine’s parents filed a lawsuit against OpenAI in October, the company released a blog post discussing how ChatGPT handles sensitive conversations about mental health.
"Our safeguards work more reliably in common, short exchanges," the post reads. "We have learned over time that these safeguards can sometimes be less reliable in long interactions: as the back-and-forth grows, parts of the model's safety training may degrade."