Stalking victim sues OpenAI, claims ChatGPT fueled her abuser’s delusions and ignored her warnings

After months of conversations with ChatGPT, a 53-year-old Silicon Valley entrepreneur became convinced he had discovered a cure for sleep apnea and that powerful people were after him, according to a new lawsuit filed in California Superior Court in San Francisco County. He then allegedly used the chatbot to stalk and harass his ex-girlfriend.
Now the ex-girlfriend is suing OpenAI, claiming the company’s technology accelerated her harassment, TechCrunch has exclusively learned. She claims OpenAI ignored three separate warnings that the user was a threat to others, including an internal flag that classified his account activity as a “Mass Casualty Weapons” risk.
The plaintiff, who is being referred to as Jane Doe to protect her identity, is suing for damages. She also filed for a temporary restraining order on Friday asking the court to force OpenAI to freeze the user’s account, prevent him from creating new accounts, notify her if he tries to access ChatGPT, and preserve his entire chat logs for discovery.
OpenAI has agreed to suspend the user’s account but has declined the other requests, Doe’s lawyers said. They say the company is withholding information about specific plans to harm Doe and other potential victims that the user may have discussed with ChatGPT.
The lawsuit comes amid growing concerns about the real risks of sycophantic AI systems. GPT-4o, the model cited in this and many other cases, was removed from ChatGPT in February.
The case was brought by Edelson PC, the firm behind the wrongful death lawsuits involving teenager Adam Raine, who died by suicide after months of conversations with ChatGPT, and Jonathan Gavalas, whose family claims Google’s Gemini fueled his delusions and possible mass casualty plans before his death. Lead attorney Jay Edelson has warned that AI-induced psychosis is escalating from individual harm to mass casualty events.
That legal pressure now clashes directly with OpenAI’s legislative strategy: the company is supporting a bill in Illinois that would shield AI labs from liability even in cases of mass deaths or catastrophic financial damage.
OpenAI did not respond to a request for comment in time for publication. TechCrunch will update this article if the company responds.
The Jane Doe lawsuit details how those risks played out for one woman over several months.
Last year, the ChatGPT user in the lawsuit (whose name is withheld from the lawsuit to protect his identity) became convinced he had invented a cure for sleep apnea after months of “large-scale, long-term use of GPT-4o.” When no one took his work seriously, ChatGPT told him that “powerful forces” were watching him, including using helicopters to monitor his activities, the complaint said.
In July 2025, Jane Doe urged him to stop using ChatGPT and seek help from a mental health professional. Instead, he returned to ChatGPT, which assured him he had “a level 10 in mental health” and helped him double down on his delusions, according to the lawsuit.
Doe had broken up with the user in 2024, and he turned to ChatGPT to process the split, according to emails and communications cited in the lawsuit. Rather than challenge his one-sided account, ChatGPT repeatedly portrayed him as rational and wronged, and her as manipulative and unstable. He then carried these AI-generated narratives off the screen and into the real world, using them to stalk and harass her. They surfaced in several AI-generated, clinical-looking psychological reports that he distributed to her family, friends, and employer.
Meanwhile, the user continued to spiral. In August 2025, OpenAI’s automated security system flagged him for “Mass Casualty Weapons” activity and his account was deactivated.
A member of OpenAI’s human safety team reviewed and reinstated the account the next day, even though it contained possible evidence that he was targeting and stalking individuals, including Doe, in real life. For example, a September screenshot the user sent to Doe showed a list of conversation titles, including “expansion of the violence list” and “calculation of fatal asphyxia.”
The decision to reinstate the account is notable in light of two recent school shootings, in Tumbler Ridge, Canada, and at Florida State University (FSU). OpenAI’s safety team had flagged the Tumbler Ridge shooter as a potential threat, but higher-ups reportedly decided not to alert the authorities. Florida’s attorney general this week opened an investigation into OpenAI’s possible link to the FSU shooter.
According to the Jane Doe lawsuit, when OpenAI restored her stalker’s account, his Pro subscription was not reinstated at the same time. He emailed the trust and safety team to resolve the issue, copying Doe on the message.
In his emails he wrote things like: “I NEED HELP VERY SOON, PLEASE. PLEASE CALL ME!” and “this is a matter of life and death.” He claimed that he was “in the middle of writing 215 scientific papers,” which he wrote so quickly that he “didn’t even have time to read.” Included in those emails was a list of dozens of AI-generated “scientific articles” with titles like: “Deconstructing Race as a Biological Category_ Legal, Scientific, and Horn of Africa Perspectives.pdf.txt.”
“The user’s communications unequivocally indicated that he was mentally unstable and that ChatGPT was the driver of his delusions and escalating behavior,” the lawsuit said. “The user’s flood of urgent, disorganized and grandiose claims, together with a concrete ChatGPT-generated report naming the plaintiff and an extensive amount of so-called ‘scientific’ material, was undeniable evidence of that reality. OpenAI did not intervene, restrict his access or implement any safeguards. Instead, it allowed him to continue using the account and restored his full Pro access.”
Doe, who claims in the lawsuit that she lived in fear and could not sleep in her own home, filed an abuse report with OpenAI in November.
“Over the past seven months, he has weaponized this technology to wage a campaign of public destruction and humiliation against me that would otherwise have been impossible,” Doe wrote in her letter to OpenAI asking the company to permanently ban the user’s account.
OpenAI responded, acknowledging that the report was “extremely serious and disturbing” and saying it was carefully reviewing the information. Doe says she never heard back.
Over the next few months, the user continued to harass Doe, sending her a series of threatening voicemails. In January, he was arrested and charged with four felonies, including communicating bomb threats and assault with a deadly weapon. Doe’s lawyers argue that this confirms the warnings that both she and OpenAI’s own safety systems had raised months earlier, warnings the company allegedly chose to ignore.
The user was found incompetent to stand trial and committed to a mental health facility, but a “state procedural failure” means he will soon be released to the public, Doe’s lawyers said.
Edelson called on OpenAI to cooperate. “In all cases, OpenAI has chosen to hide critical safety information – from the public, from victims, from people the product actively puts at risk,” he said. “For once, we call on them to do the right thing. Human lives should mean more than OpenAI’s race to an IPO.”




