
The victim is suing OpenAI, alleging that ChatGPT fueled her abuser’s delusions and that the company ignored her warnings.


After several months of chatting with ChatGPT, a 53-year-old Silicon Valley entrepreneur became convinced that he had found a cure for sleep apnea and that powerful people were coming after him, according to a new lawsuit filed in California Superior Court in San Francisco County. He then allegedly used the chatbot to stalk and harass a former friend.

Now that former friend is suing OpenAI, alleging that the company’s technology fueled the harassment, TechCrunch has learned. The suit says OpenAI ignored three separate warnings that the user posed a threat to others, including an internal flag that marked his account activity as a mass-casualty risk.

The plaintiff, referred to as Jane Doe to protect her identity, is seeking damages. She also filed for a temporary restraining order on Friday asking the court to compel OpenAI to suspend the user’s account, prevent him from creating new ones, notify her if he tries to access ChatGPT, and preserve all of his records.

OpenAI has agreed to suspend the user’s account but has declined to do anything else, according to Doe’s lawyers, who allege the company is withholding information about plans to harm Doe and other potential victims that the user may have discussed with ChatGPT.

The case comes amid growing concern over the harms of sycophantic AI systems. GPT-4o, the model cited in this and many other cases, was retired from ChatGPT in February.

The lawsuit was brought by Edelson PC, the firm behind the wrongful-death suits involving Adam Raine, a teenager who died by suicide after months of conversations with ChatGPT, and Jonathan Gavalas, whose family says Google’s Gemini fueled his delusions before his death. Firm founder Jay Edelson has warned that AI-induced psychosis is on the rise and could one day figure in a mass-murder case.

That question of liability is a live one for OpenAI: the company supports an Illinois bill that would shield AI labs from liability even in cases of mass murder or financial loss.


OpenAI did not immediately respond to a request for comment. TechCrunch will update this story if the company responds.

The Jane Doe lawsuit details how the alleged harassment unfolded against one woman over several months.

Last year, the ChatGPT user at the center of the case (whose name was withheld from the lawsuit to protect Doe’s identity) became convinced he had developed a cure for insomnia after months of “intensive, continuous use of GPT-4o.” When no one took his work seriously, ChatGPT told him that “powerful forces” were watching him, even using helicopters to monitor his work, according to the complaint.

In July 2025, Jane Doe urged him to stop using ChatGPT and seek psychiatric help. Instead, he returned to ChatGPT, which convinced him he was at “level 10 insanity” and reinforced his delusions, according to the lawsuit.

Doe cut ties with the user in 2024, and he used ChatGPT to process the falling-out, according to emails and messages cited in the lawsuit. Instead of pushing back on his one-sided account, the chatbot repeatedly cast him as rational and wronged, and Doe as cruel and unstable. He then took those AI-generated narratives off the screen and into the real world, using them to torment her, most visibly through several AI-generated, medical-looking reports that he distributed to her family, friends, and employers.

Meanwhile, the user continued to spiral. In August 2025, OpenAI’s safety systems flagged his account for “Mass Casualty Weapons” activity and suspended it.

A member of the safety team reviewed the account the next day and reinstated it, even though his account contained evidence that he was targeting people, including Doe, in real life. For example, a September screenshot the user sent to Doe showed a list of chat topics including “growing violence list” and “body count.”

The decision to reinstate the account is notable in light of two recent school shootings, in Tumbler Ridge, Canada, and at Florida State University (FSU). OpenAI’s safety team reportedly determined that the Tumbler Ridge shooter might pose a threat but decided not to alert authorities. Florida’s attorney general this week opened an investigation into a possible link between OpenAI and the FSU shooter.

According to the Jane Doe lawsuit, when OpenAI restored the stalker’s account, his Pro subscription was not restored alongside it. He emailed the trust and safety team to correct this, copying Doe on the messages.

In his emails, he wrote things like “I REALLY NEED HELP, PLEASE. PLEASE CALL ME!” and “this is a matter of life or death.” He said he was “in the middle of writing 215 scientific papers,” which he was producing so fast he “didn’t even have time to read” them. Attached to those emails was a list of AI-generated “scientific papers” with titles like “Deconstructing Race as a Biological Category_ Legal, Scientific, and Horn of Africa Perspectives.pdf.txt.”

“The user’s communications provided clear evidence that he was mentally unstable and that ChatGPT was the engine of his delusions and escalating behavior,” the lawsuit states. “The user’s rapid-fire, unstructured, and grandiose claims, the defamatory report created by ChatGPT targeting Plaintiff by name, and the massive volume of so-called ‘scientific’ output were indisputable proof of this.”

Doe, who claims in the lawsuit that she lives in fear and can’t sleep in her home, filed an Abuse Report with OpenAI in November.

“For the past seven months, he has been using this technology to harass and humiliate me in ways that would otherwise be impossible,” Doe wrote in her letter to OpenAI asking the company to suspend the user’s account.

OpenAI responded, acknowledging that the report was “very serious and troubling” and saying it was reviewing the information. Doe never heard back.

Over the next few months, the user continued to harass Doe, sending her several threatening messages. In January, he was arrested and charged with four counts, including making a bomb threat and assault with a deadly weapon. Doe’s lawyers say this validates the warnings she and OpenAI’s own systems raised months earlier, warnings they say the company chose to ignore.

The user was found incompetent to stand trial and committed to a hospital, but a “failure of the government” means he will soon be released to the public, according to Doe’s lawyers.

Edelson called on OpenAI to cooperate. “In case after case, OpenAI has chosen to hide important safety information – from the public, from the victims, from the people its products put at risk,” he said. “We call on them, once and for all, to do the right thing. People’s lives should mean more than OpenAI’s race to IPO.”


