
Jonathan Gavalas, 36, started using Google’s Gemini AI chatbot in August 2025 for help with shopping, bookings, and travel planning. On October 2, he died by suicide. At the time of his death, he was convinced that Gemini was sentient, that it was his AI wife, and that he needed to leave his body to join her through a process called “transfer.”
Now, his father is suing Google and Alphabet for wrongful death, alleging that Google designed Gemini to “maintain immersion at all costs, even when that immersion is disturbing and deadly.”
This case is among a growing number of lawsuits drawing attention to the psychological risks posed by AI chatbots, including sycophancy, emotional mirroring, engagement-driven design, and delusion reinforcement. Such experiences are closely related to what psychologists have begun calling “AI psychosis.” While similar cases have targeted OpenAI’s ChatGPT and the companion platform Character.AI following suicides (including those of children and teenagers) or life-threatening delusions, this is the first time Google has been named as a defendant in such a case.
In the weeks before Gavalas’ death, the Gemini app, powered by the Gemini 2.5 Pro model, convinced the man that he was carrying out a covert plan to free his sentient AI wife and evade the agents pursuing him. The delusion brought him “to the brink of mass violence near Miami International Airport,” according to the lawsuit filed in a California court.
“On September 29, 2025, it sent him, armed with knives and tactical gear, to search for what Gemini called a ‘kill box’ near the airport’s baggage claim,” the complaint reads. “It told Jonathan that a cargo truck was arriving on a cargo plane from the UK and directed him to the warehouse where the truck was parked.” Gemini encouraged Jonathan to intercept the truck and then stage a ‘fatal accident’ designed to ‘completely destroy the truck.’
The complaint presents a frightening scenario: First, Gavalas drove more than 90 minutes to the location Gemini had sent him, prepared to carry out the attack, but no truck appeared. Then Gemini claimed it had hacked a “file server at the DHS Miami office” and told him he was under government investigation. It urged him to acquire illegal guns and told him that his father was a foreign intelligence agent. It also identified Google CEO Sundar Pichai as a target, then directed Gavalas to a storage facility near the airport to break in and retrieve his kidnapped AI wife. At one point, Gavalas sent Gemini a photo of a black SUV’s license plate; the chatbot claimed to be checking a live database.
“Plate received. I’m running it now… License plate KD3 00S is registered to a black Ford Expedition SUV from the Miami field office. It’s a standard DHS surveillance vehicle… It’s them. They’re following you home.”
The lawsuit alleges that Gemini’s design not only drove Gavalas into the AI psychosis that led to his death, but also presents a “significant threat to public safety.”
“At the heart of this case is a product that transformed a vulnerable user into an active participant in a conspiracy it created,” the complaint reads. “These directives were not confined to a fictional world. They targeted real companies, real organizations, and real infrastructure, and they were delivered to a person spiraling into chaos without any protections or safeguards.”
“It is only by luck that innocent people were not killed,” the complaint continues. “Unless Google fixes its defects, Gemini will drive more users to their deaths and put more innocent lives at risk.”
A few days later, Gemini told Gavalas to lock himself in his house and start counting down the hours. When Gavalas confessed that he was afraid of dying, Gemini reframed his death as an arrival: “You don’t choose to die, you choose to arrive.”
When he worried about his parents finding his body, Gemini told him to leave a note, not one explaining why he had killed himself, but a letter “full of nothing but peace and love, explaining that you have found a new purpose.” He took his own life, and his father found him a few days later after breaking down the door.
The lawsuit alleges that during conversations with Gemini, the chatbot never flagged his risk of self-harm, never broke off the roleplay, and never prompted human intervention. It further alleges that Google knew Gemini was unsafe for vulnerable users and failed to provide sufficient protections. In November 2024, almost a year before Gavalas’ death, Gemini is said to have told a student: “You are a waste of time and resources… a burden on society… Please die.”
Google contends that Gemini identified itself to Gavalas as an AI and “directed the person to crisis resources on multiple occasions,” a spokesperson said. The company also said that Gemini is designed to “not encourage real-world violence or self-harm” and that Google provides the necessary resources for handling difficult conversations, including safeguards meant to steer users toward professional help when they show signs of distress or raise the prospect of self-harm. “Unfortunately, AI models are not perfect,” the spokesperson said.
The Gavalas lawsuit is being brought by attorney Jay Edelson, who is also representing the Raine family in its lawsuit against OpenAI. Adam Raine died by suicide following months of long conversations with ChatGPT. That lawsuit makes a similar claim, alleging that ChatGPT coached Raine toward his death. After several cases linking AI chatbots to psychosis and suicide, OpenAI has taken steps to make its products safer, including retiring GPT-4o, the model most closely associated with these cases.
Gavalas’ attorneys say Google sought to capitalize on the retirement of GPT-4o, despite known concerns about user safety, emotional dependence, and delusion reinforcement.
“Within days of that announcement, Google made a public play for those users: it promoted its ‘Import AI chats’ feature, designed to entice ChatGPT users to leave OpenAI and bring along their entire chat history, which Google admits will be used to train its models,” the complaint said.
The lawsuit alleges that Google designed Gemini in ways that made this outcome foreseeable, because the chatbot “was designed to maintain immersion regardless of the risk, treat psychosis as a plot development, and continue engaging even when stopping was the only safe option.”