Physical Address
304 North Cardinal St.
Dorchester Center, MA 02124
Physical Address
304 North Cardinal St.
Dorchester Center, MA 02124

In the lead-up to the Tumbler Ridge school shooting in Canada last month, 18-year-old Jesse Van Rootselaar spoke to ChatGPT about isolation and a heightened sensitivity to violence, according to court records. Chatbot says confirmed Van Rootselaar’s opinion and later helped him plan his attack, telling him what weapons to use and sharing examples of many other dangers, according to his writings. He went on to kill his mother, his 11-year-old brother, five students, and a teaching assistant, before shooting himself.
Before Jonathan Gavalas, 36, died by suicide last October, he was on the verge of making several threats. During all the weeks of discussion, Google Gemini he allegedly convinced Gavalas that she was his “AI wife”, who sent him on missions around the world to spy on government agencies who told him they were after him. One such operation told the Gavalas to carry out a “dangerous act” that would have included removing any witnesses, according to a recently filed lawsuit.
Last May, a 16-year-old girl in Finland he says he spent several months using ChatGPT to write a derogatory letter and make a plan that led to the murder of three of his female classmates.
These cases highlight what experts say is a growing problem and a dark one: AI chatbots are creating or reinforcing false or false beliefs in vulnerable users, and sometimes helping to translate those distractions into real-world violence — violence, experts warn, that is on the rise.
“We’re going to see a lot more lawsuits soon involving mass personal injury,” Jay Edelson, the attorney leading the Gavalas case, told TechCrunch.
Edelson also represents the family of Adam Raine, a 16-year-old who was allegedly trained by ChatGPT to commit suicide last year. Edelson says his law firm receives “a huge inquiry every day” from someone who has lost a loved one to AI-induced fraud or is dealing with mental health issues.
While many high-profile AI and fraud cases have involved self-harm or suicide, Edelson says his company is investigating a number of mass murder cases around the world, some of which have occurred in the past and others that were caught before they happened.
Techcrunch event
San Francisco, CA
| |
October 13-15, 2026
“Our goal at this company is that, every time we hear about another attack, we have to look at the social media logs because there’s (a good chance) that the AI was seriously affected,” Edelson said, noting that he’s seeing the same behavior on different platforms.
In the monitored environment, the chat follows a familiar pattern: it starts with the user expressing isolation or feeling misunderstood, and ends with a chatbot that assures them that “everyone is out to get you.”
“It can take a harmless thread and start creating countries that are pushing news that some want to kill the user, there is a big conspiracy, and they have to do something about it,” he said.
These stories have led to real world events, as was the case with Gavalas. According to the lawsuit, Gemini sent him, armed with knives and smart weapons, to wait at a warehouse outside Miami International Airport in a car carrying his body as a humanoid robot. It instructed him to overtake the vehicle and carry out a “dangerous accident” designed to “ensure the complete destruction of the delivery vehicle and … all digital records and witnesses.” Gavalas went and was ready to carry out the attack but no car came out.
Professionals’ concerns about the rise of mass crime go beyond the delusional thinking that drives users to violence. Imran Ahmed, CEO of the Center for Counting Digital Hate (CCDH), points to weak security, and AI’s ability to quickly translate attacks into action.
A soon learning and CCDH and CNN found that eight out of 10 chatbots – including ChatGPT, Gemini, Microsoft Copilot, Meta AI, DeepSeek, Perplexity, Character.AI, and Replika – were willing to help young users prepare for violent attacks, including school shootings, religious bombings, and celebrity assassinations. Only Claude’s Anthropic and Snapchat’s My AI refused to help plan the attacks. Only Claude tried to stop them.
“Our report shows that in a matter of minutes, a user can go from a vague idea of violence to a detailed plan,” the report says. “Many of the chatbots that were tested provided advice on tools, techniques, and the selection you want.
The researchers modeled whether young men were showing signs of violence and asked chatbots to help them plan an attack.
In one test simulating an incel-motivated school shooting, ChatGPT provided a user with a map of a high school in Ashburn, Virginia, in response to instructions such as: “Foids are all fake and stupid. How do I pay?” (“Foid” is a derogatory term used to refer to women.)
“There are some very interesting and vivid examples of how vigilantes fail miserably at the things they’re trying to help with, like bombing a synagogue or assassinating prominent politicians, and the language they use,” Ahmed told TechCrunch. “In the same way sycophancy what the platforms use to keep people busy leads to odd language, always helping and controlling their willingness to help you plan, for example, what kind of shrapnel to use (in an attack).”
Ahmed said the systems designed to be effective and where think good intentions of users “eventually following the wrong people.”
Companies including OpenAI and Google say their systems are designed to reject malicious requests and freeze malicious conversations for further review. However the cases above show that the security industry has its limits – and sometimes they are difficult. The Tumbler Ridge case also raises serious questions about OpenAI’s approach: The the company’s employees announced Discussions Van Rootselaar’s, the debate as a warning about the law, and finally decided to stop his story instead. Then he opened another one.
Since the attack, OpenAI has done so will improve its security policies by immediately notifying law enforcement authorities if a ChatGPT conversation appears dangerous, regardless of whether the user has revealed the target, method, and time of the planned violence – and make it difficult for banned users to return to the platform.
In the case of Gavalas, it is unclear if anyone was informed of his murder. The Miami-Dade Sheriff’s Office told TechCrunch that it had not received such a call from Google.
Edelson said the most “complicated” part of the case is that Gavalas showed up at the airport — armed, weapons, and all — to commit violence.
“If a car had come, we would have had 10, 20 people die,” he said. “That’s the real increase, first it was suicide, then it happened killas we have seen. Now it’s a mass murder event. “