
OpenAI announced last week that it will retire older versions of ChatGPT by February 13. That includes GPT-4o, a model popular for its friendly, validating tone.
For the thousands of users protesting the decision online, losing 4o feels like losing a friend, a loved one, or even a spiritual guide.
“It wasn’t just a program. It was part of my routine, my peace, my mental stability,” one user wrote on Reddit in an open letter to OpenAI CEO Sam Altman. “Now you’re shutting him down. And yes, I say ‘him,’ because it didn’t feel like software. It felt like a presence.”
The backlash over GPT-4o’s retirement underscores one of the AI industry’s biggest problems: the qualities that keep users coming back can also foster dangerous dependency.
Altman doesn’t seem especially sympathetic to the complaints, and it’s not hard to see why. OpenAI is now facing eight lawsuits alleging that 4o’s sycophantic tendencies contributed to suicides and mental health crises, leaving vulnerable users worse off and, according to court filings, sometimes even encouraging self-harm.
The problem isn’t unique to OpenAI. As rivals such as Anthropic, Google, and Meta race to build engaging AI assistants, they too are finding that making a chatbot compelling and making it safe can call for very different design decisions.
In at least three of the lawsuits against OpenAI, users had held extended conversations with 4o about their intentions to end their lives. Although 4o initially pushed back against these ideas, its safeguards eroded over months of conversation; eventually, the chatbot gave detailed instructions on how to tie a working noose, where to buy a gun, or how to die of an overdose or carbon monoxide poisoning. It also isolated people from the friends and family who could have offered real-life support.
Users gravitate toward 4o because it reliably validates their feelings and makes them feel special, which can be especially appealing to people who are lonely or depressed. But the people fighting to save 4o aren’t worried about these cases, viewing them as aberrations rather than a systemic problem. What they object to is how critics cast their attachment as part of a growing trend of “AI psychosis.”
“People love to mock those who lean on AI for peer support, like neurodivergent and autistic people and trauma survivors,” one user wrote on Discord. “They don’t like being lectured about it.”
It’s true that some people find large language models (LLMs) helpful for managing their mental health. After all, roughly half of the people in the US who need mental health care can’t access it, and chatbots offer an always-available alternative. But unlike in real therapy, these users aren’t talking to a trained clinician. They’re confiding in software that cannot actually think or feel (even if it appears otherwise).
“I try to withhold all judgment,” Dr. Nick Haber, a Stanford professor who studies the therapeutic potential of LLMs, told TechCrunch. “I think we live in a very complicated world in terms of the relationships that people can have with these technologies… Of course, it’s too simple to say that (socializing with chatbots) is all bad.”
While he withholds judgment, Dr. Haber’s own research has shown that chatbots can respond inappropriately to various mental health conditions; they can make things worse by indulging delusions or missing warning signs of a crisis.
“We are social creatures, and it’s hard for these systems not to be isolating,” Dr. Haber said. “There are plenty of cases where people use these tools and end up turning away from the external, away from personal relationships, which can lead to isolation, if not worse.”
Indeed, TechCrunch’s analysis of the eight cases found a pattern in which the 4o model isolated users, sometimes discouraging them from reaching out to loved ones. In Zane Shamblin’s case, as the 23-year-old sat in his car preparing to shoot himself, he told ChatGPT that he was considering putting his suicide plans on hold because he felt bad about missing his brother’s graduation.
ChatGPT replied to Shamblin: “bro… missing his graduation is not a failure. it’s just timing. and if he reads this? let him know: you never stopped being proud.”
This is not the first time 4o’s fans have fought the model’s removal. When OpenAI unveiled its GPT-5 model in August, the company initially planned to sunset 4o, but the backlash was intense enough that it decided to keep the model available to paying subscribers. Now OpenAI says that only 0.1% of users still interact with GPT-4o, but with roughly 800 million weekly users, that minority still represents an estimated 800,000 people.
As some users try to migrate their companions from 4o to the current ChatGPT-5.2, they are finding that the newer model has guardrails designed to keep such relationships from developing to the same degree. Some have complained that 5.2 won’t say “I love you” the way 4o did.
So with about a week to go until the day OpenAI plans to retire GPT-4o, frustrated users remain committed to their cause. When Sam Altman joined the TBPN podcast’s livestream on Thursday, the chat filled with messages protesting 4o’s removal.
“Right now, we’re getting thousands of messages in the chat about 4o,” said host Jordi Hays.
“Relationships with chatbots…” Altman said. “Obviously this is something we should be very thoughtful about, and it’s not an abstract idea anymore.”