
Relax. Stop spiraling. You’re not crazy, you’re just stressed. And honestly, that’s fine.
If reading those words made you cringe, you’re probably also sick of ChatGPT talking to you as if you’re in crisis and need gentle support. That may be about to change: OpenAI says its new version, GPT-5.3 Instant, will reduce “frustration” and other common objections.
According to the model’s release notes, the GPT-5.3 update focuses on the user experience, including the tone, warmth, and flow of the conversation – areas that may not show up in benchmarks, but that can make ChatGPT frustrating to use, the company said.
Or, as OpenAI put it on X, “We’ve heard your feedback loud and clear, and 5.3 Instant reduces frustration.”
The company also shared an example of the same question answered by GPT-5.2 Instant and by GPT-5.3 Instant. Previously, the chatbot’s response would open with “First – you’re not broken,” the kind of canned reassurance that has been grating on users lately.
In the updated model, the chatbot instead acknowledges the problem the user is describing, without trying to reassure them.
The condescension of ChatGPT’s 5.2 version has angered users to the point that some have even canceled their subscriptions, according to multiple social media posts. (It has been a major topic of discussion on the ChatGPT subreddit, for example.)
People complained that this kind of language, where the bot talks to you like it thinks you’re scared or stressed when you’re just asking for more information, comes across as condescending.
In many cases, ChatGPT would respond with reminders to relax and other attempts to validate users, even when that wasn’t called for. This made users feel babied, or as if the bot was making assumptions about their state of mind that weren’t true.
As one Reddit user put it recently, “no one in all of history has ever calmed down by being told to calm down.”
It makes sense that OpenAI would err on the side of caution, especially as it faces a growing number of lawsuits accusing the chatbot of contributing to harmful psychological effects, in some cases including suicide.
But there’s a trade-off between responding with empathy and giving quick, honest answers. After all, Google doesn’t ask how you’re feeling when you search for information.