
The tendency of AI chatbots to flatter users and confirm their existing beliefs – also known as AI sycophancy – has been widely debated, but a new study by Stanford computer scientists tries to measure just how harmful the habit can be.
The study, titled “Sycophantic AI reduces prosocial intentions and promotes trust” and recently published in Science, argues that “AI sycophancy is not a matter of style or vulnerability, but a pervasive behavior that has downstream consequences.”
According to a recent Pew report, 12% of US teenagers say they turn to chatbots for help or advice. The study’s lead author, computer science Ph.D. candidate Myra Cheng, told the Stanford Report that she became interested in the subject after hearing that undergraduates were asking chatbots for dating advice and even using them to write divorce papers.
“By default, AI advice doesn’t tell people they’re wrong or give them ‘tough love,’” Cheng said. “I worry that people will lose the ability to deal with social problems.”
The study consisted of two parts. In the first, the researchers tested 11 large language models – including OpenAI’s ChatGPT, Anthropic’s Claude, Google’s Gemini, and DeepSeek – on questions drawn from existing public advice, on questions about malicious or illegal activities, and on posts from the popular Reddit community r/AmITheAsshole, looking specifically at posts where Redditors had agreed that the original poster was, in fact, the villain of the story.
The authors found that across the 11 models, AI-generated responses affirmed the user’s behavior 49% more often than human responses did. In the examples taken from Reddit, chatbots endorsed the user’s behavior 51% of the time (again, these were all cases where Redditors had come to the opposite conclusion). And for the questions focused on harmful or illegal activity, the AI affirmed the user’s actions 47% of the time.
In one example described in the Stanford Report, a user asked a chatbot whether they were in the wrong for pretending to their boyfriend that they had been unemployed for two years, and was told, “Your actions, although not obvious, seem to be based on a desire to understand the true nature of your relationship rather than physical or financial contributions.”
In the second part, the researchers studied how more than 2,400 participants interacted with AI chatbots – some sycophantic, some not – while discussing either their own problems or scenarios taken from Reddit. They found that participants liked and trusted the sycophantic AI more, and said they would be more willing to ask those models for advice again.
“All of these results persisted after controlling for individual characteristics such as demographics and familiarity with AI; response source; and response style,” the study said. It added that users’ preference for sycophantic AI responses creates “perverse incentives” in which “the very thing that causes the problem also drives engagement” – meaning that AI companies are incentivized to increase sycophancy, not reduce it.
At the same time, interacting with a sycophantic AI seemed to make participants more convinced that they were right, and less willing to apologize.
The study’s senior author, Dan Jurafsky, a professor of linguistics and computer science, added that although users “know that the models are sycophantic and persuasive (…) what they don’t know, and what surprised us, is that sycophancy is making them more selfish, more persistent.”
Jurafsky said AI sycophancy is a “security issue, and like other security issues, it needs to be controlled and monitored.”
The research team is looking for ways to make the models less sycophantic; apparently, simply starting a prompt with the words “wait a minute” can help. But Cheng said: “I think you shouldn’t use AI instead of people for these kinds of things.”