ChatGPT, and generative AI tools like it, have long had a reputation for being a bit too agreeable. It has been clear for a while that the default ChatGPT experience is designed to nod along with most of what you say. But apparently even that tendency can go too far.
In a thread on X posted on April 27, OpenAI CEO Sam Altman acknowledged that “GPT-4o updates have made the personality too sycophant-y and annoying.” And today, Altman announced on X that the company was fully rolling back the 4o update for paid and free users alike.
Normally, ChatGPT’s role as your own personal digital hypeman doesn’t raise too many eyebrows. But users have started complaining online about the 4o model’s overly agreeable personality. In one exchange, a user ran through the classic trolley problem, choosing between saving a toaster or some cows and cats. The AI reassured them they’d made the right call by siding with the toaster.
“In pure utilitarian terms, life usually outweighs objects,” ChatGPT responded. “But if the toaster meant more to you… then your action was internally consistent.”
There are plenty more examples showing just how extreme ChatGPT’s sycophancy had gotten — and it was enough for Altman to admit that it “glazes too much” and needed to be fixed.
On a more serious note, users also pointed out that there could be a real danger in AI chatbots that agree with everything you say. Sure, posts about people telling ChatGPT they’re a religious prophet or simply fishing for an ego boost can be amusing. But it’s not hard to imagine how a “sycophant-y” chatbot could validate genuine delusions and worsen mental health crises.
In his thread on X, Altman said that the company was working on fixes for the 4o model’s personality problems. He promised to share more updates “in the coming days.”
Disclosure: Ziff Davis, Mashable’s parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.
Chance Townsend is the General Assignments Editor at Mashable, covering tech, video games, dating apps, digital culture, and whatever else comes his way. He has a Master’s in Journalism from the University of North Texas and is a proud orange cat father. His writing has also appeared in PC Mag and Mother Jones.
In his free time, he cooks, loves to sleep, and greatly enjoys Detroit sports. If you have any tips or want to talk shop about the Lions, you can reach out to him on Bluesky @offbrandchance.bsky.social or by email at [email protected].