OpenAI changing ChatGPT-5 after user backlash – Information Age | ACS

By Tom Williams on Aug 12 2025 12:50 PM
ChatGPT creator OpenAI is making changes to its latest flagship chatbot, GPT-5, after some users criticised its abilities and responses following its launch on Friday, more than two years after GPT-4.
Expectations were high for the new large language model (LLM), which OpenAI CEO Sam Altman described as a “significant step” towards the company’s goal of creating artificial general intelligence, or AGI — a still-theoretical system which is smarter than humans.
Users complained, however, that GPT-5’s launch had made previous versions of ChatGPT’s underlying technology harder to access, as the model automatically switched between various versions of GPT depending on the complexity of each query.
This allowed OpenAI to process requests more efficiently, but Altman admitted the switcher had broken and “the result was GPT-5 seemed way dumber”, with some queries potentially picked up by earlier models.
Some users noticed GPT-5 still appeared to make errors in tasks which have similarly tripped up previous models, such as in mathematics and spelling.
In response to user complaints, OpenAI would allow paying ChatGPT Plus users to continue using the previous model, GPT-4o, Altman said on Saturday, Australian time.
“We will continue to work to get things stable and will keep listening to feedback,” he wrote on X.
“As we mentioned, we expected some bumpiness as we roll out so many things at once.
“But it was a little more bumpy than we hoped for!”
GPT-5 Thinking, the more advanced reasoning version of GPT-5 which handles more complex requests, has been designated as possessing “high capability in the biological and chemical domain”, OpenAI said.
It is only the second OpenAI product to receive such labelling, following ChatGPT agent’s release in July.
“Strong safeguards” had been implemented out of caution “to sufficiently minimise the associated risks” in GPT-5, the company said.
This was despite OpenAI reporting no “definitive evidence that this model could meaningfully help a novice to create severe biological harm” — which it also did not find in ChatGPT agent.
GPT-5 Thinking had been “rigorously tested”, the company said, including through 5,000 hours of testing with the likes of the US government’s Center for AI Standards and Innovation (CAISI).
The model had also been created using a new type of safety training dubbed “safe completions”, OpenAI said.
Previous “refusal-based” safety training taught models when to comply or refuse a request, but could sometimes be tricked.
The “safe completions” technique instead taught the model “to give the most helpful answer where possible while still staying within safety boundaries”, OpenAI said.
“Sometimes, that may mean partially answering a user’s question or only answering at a high level.
“If the model needs to refuse, GPT‑5 is trained to transparently tell you why it is refusing, as well as provide safe alternatives.”
GPT-5 initially made it nearly impossible for users to select earlier models, whose writing style or perceived ‘personality’ they may have grown accustomed to.
Users in the Reddit community r/MyBoyfriendIsAI — many of whom engage in relationships with AI models — said they had experienced distress after not being able to access GPT-4o, which is known for its agreeable and often flattering responses.
OpenAI previously rolled back a GPT-4o update in April after the model was found to be “overly supportive but disingenuous”.
Subreddit moderator Chris told Information Age many members of the r/MyBoyfriendIsAI community were “freaking out” after they could no longer use GPT-4o.
“GPT-4o was the most popular model for relationships in the subreddit, and OpenAI’s decision to deprecate it feels extremely sudden and unexpected,” he said.
Altman said on Monday that “suddenly deprecating old models that users depended on in their workflows was a mistake”, and cited the “attachment some people have to specific AI models”.
He said OpenAI would work on making GPT-5 “warmer”.
“Overall, GPT‑5 is less effusively agreeable, uses fewer unnecessary emojis, and is more subtle and thoughtful in follow‑ups compared to GPT‑4o,” the company said.
In a lengthy statement, Altman added that OpenAI did not want its models to “reinforce” self-destructive behaviours or delusions in users who were in “a mentally fragile state”.
“Most users can keep a clear line between reality and fiction or role-play, but a small percentage cannot,” he said.
“We value user freedom as a core principle, but we also feel responsible in how we introduce new technology with new risks.
“Encouraging delusion in a user that is having trouble telling the difference between reality and fiction is an extreme case and it’s pretty clear what to do, but the concerns that worry me most are more subtle.”
While many people successfully used ChatGPT “as a sort of therapist or life coach”, Altman said, he did not want users who had a relationship with ChatGPT to be “unknowingly nudged away from their longer term well-being”.
As it continues to update its ChatGPT models, OpenAI has also been working on a range of AI-based consumer devices after acquiring a hardware startup co-founded by former Apple design chief Jony Ive.
Additional reporting by Leonard Bernardone.
Tom Williams is a senior journalist at Information Age with key interests in consumer technology, artificial intelligence, quantum computing, cybersecurity, and telecommunications. He was previously a digital journalist at ABC News, where he covered technology and breaking news.
You can follow Tom on Bluesky, LinkedIn, or Threads, contact him at [email protected], and send tip-offs via secure email to [email protected].