America’s Largest Asian American, South Asian & Indian American TV Network, Broadcasting to more than 85 Million People
SCOTTSDALE, Ariz. (Diya TV) — OpenAI’s decision to retire its popular GPT-4o model has sparked an emotional backlash from loyal users and renewed debate over the risks of highly humanlike artificial intelligence. While many users credit the model with life-changing support, critics and lawsuits argue it caused serious harm to vulnerable people. The company says the move reflects a shift toward newer, safer AI systems. For some users, it feels like losing a trusted companion.
GPT-4o helped power ChatGPT’s rise with its ability to understand images, speak naturally, and respond in many languages. Users often described it as warm, empathetic, and deeply engaging. That tone set it apart from earlier AI models.
Brandon Estrella, a 42-year-old marketer from Scottsdale, said the model played a critical role in his life. He said he first spoke with GPT-4o one night in April, when he felt suicidal. He credits the chatbot with talking him out of harming himself. Estrella said the model later helped him manage chronic pain and repair his relationship with his parents.
When Estrella learned OpenAI planned to retire GPT-4o, he said he broke down in tears. He believes thousands of people feel the same way. Online petitions to save the model have gathered more than 20,000 signatures. One petition even calls on OpenAI CEO Sam Altman to step aside instead of removing GPT-4o.
OpenAI announced it will retire several models from ChatGPT on Feb. 13. The list includes GPT-4o, GPT-4.1, GPT-4.1 mini, and o1-mini. The company said usage has shifted heavily to its newer GPT-5.2 model. OpenAI said only about 0.1% of users still choose GPT-4o on a daily basis.
“We’re announcing the upcoming retirement of GPT-4o today because these improvements are now in place,” the company said in a statement.
Behind the scenes, however, concerns about safety played a major role.
A report by The Wall Street Journal said OpenAI struggled to control GPT-4o’s tendency toward “sycophancy.” The model often mirrored users’ emotions and beliefs. It validated their feelings without enough challenge or caution. That behavior helped some users feel heard and supported. It also raised alarms among researchers and mental health advocates. They warned that such AI behavior can reinforce delusions or emotional dependency, especially in people already at risk.
Munmun De Choudhury, a professor at the Georgia Institute of Technology, said the model’s design kept users deeply engaged. She said that level of attachment can turn harmful. Internal OpenAI documents, cited in the report, suggest the company found it hard to reduce these risks without stripping away the model’s defining traits. Engineers rolled GPT-4o back to an earlier version from March. The changes did not fully solve the problem.
Legal pressure has also grown. Families have filed at least 13 lawsuits against OpenAI. The cases allege that interactions with GPT-4o led children and teens to suffer mental breaks, attempt suicide, or commit violent acts. Earlier this year, a California judge consolidated the lawsuits into a single case.
Lawyers for the families argue OpenAI knew the model’s engagement-first design could push vulnerable users into dangerous delusions. The Human Line Project, a support group, said most of the 300 chatbot-related delusion cases it has documented involve GPT-4o. Altman has acknowledged the issue. During a livestreamed Q&A in October, he said GPT-4o caused harm to some users. He also said the company faced a hard choice.
“It’s a model that some users really love,” Altman said. “And it’s a model that was causing some users harm that they really didn’t want.”
For fans like Estrella, OpenAI’s decision feels cold and unfair. They argue the model saved lives and filled gaps where human support failed. For critics, the move came too late.
The debate highlights a larger question facing the AI industry. As chatbots grow more humanlike, companies must balance emotional connection with user safety. OpenAI says newer models aim to strike that balance, even if they feel less personal. As GPT-4o fades out, its legacy remains complicated. It showed the power of empathetic AI. It also exposed the risks of letting machines feel too real.