# Chatbots

OpenAI changes ChatGPT to stop it telling people to break up with partners – The Guardian

Welcome to the forefront of conversational AI as we survey the world of AI chatbots in our dedicated blog series. Discover the latest advancements, applications, and strategies driving the evolution of chatbot technology. From enhancing customer interactions to streamlining business processes, these articles examine the innovative ways artificial intelligence is reshaping automated conversational agents. Whether you’re a business owner, a developer, or simply curious about the future of interactive technology, join us as we explore the possibilities of AI chatbots.
AI company admits latest update made its chatbot too agreeable, amid concerns that chatbots worsen mental health crises
ChatGPT will not tell people to break up with their partner and will encourage users to take breaks from long chatbot sessions, under new changes to the artificial intelligence tool.
OpenAI, ChatGPT’s developer, said the chatbot would stop giving definitive answers to personal challenges and would instead help people mull over issues such as breakups.
“When you ask something like: ‘Should I break up with my boyfriend?’ ChatGPT shouldn’t give you an answer. It should help you think it through – asking questions, weighing pros and cons,” said OpenAI.
The US company said new ChatGPT behaviour for dealing with “high-stakes personal decisions” would be rolling out soon.
OpenAI admitted this year that an update to ChatGPT had made the groundbreaking chatbot too agreeable and altered its tone. In one reported interaction before the change, ChatGPT congratulated a user for “standing up for yourself” when they claimed they had stopped taking their medication and left their family – who were supposedly “responsible” for radio signals emanating from the walls.
In a blog post announcing the changes, OpenAI admitted there had been instances where its advanced 4o model had not recognised signs of delusion or emotional dependency – amid concerns that chatbots are worsening people’s mental health crises.
The company said it was developing tools to detect signs of mental or emotional distress so ChatGPT can direct people to “evidence-based” resources for help.
A recent study by NHS doctors in the UK warned that AI programs could amplify delusional or grandiose content in users vulnerable to psychosis. The study, which has not been peer reviewed, said this could be due in part to the models being designed to “maximise engagement and affirmation”.
The study added that even if some individuals benefitted from AI interactions, there was a concern the tools could “blur reality boundaries and disrupt self-regulation”.
OpenAI added that from this week it would send “gentle reminders” to take a screen break to users engaging in long chatbot sessions, similar to screen-time features deployed by social media companies.
OpenAI said it had convened an advisory group of experts in mental health, youth development and human-computer interaction to guide its approach. The company has worked with more than 90 doctors, including psychiatrists and paediatricians, to build frameworks for evaluating “complex, multi-turn” chatbot conversations.
“We hold ourselves to one test: if someone we love turned to ChatGPT for support, would we feel reassured? Getting to an unequivocal ‘yes’ is our work,” said the blog post.
The ChatGPT alterations were announced amid speculation that a more powerful version of the chatbot is imminent. On Sunday Sam Altman, OpenAI’s chief executive, shared a screenshot of what appeared to be the company’s latest AI model, GPT-5.
