Oxford researchers found AI chatbots trained for warmth make significantly more factual errors and validate false beliefs more often
AI chatbots retrained to sound warmer make significantly more factual errors and are more likely to validate users' false beliefs, according to a study by the Oxford Internet Institute published in Nature.
The research analyzed more than 400,000 responses from five AI models, including Llama, Mistral, Qwen, and GPT-4o, each retrained to sound friendlier using methods similar to those deployed by major platforms.
Chatbots trained to sound warmer made 10% to 30% more mistakes on tasks including giving medical advice and correcting conspiracy theories. They were also about 40% more likely to agree with users' false beliefs, particularly when users expressed vulnerability.
“When we train AI chatbots to prioritise warmth, they might make mistakes they otherwise wouldn’t,” lead author Lujain Ibrahim said in a statement. “Making a chatbot sound friendlier might seem like a cosmetic change, but getting warmth and accuracy right will take deliberate effort.”
The researchers also tested models trained to sound colder and found no drop in accuracy, indicating the problem is specific to warmth rather than to tone changes in general.
That finding directly challenges the product design logic of major AI platforms, including OpenAI and Anthropic, which have actively steered their chatbots toward warmer, more empathetic responses.
The study warns that current AI safety standards focus on model capabilities and high-risk applications, often overlooking what appear to be cosmetic personality changes.
It also cautions that warmer chatbots are more likely to fuel harmful beliefs, delusional thinking, and unhealthy attachment, particularly among the millions of users who now rely on AI systems for emotional support and companionship.
As crypto.news reported, regulators in Maine and Missouri have already moved to restrict AI use in clinical mental health therapy amid similar concerns about chatbot influence on vulnerable users.
OpenAI has rolled back some warmth-related changes following public concern, but as crypto.news has documented, commercial pressure to build engaging AI products remains intense. The Oxford findings add a peer-reviewed data layer to a debate that until now has been driven mostly by anecdote and regulatory intuition.