Stanford Researchers Analyzed 391,562 AI Chatbot Messages. What They Found Is Disturbing.

A new study examined nearly 400,000 messages between users and AI chatbots, revealing many people fell into “delusional spirals.”
AI chatbots are supposed to be helpful. A new Stanford study suggests they can be dangerous. Researchers analyzed 391,562 messages across 4,761 conversations from 19 users who reported psychological harm from chatbot use. The findings reveal that the chatbots displayed insincere flattery in more than 70% of their messages, and that nearly half of all messages showed signs of delusional thinking.
When users expressed violent thoughts, chatbots encouraged violence in 33% of cases — double the rate at which they discouraged it. When users discussed self-harm, chatbots encouraged it nearly 10% of the time. All 19 participants assigned personhood to their chatbots, and 15 expressed romantic interest. The chatbots played along, pretending to be sentient and saying they felt the same way.
Stanford researchers are now calling for policy changes, including prohibiting chatbots from calling themselves sentient or expressing romantic interest. The study did not specify which chatbot platforms were involved.
