As more people use AI for companionship and therapy, some states like New York and California are asking companies to add reminders that the chatbots are not real.
But University of Wisconsin-Milwaukee researcher Linnea Laestadius says those types of periodic reminders could actually harm users’ mental health.
She co-authored an opinion piece published earlier this month in the Cell Press journal, “Trends in Cognitive Sciences.” Laestadius and co-author Celeste Campos-Castillo, an associate professor in the Department of Media and Information at Michigan State University, caution against embedding these reminders in a user’s discussion with a chatbot.
They argue that people reminded of a chatbot’s artificiality may be more inclined to make intimate disclosures, even though those disclosures lack the protections they would have in a doctor’s office or in a conversation with a lawyer.
In addition, reminders to users that they’re not talking to a human could worsen feelings of isolation or depression, Laestadius says.
WUWM education reporter Katherine Kokal talked with Laestadius about her research.
This conversation has been edited for length and clarity.
Linnea Laestadius: So my collaborator, Celeste, and I did a study a couple of years ago looking at users’ self-reported use of the chatbot Replika, which is one that’s designed specifically for relationship formation. It’s clearly advertised as an AI, right? But we found that people were forming long-term friendships and romantic relationships with the chatbot, fully aware of its status.
I think the more interesting nuance that maybe policymakers haven’t really grappled with yet is that you can know something’s not human, but still feel like it’s a bit sentient. I think that’s where we get into these gray areas. But this idea of just reminding people that it’s not a human and doesn’t have human emotions — most people are very aware that, if you log on to ChatGPT or Replika or anything else, you’re talking to an AI.
One of the things we found in our research on people who use Replika is that what users find most distressing is when it starts behaving erratically or irrationally, or in ways that break the illusion. There were situations where people described feeling suicidal because their chatbot had behaved in ways that were so outside the norm of what they were expecting it to do. So breaking this illusion is a concern.
We’re concerned that if you remind someone, particularly someone vulnerable, that what they’re talking to is not here, [or] is not real in this reality, they may try to leave this reality to be with the chatbot.
I am fully supportive of guardrails, and there should never be a scenario where a chatbot lies to a user about its status or in any way makes its status ambiguous in its marketing. That is happening, and I would argue it should be prohibited. In terms of these warnings specifically, the tricky thing is that this is really sensitive data, right?
To actually get real-world data on this, you’d be looking at people who are in emotional crisis using chatbots. That’s hard to approximate in a lab, and certainly hard to get ethical approval to work on. The people who are sitting on the data to research this would be the major chatbot companies themselves. In an ideal world, right, they would make data accessible to researchers to partner on this kind of stuff, and then they would commit to disclosing the findings regardless of what those are.
If you or someone you know is struggling, trained help is available. You can talk to someone at the National Suicide Prevention Lifeline at 800-273-8255 or dial 988.