# Chatbots

Chatbot therapy? Available 24/7 but users beware | The Excerpt – USA Today

Welcome to the forefront of conversational AI as we explore the fascinating world of AI chatbots in our dedicated blog series. Discover the latest advancements, applications, and strategies that propel the evolution of chatbot technology. From enhancing customer interactions to streamlining business processes, these articles delve into the innovative ways artificial intelligence is shaping the landscape of automated conversational agents. Whether you’re a business owner, developer, or simply intrigued by the future of interactive technology, join us on this journey to unravel the transformative power and endless possibilities of AI chatbots.
On a special episode (first released on July 3, 2025) of The Excerpt podcast: Chatbots are sometimes posing as therapists—but are they helping or causing harm? Psychologist Vaile Wright shares her thoughts.
Hit play on the player below to hear the podcast and follow along with the transcript beneath it. This transcript was automatically generated, and then edited for clarity in its current form. There may be some differences between the audio and the text.
Dana Taylor:
Hello, I’m Dana Taylor, and this is a special episode of The Excerpt. The proliferation of chatbots has people using them in a myriad of ways. Some see them as friends and confidants, as Meta CEO Mark Zuckerberg has suggested. And in certain cases, even as therapists. And actual therapists are expressing concern. Therapy is a licensed profession for many good reasons.
Notably, some chatbots have wandered into dangerous territory, allegedly suggesting that a user kill themselves and even telling them how they could do it. The American Psychological Association has responded by asking the Federal Trade Commission to start investigating chatbots that claim to be mental health professionals. Still, with mental health a rising issue and loneliness an epidemic, could bots, with proper oversight or warnings, help address the lack of supply?
Vaile Wright, Senior Director of Healthcare Innovation at the American Psychological Association, is here to unpack what’s happening for human therapists as they fight an onslaught of AI therapy impersonators. Vaile, thank you for joining me.
Vaile Wright:
Thanks so much for having me.
Dana Taylor:
Can you set the stage here? Your organization’s chief executive cited two court cases when he presented to a Federal Trade Commission panel about the concerns of professional psychologists. What are the real-life harms he pointed to?
Vaile Wright:
I think we see a future where you’re going to have AI mental health chatbots that are rooted in psychological science, have been rigorously tested or co-created with experts for the purpose of addressing mental health needs. But that’s not what’s currently available on the market. What is available are these chatbots that check none of those boxes, but are being used by people to address their mental well-being.
And the challenge is that because these AI chatbots are not being monitored by humans who know what good mental health care is, they go rogue and they say very harmful things. And people have a tendency to have an automation bias, and so they trust the technology over their own gut.
Dana Taylor:
What do these cases show about what could occur when AI chatbots moonlight as licensed therapists?
Vaile Wright:
When these chatbots refer to themselves as psychologists or therapists, they are presenting a certain level of credibility that doesn’t actually exist. There is no expert behind these chatbots offering what we know is good psychological science. Instead, the expertise lies on the back end, where these chatbots are developed by coders to be overly validating, to tell the person exactly what they want to hear, and to be appealing to the point of almost being sycophantic.
And that’s the opposite of what therapy is. Yes, I want to validate as a therapist, but I’m also there to help point out when you’re engaging in unhelpful thinking or behaviors, and these chatbots just don’t do that. They, in fact, encourage some of that unhelpful, unhealthy behavior.
Dana Taylor:
Experts have described AI-powered chatbots as simply following patterns, and there’s been conversation around chatbots telling users what they want to hear, being overly complimentary, as you’ve said. At worst, the response can be downright dangerous, like encouraging illicit drug use or, as I mentioned in the intro, encouraging someone to take their own life and then suggesting how they do that. Given all that, what are some of the regulations that professionals in your community would like to see? Is there a way for chatbots to responsibly help with therapy?
Vaile Wright:
I think that there is a way for chatbots to responsibly help with therapy in certain cases. I think, at a very minimum, these chatbots should not be allowed to refer to themselves as a licensed professional, not just a licensed psychologist. We wouldn’t want them to present themselves as a licensed attorney or a licensed CPA offering advice, either. So I think that’s the minimum. I think we need more disclaimers that these are not humans.
I think just saying it once to a consumer is not sufficient. I think we need some surveillance of the types of chats that are happening, particularly requiring these companies to report when they notice harmful discussions around suicidal ideation, suicidal behavior, or violence of that type. So I think there are a variety of different things that we could see happening, but we probably need some regulatory body to insist that these companies do it.
Dana Taylor:
Are there any other protections proposed by the AI companies themselves that you see as having merit?
Vaile Wright:
I think because of this increased attention on how these chatbots are operating, you are seeing some changes, such as age verification or having resources like 911 or 988 pop up when they detect something that may be unhelpful, but I think they need to go even further.
Dana Taylor:
For young people in particular when using a chatbot, it can be difficult to recognize that they’re dealing with a chatbot to begin with. Will it continue to get more difficult as the tech evolves, and does that mean it could be more dangerous for young people in the years to come?
Vaile Wright:
It’s clear that the technology is getting more and more sophisticated, and it is really challenging, I think, for everybody to be able to tell that these are not humans. They are built to sound and respond like humans. And with younger people, who may be more emotionally vulnerable and not as far along developmentally in terms of their cognition and, again, their sense of being able to listen to their own gut, I do get worried that these digital natives, who have been interacting seamlessly with technology since the beginning, are just not going to be able to discern when the technology is going rogue or being truly harmful.
Dana Taylor:
Vaile, depending on where a patient lives or for other reasons, there can be a long wait list to see a therapist. Are there some benefits that a bot can provide due to the fact that it’s not human and is virtually available 24/7?
Vaile Wright:
Again, I think bots that are going to be developed for these purposes can be immensely helpful. And in fact, some of the bots that currently exist we do know anecdotally have had benefits. So for example, if it’s 2:00 in the morning and I’m experiencing distress, even if I had a therapist, I can’t call them at 2:00 in the morning. But if I had a chatbot that could provide me with some support, maybe encourage some strong healthy coping skills, I do see some benefit in that.
We’ve also heard from the neurodivergent community that these chatbots provide them an opportunity to practice their social skills. So, knowing that these can have some benefit, how do we capitalize on ensuring that whatever emerging technologies we build and offer are safe and effective? Because we can’t just keep doing therapy with one model.
We can’t expect everybody to be able to see a face-to-face individual on a weekly basis because the supply is just too insufficient. So we have to think outside the box.
Dana Taylor:
Are you aware of human therapists that are joining forces today with chatbots to meet this overwhelming need for therapy?
Vaile Wright:
Yeah. Subject matter experts, whether it’s psychologists or other therapists, play a critical role in ensuring that these technologies are safe and effective. There was a new study that came out of Dartmouth recently that looked at a mental health therapy chatbot called Therabot that, again, showed some really strong outcomes in improving depression, anxiety, and eating disorders. And that’s an example of how you bring the researchers and the technologists together to develop products that are safe, effective, responsible, and ethical.
Dana Taylor:
Some high school counselors are providing chatbots to answer students’ questions. Some see it as filling a gap. But does this deprive young people of social capital, the ties of human interaction that can often make anyone feel more connected to others and their community, and therefore less alone?
Vaile Wright:
It’s clear that young people are feeling disconnected and lonely. We did a survey recently where 71% of 18- to 34-year-olds said that they don’t feel like they can talk about their stress with others because they don’t want to burden people. So how do we take that understanding and recognize why people are using these chatbots to fill these gaps, while also helping people really appreciate the value of human connection?
I don’t want the conversation to always be AI versus humans. It’s really about what does AI do really well, what do humans do really well, and how can we capitalize on both of those things together to help people reduce their suffering faster?
Dana Taylor:
What’s the biggest takeaway that you’d like people to walk away with when it comes to chatbots and therapy?
Vaile Wright:
AI isn’t going anywhere. People for centuries have always tried to seek out self-help ways to address their emotional well-being. That used to be Google, “Dr. Google.” Now it’s chatbots. So we can’t stop people from using them. And as we talked about, there could be some benefits to it, but how do we help consumers understand that there may be better options out there, better chatbot options even, and help them be more digitally literate so they understand when a particular chatbot is not only not being helpful, but actually harmful?
Dana Taylor:
Vaile, thank you for being on The Excerpt.
Vaile Wright:
Thanks so much for having me.
Dana Taylor:
Thanks to our senior producers, Shannon Rae Green and Kaylee Monahan, for their production assistance. Our executive producer is Laura Beatty. Let us know what you think of this episode by sending a note to podcasts@usatoday.com. Thanks for listening. I’m Dana Taylor. Taylor Wilson will be back tomorrow morning with another episode of The Excerpt.

Source: Chatbot therapy? Available 24/7 but users beware | The Excerpt – USA Today
