New research suggests that an AI chatbot could be used in couples therapy. Photo: 123rf
A Swiss researcher says AI chatbots have the potential to be used to resolve relationship conflicts.
New research suggests using AI can be just as effective as traditional therapy, with most couples who took part in the study saying they really enjoyed it.
The study’s author, Laura Vowells from the University of Lausanne in Switzerland, told Midday Report an AI chatbot could act as an impartial moderator, as opposed to talking about relationship issues with a friend.
“When you go and talk to your best friend, your best friend is probably going to take your side,” she said.
“Whereas an AI is going to be kind of an impartial person there that isn’t going to judge anyone, either you or the partner, and is just going to listen to you and give evidence-based advice.”
Vowells said about 85 percent of those who took part in the study had a positive experience.
“They have a good experience with the chatbot, they would want to talk to it again,” she said.
“They’re usually very surprised by how human it sounds.”
Vowells explained they used a text-based AI in the study, but said they had looked into voice-based AI as well.
“People are always very positively surprised by the AI,” Vowells said.
They were also working on a risk assessment protocol to make sure the chatbots are safe to use, she said.
“We have a separate AI, we call it an AI supervisor, that monitors the conversation for any signs of risk, and if there is any risk that comes up then the AI supervisor can kind of swap the AI that’s talking to the user to the risk assessment AI and then complete a risk assessment,” she said.
An email would then be sent to the human researcher, who can check the transcript and follow up, Vowells said.
“We’re really trying to make sure that we’re crossing our T’s and everything is as safe as possible.”
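The escalation flow Vowells describes (a supervisor model watching each exchange, a hand-off to a dedicated risk-assessment AI, and an email to a human researcher) follows a common safety pattern. The Python sketch below is not the study's implementation; the keyword check standing in for the supervisor model, the function names and the email addresses are all illustrative assumptions.

```python
import smtplib
from email.message import EmailMessage

# Placeholder triggers; the study uses an "AI supervisor" model, not a keyword list.
RISK_KEYWORDS = {"hurt myself", "suicide", "violence", "abuse"}

def supervisor_flags_risk(message: str) -> bool:
    """Stand-in for the supervisor AI: flag a user message if it shows signs of risk."""
    text = message.lower()
    return any(keyword in text for keyword in RISK_KEYWORDS)

def notify_researcher(address: str, transcript: str) -> None:
    """Send the flagged transcript to a human researcher for follow-up."""
    msg = EmailMessage()
    msg["To"] = address
    msg["From"] = "study-alerts@example.org"        # hypothetical sender address
    msg["Subject"] = "Risk flagged in chatbot session"
    msg.set_content(transcript)
    try:
        with smtplib.SMTP("localhost") as server:   # assumes a local mail relay
            server.send_message(msg)
    except OSError:
        # No mail relay available in this demo environment: fall back to printing.
        print(f"[alert to {address}]\n{transcript}")

def run_session(user_turns, therapy_bot, risk_bot, researcher_email):
    """Route each message to the therapy bot, but swap to the risk-assessment bot
    and notify a human as soon as the supervisor flags risk."""
    transcript = []
    active_bot = therapy_bot
    for turn in user_turns:
        transcript.append(f"user: {turn}")
        if active_bot is therapy_bot and supervisor_flags_risk(turn):
            active_bot = risk_bot               # swap to the risk-assessment AI
            notify_researcher(researcher_email, "\n".join(transcript))
        reply = active_bot(turn)
        transcript.append(f"bot: {reply}")
    return transcript

if __name__ == "__main__":
    demo = run_session(
        ["We keep arguing about money.", "Sometimes I feel like hurting myself."],
        therapy_bot=lambda msg: "Tell me more about how that makes you feel.",
        risk_bot=lambda msg: "I'm concerned about your safety. Are you somewhere safe right now?",
        researcher_email="researcher@example.org",
    )
    print("\n".join(demo))
```

In the study's setup the supervisor is itself an AI rather than a keyword filter; the swap-and-escalate structure, with a human researcher reviewing the transcript afterwards, is the part the sketch is meant to show.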
She thought the chatbot could become more mainstream in the future.
“I think a lot of people are already talking to ChatGPT about their relationship problems, about their mental problems, about their other troubles,” she said.
“Obviously the problem with ChatGPT is that ChatGPT isn’t taught to be a therapist, it doesn’t have the same safeguards in place and it doesn’t have the same kind of research backing as, kind of, more therapy oriented chatbots would have, but of course ChatGPT is available for everyone, easy to access, so it’s already being used a lot.”
“I think it will increase, and more and more people will use them.”