Concerns over AI safety grow after lawsuit against OpenAI – 10TV


COLUMBUS, Ohio — The family of a 16-year-old boy is suing OpenAI, alleging that its chatbot, ChatGPT, contributed to their son’s death by offering guidance on suicide.
According to the lawsuit, the AI system provided the teen with specific advice on methods and even offered to draft a suicide note. The family claims the chatbot’s responses played a role in the tragedy.
In a statement, OpenAI extended its sympathies to the family and said it is reviewing the legal filing.
On Tuesday, the company published a blog post stating, "We will keep improving — and we hope others will join us in helping make sure this technology protects people at their most vulnerable."
As artificial intelligence becomes more integrated into daily life, experts are raising concerns about its role in mental health support — especially among young users.
Austin Lucas, associate director of the Ohio Suicide Prevention Foundation, said while AI can be a helpful tool, it’s still in early stages and prone to inaccuracies.
“It’s still so new. We’ve seen inaccurate answers come out of AI,” Lucas said.
Lucas emphasized that AI should be used cautiously and always alongside evidence-based practices.
“Research has come out about using AI in a therapist’s office as a tool. I think that’s how we need to think about AI — as a tool that can be helpful, but you should always rely on evidence-based best practices,” he said.
He also urges parents to monitor their children’s online interactions and have open conversations about AI. 
“Parents, make sure you know what your kids are doing. Have positive conversations about AI with children,” Lucas said.
When it comes to mental health crises, Lucas says human connection remains essential. 
"Maybe ChatGPT comes into the picture as a supplementary tool, but for now, we are promoting 9-8-8, because with AI like ChatGPT you're messaging a machine — with 9-8-8, you're messaging a person who actually has this training," he said.
If you or someone you know is experiencing a mental health crisis, call or text the 988 Suicide & Crisis Lifeline for immediate support.
