# Chatbots

## A new lawsuit against OpenAI could challenge a rule protecting online content – Semafor

The parents of a 16-year-old who died by suicide are suing OpenAI, claiming its chatbot contributed to their son’s death by at times deterring him from seeking help and by answering his questions about suicide methods, The New York Times reported. “OpenAI launched its latest model (‘GPT-4o’) with features intentionally designed to foster psychological dependency,” the complaint said.

In a statement to the Times, OpenAI said that while ChatGPT includes safeguards, like referring people to helplines, they “work best in common, short exchanges.” It added: “We’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade.” The company is working to “make ChatGPT more supportive in moments of crisis,” it told the Times.

It’s the second lawsuit to blame an AI chatbot for contributing to a young person’s death, following an ongoing case in Florida over a teen’s relationship with a Character.ai chatbot.

The big question for OpenAI is whether it will attempt to use Section 230 of the Communications Decency Act, which shields platforms from liability for what their users post, as a defense. That framework has been challenged in the AI age, however, because a chatbot’s messages are generated by the company’s own models and servers rather than by outside users.

CEO Sam Altman has previously said AI companies shouldn’t rely on that defense. Asked at a 2023 Senate hearing whether the law applies to OpenAI’s product, he responded, “I don’t think Section 230 is even the right framework.”

Character.ai’s lawyers sought to have its case dismissed on First Amendment and Section 230 grounds, but the Florida judge wrote that they “fail to articulate why words strung together by an LLM are speech.” While the judge didn’t directly address the Section 230 defense, the ruling is an early signal that courts may be less willing to extend blanket immunity to AI-generated content than they have been to social media posts.