Character.AI to ban users under 18 from talking to its chatbots

Character.AI is banning users under 18 from interacting with its chatbots.
The move follows concerns about AI's impact on young people and a lawsuit over a 14-year-old's suicide.
Character.AI cited feedback from regulators, safety experts, and parents for the decision.
Character.AI is banning users under 18 from engaging in conversations with its chatbots after facing scrutiny over how young people interact with its virtual companions.
The California-based startup announced on Wednesday that the change would take effect by November 25 at the latest and that it would limit chat time for users under 18 ahead of the ban.
It marks the first time a major chatbot provider has moved to ban young people from using its service, and comes against a backdrop of broader concerns about how AI is affecting the millions of people who use it each day.
Founded in 2021, Character.AI hosts virtual avatars that can take on the persona of real or fictional people.
A Character.AI spokesperson told Business Insider that it was "taking extraordinary steps for our company and the industry at large." They added that over the past year, the startup has invested in creating a dedicated experience for users under 18.
Character.AI said in a blog post that it was making the change after receiving feedback from "regulators, safety experts, and parents." The startup also said it would roll out age-gating technology and establish an AI safety lab to research future safeguards.
In February 2024, 14-year-old Sewell Setzer III died by suicide after talking with one of Character.AI's chatbots. His mother, Megan Garcia, filed a civil lawsuit against the company in October that year, blaming the chatbot for her son's death, alleging negligence, wrongful death, and deceptive trade practices. Character.AI said it does not comment on pending litigation.
Earlier this month, Character.AI took down a chatbot based on paedophile Jeffrey Epstein, following a story from the Bureau of Investigative Journalism.
OpenAI is also facing a lawsuit filed by the parents of a young person who died by suicide after talking to its chatbot. The suit alleges ChatGPT "actively helped" 16-year-old Adam Raine explore suicide methods over several months before he died on April 11. OpenAI previously told Business Insider that it was saddened by Raine's death and that ChatGPT includes safeguards.
Earlier this week, OpenAI said that about 0.15% of its more than 800 million weekly users send messages to ChatGPT about suicide, equivalent to more than one million people. OpenAI said in a blog post that it has updated ChatGPT to "better recognize and support people in moments of distress."
Read the original article on Business Insider