OpenAI CEO Sam Altman says ChatGPT will soon allow 'erotica' for adult users – USA Today

OpenAI CEO Sam Altman said Oct. 15 that his company’s decision to roll out age-gated features on its chatbot ChatGPT “blew up on the erotica point” more than he had intended.
Altman previously announced Oct. 14 in a post on X that the company will “relax” some restrictions on the artificial intelligence chatbot and release a version that “behaves more like what people liked about 4o.” Starting in December, OpenAI will allow mature content for ChatGPT users who verify their age on the platform, Altman said, after the chatbot had been made more restrictive for users in mental distress.
“As part of our ‘treat adult users like adults’ principle, we will allow even more, like erotica for verified adults,” Altman wrote in the Oct. 14 post.
But after receiving backlash over the announcement, Altman said Oct. 15 in another X post that the company is not rolling back any restrictions related to “mental health” and defended the company’s “treat adult users like adults” principle.
“As AI becomes more important in people’s lives, allowing a lot of freedom for people to use AI in the ways that they want is an important part of our mission,” Altman wrote on Oct. 15. “But we are not the elected moral police of the world.”
He added that ChatGPT users experiencing mental health crises will be treated “very different” and the chatbot would not be allowed to create “things that cause harm to others.”
The rolled-back restrictions were part of an update introduced after a lawsuit filed in August against OpenAI by a family who claimed the chatbot had encouraged their son to take his own life, according to the BBC.
The British broadcaster reported that Matt and Maria Raine, parents of 16-year-old Adam Raine, filed the suit in California, arguing that the program validated his “most harmful and self-destructive thoughts.”
On Aug. 26, OpenAI published a note that said there have been moments where “our systems did not behave as intended in sensitive situations.” The company then published another note on Sept. 2 outlining restrictions the company would place on ChatGPT.
Jay Edelson, a lawyer representing the family, told the BBC that the restriction announcement was “OpenAI’s crisis management team trying to change the subject” and called for ChatGPT to be taken down.
“Rather than take emergency action to pull a known dangerous product offline, OpenAI made vague promises to do better,” Edelson said.
In the Oct. 14 social media post, Altman said that “we have been able to mitigate the serious mental health issues and have new tools.” In the Oct. 15 post, he said minors “need significant protection” as artificial intelligence technology permeates society.
USA TODAY reached out to Edelson for comment on the rollback of some restrictions but did not receive a response.
Contributing: Reuters

