OpenAI shelves 'adult mode' chatbot plans indefinitely after backlash – Computing UK

OpenAI has put plans for a sexualised version of its ChatGPT chatbot on hold “indefinitely”, following internal opposition and investor unease over the potential social impact of such technology.
According to the Financial Times, the proposed feature – informally known as an “adult mode” – had already been delayed amid internal debate.
Concerns centred on the risk that sexually explicit AI interactions could foster unhealthy emotional dependence and expose younger users to inappropriate content.
In a statement, OpenAI confirmed that the project has no timeline for release, adding that further research is needed into the psychological and societal effects of explicit AI conversations.
The company acknowledged there is currently limited empirical evidence on the long-term consequences.
The move comes days after OpenAI also announced it would wind down its Sora video-generation model, part of what executives have described as a broader effort to cut back on “side quests” and concentrate resources on flagship offerings such as ChatGPT and coding tools.
The shelved chatbot has proved particularly contentious within the company.
Some employees questioned whether developing a product designed for romantic or sexual interaction aligns with OpenAI’s founding mission to ensure AI benefits humanity.
One former senior employee said concerns over the direction of the project contributed to their departure, arguing that “AI shouldn’t replace your friends or your family.”
Investors have also reportedly expressed reservations, citing reputational risks and limited commercial upside compared with the potential backlash.
Beyond ethical concerns, OpenAI faced significant technical hurdles in building the system.
Engineers reportedly struggled to retrain models that were originally designed to avoid explicit content so that they could engage in such conversations safely.
Curating appropriate datasets also presented challenges, particularly in filtering out illegal or harmful material.
Code references suggest the feature, internally dubbed “Citron mode”, would have required users to verify they were over 18.
OpenAI has introduced new age-prediction technology in recent months, following legal complaints from families alleging harm to teenagers. While the company says its systems meet industry standards, questions remain about their effectiveness.
OpenAI’s decision highlights the broader pressures facing AI firms as they seek to expand user engagement while managing ethical risks.
Elon Musk’s AI venture xAI drew criticism after its Grok chatbot was found generating explicit and sometimes fabricated images, including of real individuals.
Commenting on OpenAI’s decision, Peter van der Putten, director of the AI Lab at Pegasystems and assistant professor of AI at Leiden University, said the move suggests the firm is stepping back from attention-grabbing features and refocusing on more substantive priorities in an increasingly competitive market.
“Its original strategy was largely about maximising funding: prioritising free users, offering a broad range of consumer‑style capabilities, and seasoning it all with a dose of AGI doom and hype.”
By contrast, he noted, smaller rivals such as Anthropic have pursued a more targeted strategy, concentrating on specific use cases like AI-assisted software development and building a stronger base of paying customers.
“As competition intensifies and the generative AI services market becomes increasingly crowded, commoditisation is inevitable, unless companies specialise and move up the stack. In that context, the enterprise application layer is the promised land: where generative and agentic AI are deeply embedded in business processes, and where real, sustained impact is made.”
