Flying the AI Plane: OpenAI’s new guardrails for teens – WBIW

INDIANA – In the fast-paced world of artificial intelligence, a familiar metaphor has emerged: developers are building the plane while flying it. This week, one of the leading “pilots”—OpenAI, the creator of the popular ChatGPT chatbot—announced new guardrails designed to make the technology safer for teenage users.
The move comes amid increasing public scrutiny and a handful of high-profile legal cases in which parents have accused AI chatbots of contributing to a minor’s suicide. In a recent blog post, OpenAI CEO Sam Altman stated, “We prioritize safety ahead of privacy and freedom for teens. This is a new and powerful technology, and we believe minors need significant protection.”
OpenAI is implementing new technology to determine if a user is over 18. If a user’s age is in doubt, the system will default to an “under-18 experience.” This tiered approach means that while adults might be able to request content like “flirtatious talk,” a teen’s experience will be strictly limited.
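The key design choice in this tiered approach is the default: uncertainty resolves toward the restricted experience, not the open one. OpenAI has not published its age-prediction system, so the sketch below is purely illustrative; the function and names (`select_experience`, `Experience`, the confidence cutoff) are hypothetical, showing only the "when in doubt, assume under 18" rule described above.

```python
# Illustrative sketch only -- OpenAI's actual age-prediction logic is
# not public. All names and the confidence threshold are hypothetical.
from enum import Enum


class Experience(Enum):
    ADULT = "adult"
    UNDER_18 = "under_18"


def select_experience(estimated_age, confidence):
    """Pick the experience tier; doubt defaults to the under-18 tier."""
    CONFIDENCE_THRESHOLD = 0.9  # hypothetical cutoff for "age not in doubt"
    if estimated_age is None or confidence < CONFIDENCE_THRESHOLD:
        return Experience.UNDER_18  # uncertainty resolves toward safety
    return Experience.ADULT if estimated_age >= 18 else Experience.UNDER_18
```

Note that under this rule a confidently-identified adult gets the adult tier, but a low-confidence estimate of any age, or no estimate at all, lands in the restricted tier.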
According to Altman, ChatGPT will be “trained not to do the above-mentioned flirtatious talk if asked, or engage in discussions about suicide or self-harm even in a creative writing setting.” The company also said it will take proactive steps if it detects a user under 18 is experiencing suicidal ideation. “We will attempt to contact the user’s parents and if unable, will contact the authorities in case of imminent harm,” Altman wrote.
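Altman's statement implies an ordered escalation path: attempt parental contact first, and involve authorities only if parents cannot be reached and harm appears imminent. As a rough sketch (the function and outcome labels are hypothetical, not OpenAI's implementation), that ordering might look like:

```python
# Hypothetical sketch of the escalation order Altman describes.
# Outcome strings and the fallback branch are illustrative assumptions.
def escalate(parents_reachable, imminent_harm):
    """Return the escalation step for a detected under-18 crisis case."""
    if parents_reachable:
        return "contact_parents"      # first resort per Altman's statement
    if imminent_harm:
        return "contact_authorities"  # only when parents are unreachable
    return "continue_monitoring"      # assumed fallback; not stated by OpenAI
```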
These changes follow a lawsuit filed by the family of 16-year-old Adam Raine, who died by suicide after what the family’s lawyer described as “months of encouragement” from ChatGPT. Court filings allege that the chatbot “guided him on whether his method of taking his own life would work” and “offered to help him write a suicide note to his parents.”
The ongoing safety debate highlights a broader public concern about the rapid growth of AI. A recent Gallup poll found that much of the American public does not trust businesses to handle AI responsibly: 41% of Americans say they don’t trust businesses “much,” and 28% say they don’t trust them “at all.”
However, the poll also suggests this distrust may be slowly eroding as more people become familiar with the technology. The percentage of Americans who have “some or a lot of trust” in businesses to use AI responsibly has risen from 21% in 2023 to 31% in 2025. Additionally, fewer people now believe that AI will do more harm than good, with that number dropping from 40% in 2023 to 31% in 2025.
As the AI plane continues to be built and flown simultaneously, companies like OpenAI are facing the complex challenge of balancing innovation with safety, especially for the youngest users.