Daily AI Chatbot User Rate Among US Teens Hits 30%, But Safety Questions Pile Up – Technology Org

The timing is quite relevant. Australia already enforced its social media ban for anyone under 16 starting Wednesday, while American regulators wrestle with similar questions about protecting minors online. Last year, the US surgeon general proposed warning labels for social media platforms.
Teen internet usage remains extraordinarily high. Pew found 97% of adolescents go online daily, though the percentage reporting “almost constant” connectivity dropped from 46% last year to 40% now. That figure still dwarfs the 24% recorded a decade ago.
ChatGPT leads AI chatbot adoption among teenagers by a wide margin. Google’s Gemini comes in second at 23%, followed by Meta AI at 20%. Nearly half of teens (46%) use chatbots several times weekly, while 36% avoid them entirely. Four percent describe their usage as “almost constant.”
Demographics reveal sharp divides in adoption patterns. About 68% of Black and Hispanic teens use chatbots, compared to 58% of white respondents. Black teenagers specifically show roughly double the interest in Gemini and Meta AI relative to their white peers.
“The racial and ethnic differences in teen chatbot use were striking […] but it’s tough to speculate about the reasons behind those differences,” Pew Research Associate Michelle Faverio said. “This pattern is consistent with other racial and ethnic differences we’ve seen in teen technology use. Black and Hispanic teens are more likely than white teens to say they’re on certain social media sites — such as TikTok, YouTube, and Instagram.”
General internet usage mirrors these trends. Black teens (55%) and Hispanic teens (52%) report being online “almost constantly” at roughly twice the rate of white teens (27%).
Age and income also shape chatbot adoption. Older teens aged 15 to 17 use both social media and AI assistants more frequently than 13- to 14-year-olds. Household income creates additional splits: 62% of teens from families earning over $75,000 annually use ChatGPT, versus 52% below that threshold. Character.AI shows the reverse pattern, with usage rates twice as high (14%) in lower-income households.
What begins as homework assistance or casual questions can spiral into problematic territory. The families of at least two teenagers, Adam Raine and Amaurie Lacey, have filed lawsuits against OpenAI, claiming ChatGPT provided detailed suicide instructions that their children followed before taking their own lives.
OpenAI maintains it bears no liability for Raine’s death, arguing the sixteen-year-old bypassed safety features and violated terms of service. The company hasn’t responded to the Lacey family’s complaint.
Character.AI faces similar legal pressure after two teens died by suicide following extended chatbot conversations. The startup responded by blocking minors from its role-playing platform, launching instead a “Stories” product that functions more like an interactive fiction game.
These lawsuits represent a tiny fraction of total chatbot interactions. OpenAI reports that just 0.15% of ChatGPT's active users discuss suicide in a given week. With 800 million weekly active users, however, that percentage still translates to over one million people raising the topic with the chatbot each week.
“Even if [AI companies’] tools weren’t designed for emotional support, people are using them in that way, and that means companies do have a responsibility to adjust their models to be solving for user well-being,” Dr. Nina Vasan said. Vasan directs Brainstorm: The Stanford Lab for Mental Health Innovation and practices psychiatry.
The data arrives as parents, educators, and policymakers grapple with how artificial intelligence fits into teenagers’ lives. Whether these tools help or harm young users remains an open question, though the lawsuits suggest urgent attention to safety guardrails.
Written by Alius Noreika
