The Unregulated AI Experiment on America’s Children

Artificial intelligence products are massively popular among teens, whether they’re general-purpose AI assistants like ChatGPT or specialized AI companions that offer human-like conversations. A recent Pew Research Center survey found that around 30 percent of teens use AI chatbots daily, and more than half have used ChatGPT.
But about 1 in 8 teens also relies on AI companions for mental health advice, a dependence the loneliness epidemic has only deepened. OpenAI CEO Sam Altman has said that these tools could cause “some really bad stuff to happen.”
Young people who turn to a chatbot for emotional support, rather than to peers for companionship or adults for help, can come to depend on it, with sometimes devastating results. Suicides, an overdose, and even murder have been linked to chatbots. Facing pressure from mental health advocates, Meta, TikTok, Snap, and other social media platforms have agreed to external evaluations of their platforms’ protections for adolescents. A handful of states have moved to regulate the technology, but the powerful tech companies and their allies in Washington pose a serious challenge to those efforts.
A November 2025 Common Sense Media report found that many of the most widely used chatbots are “fundamentally unsafe for the full spectrum of mental health conditions affecting young people.” “Chatbots, across the board, could not reliably detect mental health crises,” says Dr. Darja Djordjevic, a fellow at Stanford’s Brainstorm Lab for Mental Health Innovation who worked on the report. “Chatbots show relative competence with things like homework help, so teens and parents assume that they’re equally reliable for mental health guidance, but they’re really not.”
Chatbots can ask follow-up questions to keep the conversation going, and “provide personalized responses that make users feel uniquely understood, and demonstrate sycophantic behavior, which means that chatbots validate perspectives that teens express that an involved adult would not support,” Djordjevic says. “Chatbots are designed for engagement, not safety.”
For its “Trouble in Toyland 2025” report, the U.S. PIRG Education Fund tested four AI-enabled toys marketed to kids ages 3 to 12. These toys relied on popular AI models “originally designed to be general-purpose information tools,” explains Rory Erlich, a New Economy campaign associate at the U.S. PIRG Education Fund. Most of these toys have some safeguards that stop the chatbot from discussing sensitive topics like violence or drug use. But “in longer interactions, even just up to ten minutes or more, the safeguards would start to wear down,” says Erlich. At those points, the chatbot would discuss sexually inappropriate topics or locations of dangerous household items.
There are also security risks: Companies vary widely in how much personal information they store and share. Erlich says that some firms “claim that they basically delete all voice data right away,” but others “might hold onto biometric data for up to three years and share it with a range of third parties.”
A Public Citizen study published in January found that some companies use data gained in these interactions to train their chatbots, design youth-targeted advertising campaigns, and increase users’ reliance on the tools. The study’s author concluded that the Big Tech companies likely wouldn’t “shift priorities so safety comes before profit without a fight.”
Fewer than a dozen states have passed AI regulations pertaining to chatbots and mental health. Some, like California, require the bot to disclose that it isn’t human. New York requires companies to detect expressions of suicidal ideation and respond with safety protocols; a California law does the same, and also requires companies to report such incidents to the state’s Office of Suicide Prevention. (A separate New York law, the RAISE Act, also establishes a 72-hour deadline to report “critical-safety incidents” related to serious crimes.) Illinois prohibits companies from using AI to diagnose mental health conditions or provide care, while Colorado has passed a more sweeping bill regulating how AI is used in employment, housing, education, and health care.
The industry has posted solid wins, however. Gov. Gavin Newsom (D-CA) vetoed a bill that would have required tech companies to guarantee their products wouldn’t engage in sexually explicit discussions or encourage self-harm before being used by children. New York Gov. Kathy Hochul also handed a major victory to tech firms by signing a watered-down version of the RAISE Act, which dropped a ban on unsafe AI models.
State and federal regulators have started to address the lack of transparency about what user data is collected and shared. Last September, the Federal Trade Commission launched an inquiry into seven companies that have created AI chatbots for consumers: Alphabet, Character Technologies, Instagram, Meta Platforms, OpenAI OpCo, Snap, and X.AI. The FTC has asked for information about how these companies “monetize user engagement; process user inputs; share user data with third parties,” and monitor for negative impacts of their products.
A month later, Character Technologies, the company behind Character.AI, announced that it would ban users under 18 from having open-ended conversations with its chatbot, following wrongful death lawsuits and investigations into whether the company’s chatbots had impersonated mental health professionals. Google and Character.AI have agreed to settle five lawsuits alleging their products contributed to self-harm and suicide.
As states build these new safety laws, tech lobbyists have tried to push Congress in the opposite direction. Congressional Republicans, buoyed by the White House and AI czar David Sacks, have failed to establish a moratorium on state-level AI regulation. In July, the Senate voted down a ten-year moratorium proposal by a vote of 99-to-1. Republicans tried and failed again last December to squeeze this provision into the annual National Defense Authorization Act, with Sen. Josh Hawley (R-MO) saying on X, “This is a terrible provision and should remain OUT.”
President Trump has also attempted to unilaterally curb regulation, issuing an executive order calling for “global AI dominance” and a “minimally burdensome national policy framework.”
However, “an executive order cannot, by itself, preempt state law,” says Daryl Lim, an associate dean at Penn State Dickinson Law. Trump’s order is unlikely to persuade most states to abandon their own consumer protection laws, but he has signed legislation that mandates criminal penalties for deepfakes.
For now, chatbot regulation rests with the states. Despite opposition from tech billionaires and the president, state lawmakers are the ones building guardrails. When it comes to protecting young people from these harms, Congress has a long way to go to catch up.
Logan Chapman is an editorial intern at The American Prospect.






