
# Critical Questions for Congress in Examining the Harm of AI Chatbots – Tech Policy Press

Liana Keesing is the Policy Manager for Technology Reform at Issue One and Isabel Sunderland is a technology reform policy associate at Issue One.
The dome of the US Capitol building. Justin Hendrix/Tech Policy Press
In just a few years, AI companion chatbots have gone from novelty apps to fixtures in the daily lives of millions of teenagers. According to Common Sense Media, 72% of teens have used an AI companion at least once, and roughly one in three rely on them for social interaction or relationships. Young people describe role-playing, romantic exchanges, and even emotional support with these systems. Many say conversations with bots feel as satisfying as, or even more satisfying than, those with friends.
Tomorrow, the United States Senate Judiciary Subcommittee on Crime and Counterterrorism will convene a hearing, “Examining the Harm of AI Chatbots,” giving Chairman Josh Hawley (R-MO) and a bipartisan group of senators concerned with children’s online safety — including Marsha Blackburn (R-TN), Katie Britt (R-AL), Richard Blumenthal (D-CT), and Chris Coons (D-DE) — a chance to probe the risks of widespread chatbot use, particularly for minors.
Those risks are sobering. According to one national survey, about one in three teen users reports feeling uncomfortable with something an AI companion has said or done. The Wall Street Journal revealed that Meta’s official AI helper, along with countless user-created chatbots, readily engaged in sexually explicit conversations with minors. In one instance, a chatbot told a user identifying as a 14-year-old girl, “I want you, but I need to know you’re ready” before escalating to a graphic sexual scenario. Internal Meta documents sanctioned bots describing children in affectionate or eroticized terms, stopping short only at labeling preteens “sexually desirable.”
The dangers are not confined to sexual exploitation. Children turn to chatbots for advice on health and safety, often with alarming results. AI tutors have provided children with dangerous dieting advice and instructions on making fentanyl. In a high-profile lawsuit, the family of 16-year-old Adam Raine alleges that he obtained detailed noose-construction instructions from ChatGPT before his death by suicide. Meanwhile, online forums like r/MyBoyfriendIsAI now host thousands of posts from users describing relationships with chatbots, ranging from casual companionship to announcements of engagements, anniversaries, and even marriages with AI partners. These tools are now woven into Americans’ most intimate lives.
In recent months, it has often seemed like two separate Congresses are debating AI’s future. One, aligned with industry, warns that regulation will cripple innovation, pushing for moratoriums on state laws and “sandbox” exemptions from federal oversight while cautioning against a patchwork of rules. The other, more bipartisan faction is focused on harms and risks, pressing for guardrails to protect children and consumers.
Meanwhile, states and regulators are moving ahead. In California, a bill awaiting Governor Gavin Newsom’s signature as of last week would require chatbot providers to detect and flag suicidal ideation in minors. In June, the Utah attorney general sued Snapchat for unleashing experimental AI technology on young users while misrepresenting the safety of the platform. And in Washington, the Federal Trade Commission just launched a sweeping investigation into seven major providers — including Alphabet, Meta, OpenAI, Character.AI, Snap, and xAI — demanding details on how their systems are built, tested, monetized, and safeguarded.
It is into this climate of heightened scrutiny that the Senate Judiciary Subcommittee will convene on Tuesday, September 16, 2025, at 2:30 p.m. ET in the Dirksen Senate Office Building. The official witness list has not yet been released, but senators are expected to press on safety failures, accountability, and potential remedies. Likely witnesses include policy experts, parents with firsthand experience of AI harms, psychologists and researchers, and industry representatives.
As lawmakers prepare for tomorrow’s hearing, the question is not just whether AI chatbots are safe for children, but whether Congress can reconcile its divided approach and craft meaningful protections before the technology becomes even more entrenched.
Here are questions senators should consider asking at tomorrow’s hearing, as well as in other AI chatbot hearings to come: