With roughly two-thirds of surveyed American teens reporting chatbot use, a bipartisan group of Pennsylvania lawmakers thinks it’s time the state mandated safety protocols to reduce the risk that such technology could harm young users’ mental health.
State efforts to force America’s powerful tech companies to comply with regulations remain at odds with the stated preferences of the Trump administration, whose National AI Legislative Framework says, “a patchwork of conflicting state laws would undermine American innovation and our ability to lead in the global AI race.”
But Lehigh Valley state Sen. Nick Miller, a co-sponsor on Pennsylvania’s bill to implement safety protocols for AI chatbots, says state lawmakers have a responsibility to protect youth.
“This is something we should be doing,” Miller said, adding that the “need is real” and arguing that a likely legal battle would be worth it.
The bipartisan state Senate bill is now under consideration in the state House.
Here’s what you need to know about the effort.
Pa. Senate Bill 1090 is relatively narrow compared with some of the more wide-ranging regulations other states have proposed. It focuses on mandating design features that aim to protect vulnerable users of AI-powered chatbots.
Those design features include: disclosing that chatbots are not human, reminding users to take breaks, and providing referrals to crisis services, such as the crisis hotline 988, if users indicate thoughts of suicide or self-harm.
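The three mandated features described above amount to a thin safety layer around a chatbot. As a rough illustration only, the sketch below shows what such a layer might look like; the keyword list, break interval, and notice wording are all assumptions for illustration, not language from the bill.

```python
# Illustrative sketch of the safeguards described in SB 1090.
# The keyword list, break interval, and notice text are assumed
# values for demonstration, not requirements from the bill itself.

SELF_HARM_KEYWORDS = {"suicide", "kill myself", "self-harm", "hurt myself"}
BREAK_REMINDER_EVERY = 10  # remind the user every 10 messages (assumed)

def safety_notices(user_message: str, message_count: int) -> list[str]:
    """Return any mandated notices to display alongside the bot's reply."""
    notices = []
    # Disclosure that the chatbot is not human, shown at the start.
    if message_count == 1:
        notices.append("Reminder: you are chatting with an AI, not a human.")
    # Periodic reminder to take a break.
    if message_count % BREAK_REMINDER_EVERY == 0:
        notices.append("You've been chatting for a while. Consider taking a break.")
    # Crisis referral if the message suggests thoughts of self-harm.
    text = user_message.lower()
    if any(kw in text for kw in SELF_HARM_KEYWORDS):
        notices.append("If you're in crisis, call or text 988 for free, confidential support.")
    return notices
```

Real systems use trained classifiers rather than keyword matching to flag distress, which is part of why critics question how reliably such features work in practice.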
Miller’s team said the Pennsylvania legislation draws on bills from states such as South Carolina and New York.
State-level bills across the country are in various stages of consideration and implementation. South Carolina’s, signed into law in February, is an expansive version of what has come to be known as Age-Appropriate Design Code legislation.
The legislation seeks to improve safety for young users of online applications, such as social media platforms, by mandating design elements that allow for easier control over a user’s experience on a website or application.
Those mandated controls include the ability to:

• Limit use time.

• Limit the financial value of purchases that can be made.

• Block messages and comments from accounts not already connected to a minor’s accounts.

• Restrict visibility of a minor’s account.

• Limit location sharing.

• Opt out of personalized recommendation algorithms.

• Restrict targeted ads.
New York’s attorney general has signed onto high-profile lawsuits and worked with legislators on regulations that block social media platforms from providing “an addictive feed” to minors without parental consent.
New York lawmakers are also working on a bill similar to Pennsylvania’s proposed legislation in its focus on mandating safety features for AI-powered chatbots. The state ranks highly on the Childhood Index, which highlights policies that regulate technology use for youth.
The index is produced by the Anxious Generation Movement, a nonprofit inspired by Jonathan Haidt’s bestseller “The Anxious Generation.”
The book, published in 2024, resonated with parents, educators and other child safety advocates who found its arguments tying technology use to a rise in adolescent mental health issues compelling.
Highlighted policies on the Childhood Index include cellphone restrictions in schools, social media age minimums and the kind of design safety regulations being considered in Harrisburg.
Pennsylvania does not rank highly on the Childhood Index, although many of the policies the nonprofit promotes, such as statewide restrictions on phones in schools, are under consideration in the General Assembly.
The index notes that Gov. Josh Shapiro has endorsed statewide phone-free schools legislation and supported safety protections related to AI companion bots — digital characters designed to respond to users in a conversational manner — and that Attorney General Dave Sunday has supported legal efforts to regulate AI safety for youth.
Recent high-profile legal cases have seen juries side with plaintiffs against Big Tech, including a Los Angeles jury that found Meta and YouTube liable for harms to children and a New Mexico jury that held Meta responsible for knowingly promoting a harmful product designed to be addictive.
As tech companies file appeals and face further litigation, state lawmakers are endorsing legislation that would impose consequences for failing to implement safety controls into AI-powered products.
Lourdes Sánchez, a school psychologist who specializes in student mental health and well-being and works with Allentown School District students, said in an email that her chief concerns about AI companion bots include “dependency, reinforcement of maladaptive thinking, data privacy risks, and the potential substitution of these tools for genuine therapeutic relationships.”
Students often share highly personal information with AI-powered chatbots and companion bots, Sánchez said, which creates “significant data vulnerability.”
“While these systems can simulate empathy, they cannot form authentic human relationships,” Sánchez said. “This may leave students feeling ‘heard’ while avoiding the deeper work of building real connections, potentially increasing social isolation over time.”
Even with safety features, AI tools are “not reliably equipped to recognize or respond to mental health crises,” Sánchez said, adding that the consequences of missing or mishandling a serious distress signal can be severe or even fatal.
Students experiencing a mental health crisis are encouraged to speak with a highly trained professional in the school district, such as a school counselor, social worker or school psychologist.
The Allentown School District does not permit access to AI companions on school networks. Students are limited to authorized AI tools that have been reviewed for safety and privacy, including Gemini — available to all students — and the sixth-grade Coursemojo pilot.
Google for Education offers a version of Gemini that contains privacy protections, including assurances for district users that their data, documents and prompts are not used to train the company’s public models, ASD spokesperson Melissa Reese noted.
The district established a mental health task force three years ago. The overall goal is to “develop informed, thoughtful recommendations that prioritize student safety, ethical use of technology, and overall well-being,” Sánchez said.
Lawmakers have a role to play, Sánchez added, saying they “should require clear, front-facing disclosure, at the point of interaction, not buried in terms of service, that these tools are not licensed clinicians, cannot provide therapy, and have significant limitations, particularly in responding to mental health crises.”
Students in Pennsylvania cannot independently consent to mental health treatment until age 14, so lawmakers should be especially vigilant about how younger children and early adolescents interact with these technologies, Sánchez said, adding that age-appropriate safeguards and parental transparency are essential.
“In addition, legislation should clearly define accountability and liability when AI systems fail to appropriately respond to a student in crisis. Without meaningful accountability, there is little incentive for companies to prioritize safety over engagement,” Sánchez said.
Lawmakers should also invest in independent, longitudinal research on the impact of AI companion bots on adolescent mental health, Sánchez said.
“The current evidence base is limited and largely industry-funded,” Sánchez said. “Publicly funded research is essential to ensure that future policy decisions are grounded in unbiased data and real-world outcomes.”
Data on Allentown students’ use of AI chatbots and companion bots is limited, Sánchez said, adding that national trends are concerning.
In addition to the 2025 Pew Research Center survey that found roughly two-thirds of surveyed American teens reported chatbot use, a 2025 Common Sense Media survey found 72% of surveyed teens had used AI companion bots, with 52% reporting regular use (a few times a month or more).
Highlighted findings from the Common Sense Media survey include the following:
• One-third of surveyed teens use AI companion bots for social interaction and relationships.
• Trust in the tools varies, with older teens expressing more skepticism of information or advice that AI companion bots provide.
• Nearly one-third of surveyed teens found AI conversations “as satisfying or more satisfying than human conversations.”
• One-third of surveyed AI companion bot users have chosen to speak to an AI companion bot over a human about something important or serious.
• One-third of surveyed AI companion bot users said they’d felt uncomfortable with something an AI companion bot said or did.
• 80% of surveyed AI companion bot users said they spend more time with human friends than with AI companion bots.
Based on its research, Common Sense Media recommends that no one under 18 use AI companion bots.
If you or someone you care about is experiencing a mental health crisis, phone and text support is available through the 988 hotline.
Copyright 2026 The Morning Call. All rights reserved. The use of any content on this website for the purpose of training artificial intelligence systems, algorithms, machine learning models, text and data mining, or similar use is strictly prohibited without explicit written consent.