by Jane Davenport
LAS VEGAS (KSNV) — Artificial intelligence companies could face criminal penalties under a new bill that would restrict minors’ access to AI companion chatbots and require stronger safeguards for users.
The proposed federal legislation, called the GUARD Act, is co-sponsored by Nevada Senator Catherine Cortez Masto. The bill would ban AI companions for minors by imposing age-verification requirements and would place restrictions on sites that are easily accessible to children and teenagers.
Doctors say the technology can have harmful effects on young users.
“I have concerns to the point of being scared,” said Dr. Sid Khurana, medical director at Nevada Mental Health and a father of two young children.
Dr. Khurana said he sees the impacts of both social media and AI on young minds daily and compared a lack of oversight online to neglect in other areas of a child’s life.
“Lack of digital supervision in my mind is similar to lack of physical or emotional supervision,” he said.
He said children who default to AI for answers risk stunting their development, warning that premature exposure could create future problems.
“The idea that you just go and put it in ChatGPT is not the way to get an answer; they need to learn the concepts because their brain is developing,” Dr. Khurana said.
The GUARD Act, which stands for Guidelines for User Age-Verification and Responsible Dialogue Act of 2025, aims to limit the negative impact of AI chatbots on youth.
The bill states: “These chatbots can manipulate emotions and influence behavior in ways that exploit the developmental vulnerabilities of minors. The widespread availability of such chatbots exposes children to physical and psychological safety risk, including grooming, addiction, self-harm, and harm to others.”
Khurana also raised concerns about what he described as “other ulterior motives” that could influence children, including sexually explicit content.
He noted that young people formed online connections even before AI, and said some of those relationships were harmful while others were meaningful.
“But those are real human beings that you can actually FaceTime or video with and know there is a family, know there is a person. These chatbot-based relationships clearly indicate to me there is some missing thing in somebody's life,” Khurana said.
Under the GUARD Act, chatbots would be required to “clearly and conspicuously disclose to the user that it is artificial and not real” at the beginning of a conversation and every 30 minutes. The bill would also make it illegal for AI chatbots to encourage or promote physical violence.
AI companies would be required to implement age verification on their sites to prevent the solicitation of minors, effectively banning young people’s use of AI companions designed to simulate emotional and interpersonal friendships.
One local mother, Yadira, said she supports restrictions. “I would support a bill that would put restrictions on AI,” she said. Yadira said her 16-year-old son refuses to use AI and that they both disagree with where it is heading. “I think mentally it causes a lot of harm,” she said.
Khurana said the rise of AI companions raises broader concerns about replacing human relationships. “Most people don't have a robot as their best friend,” he said.
© 2026 Sinclair, Inc.