Radically innovative technologies like generative artificial intelligence (AI) and personalized chatbots are increasingly accessible to the general public. Major technological shifts such as these, while they have countless upsides, are bound to contain some drawbacks and dangers that have to be taken seriously. For example, there has been a great deal of attention paid to a number of sobering cases in which troubled youth interacting with AI platforms like ChatGPT have found these chatbots all too willing to reinforce negative inputs, even encouraging suicide. However, instead of having a serious conversation about how we can harness the benefits of personalized chatbots while mitigating potential harms, many lawmakers have leapt to banning minors from having access to AI platforms entirely.
For example, the U.S. Senate Judiciary Committee is expected to quickly mark up and vote on the Guidelines for User Age-verification and Responsible Dialogue (GUARD) Act, introduced by Sen. Josh Hawley (R-Mo.). This bill would mandate full age verification for access to any AI chatbot that “produces new expressive content or responses” and ban users under age 18 from using any “companion” chatbot that “provides adaptive, human-like responses to user inputs.”
The GUARD Act, like other attempts to mandate age verification and ban minors’ access to general-use digital platforms, is unconstitutional, even if its proponents mean well. Like similar bills before it, the GUARD Act violates the First Amendment in several respects. There is ample court precedent affirming that blanket age-verification mandates are an unconstitutional restraint on both adults’ and minors’ access to speech except in narrow circumstances, such as limiting access to adult websites. As TechFreedom’s Andy Jung explains, the GUARD Act is likely to fail court scrutiny both because it is not content neutral (it would block minors’ access to a great deal of protected speech) and because it is far from the least restrictive means of protecting minors from potentially harmful content.
To separate underage users from others, the GUARD Act requires strict age verification, specifically barring chatbot developers from merely estimating user age or relying on self-attestation. This guarantees that all AI chatbot users would have to submit to invasive methods such as biometric scans or uploading government IDs, with all of the privacy and security risks that R Street has long warned such mandates create.
Even if, for the sake of argument, the bill were constitutional, an outright ban on minors’ access to generative AI is likely to do more harm than good. Proponents of such policies, mirroring previous attempts to ban minors from social media, focus on a small number of worst-case outcomes from youth interactions with generative AI to the complete exclusion of all the ways young people can benefit and learn from chatbots.
A recent Pew Research Center survey shows that a majority of U.S. teens use chatbots, and more of them say AI bots have a positive impact on their lives than say the opposite. In fact, teen users regard chatbots as both a learning aid and a source of entertainment. Even for younger users, AI companions show potential as a key tool for personalized learning, highlighting how counterproductive it could be to enact blanket age-based bans on all such tools. Furthermore, as AI tools continue to enter the workforce across a huge variety of industries, kids and teens who grow up learning to work with both the capabilities and limitations of AI chatbots and personalized agents will be much better equipped to succeed.
As with social media, a better approach to protecting kids from harmful interactions with AI products is to place parental choice at the core of any policymaking. A child’s parents or caregivers will always be better equipped to understand the level of access and supervision their individual children require with respect to digital tools. The government can play a role in helping provide resources that parents can turn to for education about AI safety. For example, the AWARE Act, introduced by Rep. Erin Houchin (R-Ind.), requires the Federal Trade Commission to develop an educational resource on companion chatbots.
At the state level, Idaho’s recently enacted SB 1227 directs public schools to establish guidelines for teaching K-12 students “the knowledge and skills required to understand what generative AI is, how it works, its appropriate and age-appropriate uses, and how to use it ethically, securely, and transparently,” while also providing resources for parents. Other states should consider following this model. In combination with resources developed by private parties, and alongside the safety measures increasingly being implemented by AI platforms themselves, legislators can play a role in helping keep kids safe. However, they should not be in the business of deciding for parents when and whether their children can engage with chatbots or other emerging AI technologies.