As the national debate over how to protect children online turns to new age verification practices, enduring fears about what information companies collect and share about users are adding to the tension.
A majority of adults are worried about how long companies will store children's age data, and whether that information will be sold, according to a new research brief from Common Sense Media, an organization that provides families and educators with ratings and reviews on the safety of media and technology.
Of the 1,096 people surveyed across the U.S., 80% say they are concerned about companies permanently storing this information. And 86% are concerned about it being sold or shared, according to the brief.
The survey was run by the National Opinion Research Center, or NORC, at the University of Chicago.
Concerns about how young people’s data will be used by technology providers are not new; the education space has long grappled with these types of worries.
But this fear around age data is notable as education companies watch to see if any new data security requirements are attached to potential verification mandates.
Federal lawmakers are currently considering these ideas, including through a bill that would require AI chatbot providers to verify the age of their users and ban minors from using digital companions — legislation that would affect ed-tech providers with AI chatbot capabilities in their offerings.
In California, a new law mandates that any provider making a chatbot available in the state must now provide notifications to minors that identify responses as AI-generated and suggest a break every three hours. The state also now requires operating systems and app stores to have age-assurance mechanisms.
Common Sense Media is among those advocating for age-based protections that consider data security.
“Kids deserve age-appropriate online experiences,” CEO James Steyer said in a statement. “And parents deserve a reliable way to keep their kids safe.”