California has become the latest state to enact comprehensive artificial intelligence safety legislation aimed at protecting children online, as policymakers’ focus increasingly shifts from “What can AI do?” to “How safe is AI?”
Gov. Gavin Newsom signed a package of bills on Oct. 13 establishing new requirements for social media platforms, AI chatbot providers, and app developers that serve minors.
For K-12 education companies, the new laws raise the bar on compliance, transparency, and product development, especially for those marketing AI-enabled tools to students or schools across California – home to more than 5.8 million K-12 students. Companies that violate the requirements face stiffer penalties under the new rules.
“Emerging technology like chatbots and social media can inspire, educate, and connect – but without real guardrails, technology can also exploit, mislead, and endanger our kids,” Newsom, a Democrat, said in a statement. “We can continue to lead in AI and technology, but we must do it responsibly — protecting our children every step of the way.”
California’s new laws introduce several measures that affect how vendors design, market, and monitor AI-powered products used by children and teenagers.
“This is a strong initial attempt to put kids first in their mental health and social well-being,” Jeffrey Riley told EdWeek Market Brief, especially as students are using AI, often without adult supervision or understanding.
Riley, the former Massachusetts commissioner of elementary and secondary education, is the executive director of Day of AI, which provides AI literacy resources out of the Massachusetts Institute of Technology’s Responsible AI for Social Empowerment and Education, or RAISE, initiative.
As different states and departments of education issue their own versions of AI policies, vendors will need to be aware of the unique requirements of each agency, as well as the overall responsible development of AI, especially as the technology changes so rapidly, Riley said.
Earlier this year, Ohio became the first state to mandate the creation of AI frameworks in all public K-12 schools. Many other states, like North Carolina and Oregon, have either issued some form of AI guidance for students or are currently working on developing a model.
For companies serving the education market, California’s new laws indicate continued scrutiny over how AI tools interact with minors, but they also represent an area of potential competitive advantage as districts increasingly demand assurances around ethical and responsible AI design.
“Companies are recognizing that if they’re going to do business, they have to be thoughtful about what AI use looks like,” Riley said. “If I was an AI vendor, I’d want to make sure I have the answers because [leaders] are starting to ask the questions, which maybe in the past, they haven’t.”