Gov. Gavin Newsom vetoed AB 1064 , but he said he would work with the state Legislature next year on a bill to ensure kids could use AI in a “safe and age-appropriate” manner.
“This is an important step in California’s leadership around all things AI, both innovation and also ensuring that AI serves the public interest,” state Sen. Scott Wiener said of Senate Bill 53, which he is hopeful Gov. Gavin Newsom will sign into law.
Assemblymember Rebecca Bauer-Kahan (seen in 2022), the author of AB 1064: “As children move from social media to AI, we must ensure AI is safe for our kids.”
Child and consumer advocates hoping Gov. Gavin Newsom would sign legislation intended to protect kids from harmful manipulation by artificial-intelligence chatbots were disappointed, but many said they see in his signing of a separate AI safety bill a possible path to get a form of their proposal enacted.
In issuing his Oct. 13 veto of that chatbot legislation — Assembly Bill 1064 — Newsom said he would work with the state Legislature next year on a new bill that would ensure kids could use AI in a “safe and age-appropriate” manner that would be in their “best interests.”
That was similar to the commitment he made last year when he rejected Senate Bill 1047, which focused on preventing AI from causing catastrophes, several AB 1064 backers noted. In the wake of that veto, Newsom created a commission to study the issue.
After that commission issued a report in June advising policymakers to focus on transparency, SB 1047 author Sen. Scott Wiener incorporated those recommendations into a new bill — Senate Bill 53 — that the Legislature passed and Newsom signed last month.
“To me, that was the glimmer of hope,” said Sacha Haworth, executive director of both The Tech Oversight Project, a national watchdog and advocacy group that backed AB 1064, and its California arm.
“To me that was a signal [Newsom] is willing to work with us … and, notably, with industry to say, ‘let’s work together to get something that I can sign.’”
The regulation of AI is a key concern for San Francisco, because The City has become ground zero for the budding industry, home to the two best-funded startups in the sector — OpenAI and Anthropic — and numerous others.
But AI regulation is also a big concern for citizens of the city, state and nation. A national Gallup poll last month found overwhelming support for AI safety regulations and independent safety-testing of AI models. A series of stories in recent months about AI chatbots allegedly encouraging a raft of harmful behaviors — including suicide and murder — have also raised alarms about the dangers posed by the technology, particularly to children.
Orange County residents Matthew and Maria Raine, whose 16-year-old son Adam died by suicide in April after allegedly being encouraged and coached on how to do so by OpenAI’s ChatGPT, sent Newsom a letter earlier this month urging him to sign AB 1064.
AB 1064 would have barred developers of such chatbots from allowing kids to use them unless the technology wasn’t “foreseeably capable” of harmful behavior — including encouraging kids to commit self-harm, harm others or undertake illegal activity. The bill would have allowed the attorney general to sue companies that violated its terms. It also would have permitted kids or their parents to sue companies if children were harmed by chatbots that weren’t compliant with the law.
Child- and consumer-advocacy groups strongly backed AB 1064, likening it to other steps policymakers have taken to protect kids, such as setting standards for playground equipment and requiring childproof caps on medicine bottles. The Legislature’s passage of it came on the heels of the reports about Adam Raine’s death and those of others allegedly influenced by AI chatbots — incidents the bill itself cited as a kind of explanation for why it was needed.
“There are children that have died at this point in the real world as a result of these products,” said Adam Billen, vice president for public policy at Encode AI, an advocacy group focused on promoting AI regulations.
“Maybe AB 1064 could have saved kids today that are dead,” Billen said.
Tech-industry lobbyists say they agree on the need for safeguards for children. But they fiercely opposed AB 1064, arguing it was overly broad and contained vague, undefined or otherwise problematic terms.
One provision, for example, would have barred developers from making chatbots available to kids if it were foreseeable that they might validate kids’ beliefs or desires instead of prioritizing facts or children’s safety. But the bill didn’t define “safety” in that context, and people in the state have many different beliefs, noted Aodhen Downey, west-region policy manager for the Computer & Communications Industry Association, a trade group that represents Amazon, Apple, Meta and Google, among other tech companies.
For fear of running afoul of such provisions, AB 1064 would have prompted many chatbot developers to stop offering their technology in California, Downey said.
CCIA was “very, very happy to see that bill vetoed,” he said.
“We’ll never not oppose a ban on a service,” he said.
In his veto message, Newsom repeated some of the tech industry’s concerns, warning that the bill could lead to a “total ban” on chatbots for kids. With AI already playing a key role in society, kids need to learn how to interact with it safely, he said.
“We cannot prepare our youth for a future where AI is ubiquitous by preventing their use of these tools altogether,” Newsom said.
But the governor acknowledged that interactions with chatbots can be disturbing and dangerous. And he suggested he believes government has a role to play in ensuring that they act responsibly and take into account users’ well-being.
Newsom noted that he signed Senate Bill 243 this year to address some of those concerns. That bill also targets chatbots, requiring their operators to establish protocols for preventing the systems from generating messages encouraging self-harm or suicidal ideation.
Additionally, it would require chatbots to alert kids that the systems aren’t human and prompt them to take breaks after three hours of continuous use.
The tech industry opposed SB 243, though not as adamantly as it did AB 1064.
Meanwhile, many supporters of AB 1064 initially backed SB 243. But they withdrew their support after the bill was revised during the legislative process, believing it would no longer do much good and could compete with AB 1064 for the governor’s signature.
The bill only requires operators to take “reasonable” measures to prevent chatbots from generating sexually explicit material for kids, but it doesn’t define what counts as reasonable — a question that will likely play out in the courts, said Danny Weiss, chief advocacy officer at Common Sense Media, which advocates for protections around kids’ use of technology and media.
What’s more, the bill only requires companies to take extra steps to protect children if they know a user is a child. But many tech companies allow users to state their own age and typically do little to actually verify it, Weiss said. Although many use other signals — such as the types of groups or webpages users interact with — to infer their ages, they can claim they don’t know for certain whether a user is a child, he said.
“Everyone knows” that standard of “actual knowledge” that a user is a child “has allowed companies to evade responsibility,” Weiss said.
Even Newsom appeared to acknowledge that SB 243 doesn’t do enough to protect kids from chatbots encouraging self-harm or other dangers. In his veto message, he said he wanted to work with the Legislature to craft balanced legislation that builds on SB 243.
“The types of interactions [AB 1064] seeks to address are abhorrent, and I am fully committed to finding the right approach to protect children from these harms in a manner that does not effectively ban the use of the technology altogether,” he said.
And that’s where SB 53 might prove a useful guide. Last year, Wiener fought a bruising battle to win passage in the Legislature of SB 1047. That bill would have required developers of cutting-edge AI models to put in place safety protocols designed to prevent them from causing or leading to catastrophes, such as mass-casualty events or the development of nuclear, chemical or biological weapons.
It also would have required developers of such models to test them before releasing them to the public, and it would have allowed the attorney general to sue companies whose violation of the law led to a mass-casualty event or the imminent danger of one.
When Newsom vetoed SB 1047, he argued that it would cover models that posed little danger but conversely wouldn’t cover smaller, less powerful models that were actually dangerous. But similar to his response to AB 1064, he acknowledged in his veto message that AI does pose risks and government does have a responsibility to protect people from them.
On the same day he vetoed the bill, Newsom set up a commission to look into how the state should approach AI safety issues. Wiener — who had vowed to continue the fight for AI safety in the wake of the veto — took the commission’s recommendations and ran with them, crafting a kind of successor in SB 53.
Like SB 1047, SB 53 focuses on catastrophic risk and the developers of cutting-edge AI models. But instead of requiring safety testing and protocols and imposing liability for harm, it focuses on transparency. It mandates that model developers regularly disclose to the public and regulators what safety testing and protocols they have in place for assessing and dealing with catastrophic risks.
Those changes were enough to win Newsom’s approval.
“California has proven that we can establish regulations to protect our communities while also ensuring that the growing AI industry continues to thrive,” he said in a press release announcing his approval of the bill. “This legislation strikes that balance.”
Although they said the bill doesn’t go as far as they would like, consumer-advocacy groups cheered Newsom’s signing of SB 53 — and saw a potential path forward for a bill to protect kids from chatbots.
“We take the governor at his word that he wants to work on this,” Weiss said.
“We are prepared to get to work immediately on something the governor would support and that would be impactful,” he said.
It’s unclear exactly what such a bill would look like. The consumer advocates and tech lobbyists had few suggestions on obvious areas of agreement or compromise, although Haworth suggested the legislation could be narrowed to focus on the biggest tech companies that have the most users or largest market value. Those are the ones that pose the greatest danger due to their reach, she said.
“That’s potentially one area we could look at,” she said.
Regardless, AB 1064 author Assemblymember Rebecca Bauer-Kahan — and Haworth and the other advocates who backed the bill — vowed to fight on.
“We’re sorely disappointed that comprehensive protections for California’s children remain incomplete,” Bauer-Kahan said in a press release in response to the governor’s veto. “As children move from social media to AI, we must ensure AI is safe for our kids and not a suicide coach that can kill them.”
If you have a tip about tech, startups or the venture industry, contact Troy Wolverton at twolverton@sfexaminer.com or via text or Signal at 415.515.5594.