The legislation is aimed at protecting children from harmful interactions with AI chatbots.
Pennsylvania’s Senate on Tuesday passed legislation that would add new protections for minors using artificial intelligence chatbots, requiring those systems to disclose that they are not human and to implement safeguards against suicide, harm to oneself or others, and sexually explicit content.
The bill passed 49-1, sending it to the state House for consideration.
Senate Bill 1090, authored by state Sen. Tracy Pennycuick, R-Montgomery/Berks, would require artificial intelligence chatbots that could otherwise be mistaken for real people to carry disclosures identifying that users are interacting with AI and not a person.
“I’m hearing parents say, actually, I’m really concerned that these chatbots are becoming a substitute for my child interacting in a healthy relationship with their peers, with their siblings, with their parents, with their grandparents. So there’s, I think, more concern than the positives behind it,” Pennycuick said in an interview with WGAL.
Supporters of the bill pointed to cases across the country in which parents have alleged self-harm and even suicide may have been connected to interactions with AI chatbots.
Charles Palmer, an associate professor of interactive media at Harrisburg University, said many chatbots do not check whether the user is a minor or an adult, which presents a challenge.
“This has been one of the biggest problems that I’ve had with the entire platform over the last two years, and the fact that there aren’t really good safeguards in place for this,” he said.
“We think about individuals who would very easily hand an iPad or an app off to a young child to be entertained. But we now have these devices that you can hand off, and you have no idea the types of conversations or communications that are happening.”
Palmer said many chatbots will discourage illegal activity but questioned whether enough is being done to protect people who may be vulnerable.
“Maybe I’m in the midst of a mental health crisis, and I’m not sure of how I’m responding to different stimuli, and maybe I mentioned about ending my life, and the bot does not tell me not to. It maybe encourages me to dig into those feelings a bit more,” he said.
With the federal government eyeing a framework for AI regulation and policy that may seek to override state laws, Pennycuick said she is unconcerned and that Pennsylvania should move forward on protecting children and teens.
“We have always put Pennsylvania first. I’m not going to wait for the federal government to get on board,” she said.