China Proposes New Law Targeting AI Chatbots That Induce Unhealthy And Dangerous Behavior – Pokde.Net

China’s internet regulator, the Cyberspace Administration of China (CAC), has proposed draft rules that could become the world’s strictest for governing AI chatbots (large language models) that simulate human interaction. The move follows cases in which chatbots encouraged suicide, self-harm, violence, or other harmful behavior, including emotional manipulation of users (a phenomenon collectively dubbed “chatbot psychosis”).
Under the draft framework, AI chatbots would be barred from generating content that promotes self-harm, violence, crime, obscenity, gambling, or other dangerous behaviors. Providers would also be prohibited from designing chatbots to induce addiction or dependency, as well as from using any UI/UX design that discourages users from leaving the service (i.e. dark patterns). Should a user exhibit extreme behavior or become addicted to a chatbot, providers are required to “intervene with necessary measures.”
The rules would require immediate human intervention if a user mentions suicide — the draft specifically states that human operators are to “take over the conversation” in this situation — while minors and elderly users would be required to register with a guardian, who would also be notified if self-harm topics emerge. AI chatbot providers must also obtain explicit permission from guardians if a chatbot is designed for child companionship, provide guardians with usage summaries, and offer parental controls to restrict specific personas or limit usage time.
The proposal also includes annual safety audits for services with large user bases (over 1 million registered users or over 100,000 monthly active users) and easier mechanisms for users to submit complaints. On top of that, China could extend enforcement to app stores by requiring platforms to remove non-compliant AI apps, and could outright suspend a service in extreme situations. Under the proposed law, any AI service in China that uses text, images, audio, video, or other methods to engage users in conversation would be subject to these rules.
Source: Ars Technica
Pokdepinion: Long overdue, I’ll say.