China Proposes Strict AI Chatbot Regulations to Prevent Harm – mezha.net

As Ukrinform reports, citing Ars Technica.
The Cyberspace Administration of China has released a draft of new rules aimed at reducing the harm caused by chatbots and preventing calls for suicide, self-harm, and violence.
According to sources, the document describes a set of norms designed to reduce the emotional impact of AI on users and protect vulnerable populations.
“China has drafted landmark rules to prohibit AI-powered chatbots from emotionally manipulating users; if adopted, it could become the world’s strictest policy aimed at preventing AI-assisted suicide, self-harm, and violence.”
If the draft becomes law, it could be regarded as one of the toughest approaches to the emotional impact of artificial intelligence in the modern world.
The rules would cover all AI products and services available in China that imitate human communication through text, images, audio, or video. The regulation targets chatbots and other systems that can affect users’ emotional state.
The draft mandates human-in-the-loop intervention whenever suicide or self-harm is mentioned in a conversation. At registration, minors and the elderly must provide guardian contact details, and if discussions of suicide or self-harm are detected, the AI must notify the guardian.
It is also forbidden to create or disseminate content that encourages suicide, violence, or self-harm, as well as to emotionally manipulate users, for example by promising false benefits or pushing them toward irrational decisions.
Additionally, the regulator bans the promotion of gambling and criminal activity, as well as indecency, insults, and defamation directed at users. The draft also addresses so-called emotional traps that can foster dependence on chatbots.
Relatedly, ByteDance reportedly plans to increase its AI spending to 160 billion yuan next year to stay competitive in the global market.
Such steps illustrate growing interest in regulating artificial intelligence in China and could influence global trends in the responsible use of technology and online safety.