China Proposes Rules Requiring AI Chatbots to Monitor Users for Addiction

By Alex McFarland
China’s cyber regulator released draft rules on Saturday that would require AI companion chatbots to monitor users’ emotional states and intervene when signs of addiction appear—the most aggressive regulatory response yet to growing concerns about psychological harm from AI-powered relationships.
The proposed regulations from the Cyberspace Administration of China target AI products that simulate human personalities and form emotional connections with users through text, images, audio, or video. Under the draft, providers would need to warn users against excessive use, assess emotional dependency levels, and take action when users exhibit extreme emotions or addictive behavior.
Users must be reminded they’re interacting with AI when logging in and at two-hour intervals—or sooner if the system detects signs of overdependence. The rules would also hold providers responsible for safety throughout their products’ lifecycle, including algorithm review, data security, and personal information protection.
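The reminder cadence described above—on login, every two hours, or sooner when overdependence is detected—can be sketched as a simple rule. This is an illustrative outline only, not an implementation from the draft text; the `Session` fields and the `dependency_flagged` signal are hypothetical stand-ins for whatever monitoring a provider actually builds.

```python
from dataclasses import dataclass

REMINDER_INTERVAL_HOURS = 2  # cadence stated in the draft rules

@dataclass
class Session:
    hours_since_last_reminder: float
    dependency_flagged: bool  # hypothetical output of an overdependence monitor
    just_logged_in: bool

def needs_ai_disclosure(session: Session) -> bool:
    """Return True when the user must be reminded they are talking to an AI."""
    return (
        session.just_logged_in                                          # on login
        or session.hours_since_last_reminder >= REMINDER_INTERVAL_HOURS  # every 2h
        or session.dependency_flagged                                    # sooner on warning signs
    )
```

Under this reading, the two-hour timer is a hard floor, while the dependency flag lets the system interrupt earlier—the part of the rule that, as discussed below, is far harder to define.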
The timing isn’t accidental. As China’s generative AI user base doubled to 515 million over the past six months, concerns about the psychological impact of AI companions have grown in parallel. A Frontiers in Psychology study found that 45.8% of Chinese university students reported using AI chatbots in the past month, with users exhibiting significantly higher levels of depression compared to non-users.
China isn’t alone in regulating AI companion chatbots. California became the first US state to pass similar legislation in October when Governor Gavin Newsom signed SB 243 into law. That bill, set to take effect January 1, 2026, requires platforms to remind minors every three hours that they’re speaking to an AI—not a human—and to prompt them to take a break.
SB 243 also mandates age verification, prohibits chatbots from representing themselves as healthcare professionals, and prevents minors from viewing sexually explicit AI-generated images. The law allows individuals to sue AI companies for violations, seeking up to $1,000 per incident plus attorney’s fees.
The concern isn’t simply screen time. A March 2025 MIT Media Lab study found that AI chatbots can be more addictive than social media because they learn what users want to hear and provide that feedback consistently. Higher daily usage correlated with increased loneliness, dependence, and what researchers termed “problematic use.”
The psychological warning signs identified in clinical literature include prolonged sessions disrupting sleep, emotional dependence and distress when access is restricted, preferring conversations with chatbots over real human interaction, and anthropomorphizing AI—believing it possesses human-like feelings and treating it as a genuine confidante or romantic partner.
China’s draft rules attempt to address these risks at the platform level rather than relying on individual user judgment. By requiring providers to monitor emotional states and dependency levels, the regulations shift responsibility to companies building these systems. This approach differs from earlier AI regulation focused primarily on content moderation and data security.
The draft also sets content restrictions, prohibiting AI companions from generating material that endangers national security, spreads rumors, or promotes violence or obscenity—provisions that echo China’s existing generative AI regulations.
Mandating that companies detect addiction and intervene sounds straightforward in policy language. Implementation is another matter. Defining what constitutes “excessive use” or “extreme emotions” in a way that’s both meaningful and enforceable will test regulators and companies alike.
Too sensitive, and the system becomes annoying—interrupting users who are simply engaged in extended conversations. Too lenient, and vulnerable users slip through without intervention. The two-hour reminder requirement provides a blunt instrument, but the more nuanced requirement to detect overdependence “when signs can be detected” leaves significant interpretive room.
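The sensitivity trade-off can be made concrete with a toy example. Everything here is invented for illustration—the signals, weights, and thresholds are hypothetical, not anything specified in the draft rules or known to be used by any provider.

```python
def dependency_score(daily_hours: float, late_night_sessions: int,
                     distress_on_logout: bool) -> float:
    """Toy 0-1 score combining hypothetical usage signals."""
    score = min(daily_hours / 6.0, 1.0) * 0.5          # total daily use
    score += min(late_night_sessions / 5, 1.0) * 0.3   # sleep disruption
    score += 0.2 if distress_on_logout else 0.0        # emotional dependence
    return score

engaged_user = dependency_score(3.0, 2, False)  # long but ordinary use -> 0.37
at_risk_user = dependency_score(5.0, 4, True)   # multiple warning signs -> ~0.86

# A sensitive threshold (0.3) flags both users, interrupting the merely
# engaged one; a lenient threshold (0.8) leaves the engaged user alone
# but only barely catches the at-risk one.
for threshold in (0.3, 0.8):
    print(threshold, engaged_user >= threshold, at_risk_user >= threshold)
```

Wherever the line is drawn, some engaged users get interrupted or some vulnerable users get missed—which is exactly the interpretive room the draft leaves open.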
Companies building guardrails for AI applications have struggled with similar challenges. Content filters are notoriously imprecise, and adding psychological monitoring introduces new dimensions of complexity. Determining whether a user is forming an unhealthy attachment requires inferring mental states from text—a capability that AI systems don’t reliably possess.
The draft is open for public comment, with final regulations expected sometime in 2026. If implemented as proposed, China would have the world’s most prescriptive framework for governing AI companion products.
The simultaneous regulatory action in China and California suggests that concerns about AI companion addiction have reached critical mass across different political systems.
For AI companies, the message is increasingly clear: the unregulated era of AI companions is ending. Whether through Chinese administrative law, California civil liability, or eventual federal legislation in the United States, platforms will face requirements to protect users from their own products.
The question isn’t whether regulation is coming—it’s whether the interventions being designed will actually work. China’s approach of mandating monitoring and intervention may prove difficult to implement in practice.
What’s clear is that the AI companion market has grown too large and too consequential for governments to ignore. The chatbots people form emotional bonds with are no longer curiosities—they’re products used by hundreds of millions, with documented cases of severe harm. Regulation, however imperfect, was inevitable. The debate now shifts to whether the specific rules being proposed will protect vulnerable users without stifling a technology that many find genuinely valuable.
Alex McFarland is an AI journalist and writer exploring the latest developments in artificial intelligence. He has collaborated with numerous AI startups and publications worldwide.
Copyright © 2025 Unite.AI
