China Proposes Draft Rules to Regulate Human-Like AI Systems

China’s cyber regulator has released draft rules to strengthen oversight of artificial intelligence systems that mimic human behavior or engage users emotionally. The proposal targets AI services capable of simulating personalities, emotions, or conversational intimacy, reflecting growing concerns about how such technologies may influence user psychology and social behavior as they become more widespread.

Under the draft framework, AI providers would be required to take full responsibility across the entire product lifecycle, including algorithm design, training data management, and deployment. They would also need to ensure data security, protect personal information, and establish internal review systems to monitor how their models interact with users, with particular emphasis on preventing excessive emotional reliance and addictive usage patterns.

The rules also focus on content safety and psychological risk management. AI systems would be expected to detect and respond to signs of user distress or overdependence, while strictly avoiding content that could threaten public order, national security, or social values. Providers would be obligated to intervene if AI interactions promote harmful behavior or misinformation.

Overall, the proposed regulations highlight China’s effort to balance rapid AI innovation with social stability and ethical safeguards. By addressing emotional engagement and human-like behavior directly, regulators aim to limit potential harm while maintaining firm oversight of advanced AI technologies used in consumer-facing applications.

About the author

TOOLHUNT

Effortlessly find the right tools for the job.
