A recent paper authored by legal scholars along with AI-industry researchers from DeepSeek and Alibaba argues that China’s evolving AI regulatory framework — often criticized abroad — is actually a well-designed, “innovation- and openness-friendly” system. The authors claim that rather than stifling development, the regulations have helped foster a robust ecosystem around open-source AI, enabling companies like DeepSeek and Alibaba to build globally competitive AI models.
At its core, the framework relies on a patchwork of existing rules — including pre-deployment filing requirements for AI models and content-safety self-assessments — alongside exemptions for open-source models and AI used in scientific research. The authors present this as a pragmatic, flexible governance approach: instead of a single sweeping law, China is iterating regulations to match evolving risks and technologies.
But the authors also acknowledge shortcomings. The paper notes that when an AI model filing is rejected, developers often receive no meaningful feedback, making it hard to improve compliance. They further warn that broad exemptions for open-source AI could permit the deployment of "frontier models" carrying substantial risk unless transparency requirements and evidence-based safety checks are scaled up.
Overall, the authors argue that China has evolved from an AI-policy follower to a leader in AI governance — balancing openness and innovation with emerging safety and oversight concerns. At the same time, they call for stronger accountability mechanisms and, eventually, a unified national AI law to provide clearer, consistent regulation.