⚡ Quick Summary
This landmark 2024 report provides one of the most comprehensive overviews to date of China’s approach to AI safety governance, with a particular focus on the regulation of frontier AI models. It identifies growing institutional interest in safety issues, a proliferation of AI ethics frameworks, and a recent pivot toward technical safety and alignment research. At the same time, it highlights China’s fragmented regulatory architecture, vague legal definitions, and a gap between policy ambitions and enforcement capacity. The key takeaway is that while China recognizes the importance of AI safety, particularly for national security and industrial competitiveness, its current governance system remains immature in coordination, technical oversight, and international alignment.
🧩 What’s Covered
The report is structured around three pillars:
- Governance Institutions and Frameworks – It maps the actors involved in China’s AI safety governance, including government agencies (e.g., CAC, MIIT), standardization bodies, and research institutions such as CAICT and CASIA. The authors trace the emergence of regulations such as the “Generative AI Interim Measures” and the “Algorithm Recommendation Provisions,” noting their rapid rollout but limited depth.
- Research and Technical Safety Capacity – The authors document a recent surge in AI safety research, mostly centered on robustness, alignment, and interpretability. Several government-funded labs and elite universities are expanding work on adversarial attacks, red teaming, and benchmark evaluations, though technical standards remain inconsistent.
- Industry Practice and Compliance Trends – The report reviews how leading Chinese AI firms (e.g., Baidu, Alibaba, iFLYTEK) integrate safety compliance, especially through data filtering, model pre-alignment, and internal risk audits. These practices are largely reactive to regulation, and their effectiveness is difficult to verify independently.
Notably, the timeline graphic on page 4 charts the acceleration of policy activity since 2021, showing more than 15 distinct governance measures introduced within three years. The table on page 12 categorizes AI safety research themes and lists the Chinese institutions active in each.
💡 Why It Matters
This report is essential for understanding China’s position in the global AI governance landscape. With frontier AI capabilities scaling rapidly, China’s choices on safety regulation will significantly shape international risk trajectories. The document dispels the myth that China ignores AI safety; on the contrary, its approach is increasingly institutionalized, just not yet operationally mature. For policymakers, this offers a nuanced picture: engagement with Chinese institutions on AI safety is possible and increasingly relevant, but expectations should be calibrated to the country’s legal and bureaucratic realities.
❓ What’s Missing
- The report gives little analysis of how China’s AI safety efforts relate to its geopolitical strategy or military-civil fusion goals.
- The role of public input and civil society is underexplored, possibly due to data scarcity or political sensitivities.
- Enforcement challenges are acknowledged, but the report avoids specifics on how or whether non-compliant firms are penalized.
- There is limited comparison with international standards (e.g., the NIST AI Risk Management Framework or the OECD AI Principles), which could help contextualize China’s regulatory stance.
👥 Best For
- AI governance researchers comparing regulatory systems
- Policy advisors seeking to engage Chinese counterparts
- Technical alignment researchers exploring global collaboration
- Risk analysts tracking the safety-readiness of AI power centers
📄 Source Details
- Title: State of AI Safety in China
- Authors: Zeng Yi, Zhang Zikai, et al.
- Institution: Institute of Automation, Chinese Academy of Sciences (CASIA)
- Published: 2024
- Language: English
- Pages: 24
📝 Thanks to
This review is based on the work of Zeng Yi and the research team at CASIA. Their ongoing efforts to increase transparency around China’s AI policy landscape are a critical contribution to global safety dialogues.