AI Governance Library

THE MODEL ARTIFICIAL INTELLIGENCE LAW (MAIL v3.0)

A detailed legislative framework proposing oversight, obligations, and incentives for AI systems—covering everything from foundation models and open-source exemptions to risk-based licensing and whistleblower protection.

🔍 Quick Summary

“The Model Artificial Intelligence Law (MAIL v3.0)” is a comprehensive draft legislative framework created by AIGVERSE and coordinated through the Chinese Academy of Social Sciences. It proposes a detailed regulatory regime for the governance, promotion, and oversight of AI in China and beyond. Geared toward policymakers, regulators, and legal scholars, it merges principles from international best practices with specific enforcement mechanisms, ranging from licensing systems to ethics reviews. Unlike aspirational ethical principles or high-level strategy documents, MAIL offers a full legislative skeleton—down to licensing, penalties, and whistleblower protections.

📘 What’s Covered

Foundational Principles – Includes 11 AI governance principles (e.g., human-centricity, fairness, sustainability, ethics, accountability)

Promotion & Incentives – Covers tax breaks, R&D funding, data access rules, and free trade zone exemptions

Oversight Regime – Proposes a Negative List with licensing conditions, national registries, and differentiated obligations for developers vs. providers

Governance Obligations – Defines explainability, security assessments, open-source protections, and ethics review mechanisms

Institutional Structure – Assigns oversight to the National AI Administrative Authority, supported by ethics committees and backed by sector-specific enforcement powers

Liability System – Spells out administrative penalties, tort rules, IP safe harbors for generative AI, and whistleblower protections

Special Provisions – Includes rules for foundational models, terminal device permissions, and cross-border applicability

💡 Why It Matters

MAIL v3.0 signals the maturation of Chinese AI governance thinking—blending proactive state support with legally enforceable obligations. It’s especially notable for its balance of promotion and control: tax incentives sit alongside mandatory audits; foundation model developers are rewarded but also face ethics reviews and transparency duties. It’s also one of the first draft laws to spell out open-source exemptions, deep synthesis labeling, and granular explainability rights. For regulators outside China, this can act as a benchmark for sector-specific rulemaking. For multinational AI firms, it’s a preview of what compliance in China could soon require.

🧩 What’s Missing

  • Enforcement mechanisms for international cooperation are vague
  • Lacks clarity on how the law aligns with existing Chinese cybersecurity and data laws
  • Some terminology (e.g., “trusted data,” “reasonable AI fairness”) would benefit from standardization
  • No clear implementation timeline—draft law awaits formal legislative process

Best For:

Legislators, regulators, policy advisors, and legal analysts studying AI-specific lawmaking, especially in Asia or in cross-border regulatory planning.

Source Details:

AIGVERSE, The Model Artificial Intelligence Law (MAIL v3.0), June 2025. Coordinated by Hui Zhou (CASS).

Thank you to the drafting team for producing one of the most detailed AI legal frameworks to date.

About the author
Jakub Szarmach

AI Governance Library

Curated Library of AI Governance Resources
