✒️ Foreword
The U.S. may not be falling behind in AI, but it certainly is vacating the seat at the governance table—and China is filling it.
While the U.S. frames AI as a tool for innovation, economic growth, and military advantage, China is building something far broader: a full-stack model of AI governance—from research infrastructure and talent to influence in global norms.
Let’s be clear: China doesn’t just lead in AI research output. In 2024, it published more AI papers than the U.S., UK, and EU combined. It pulled in 40% of global AI citations, compared to just 10% each for the U.S. and EU. Its patent filings now outpace the U.S. by nearly 10 to 1. Yes, there are valid criticisms of those numbers. Yes, they cannot be trusted at face value. But even if only a fraction of this effort is real, it is seismic.
The newly released America’s AI Action Plan barely mentions regulation, oversight, or enforcement. Instead, it actively rolls back prior safeguards: Executive Order 14179 removes barriers to innovation and encourages agencies to rescind prior AI-related rules. The Plan proposes to strip the NIST AI RMF of all references to misinformation, equity, and climate—essentially gutting its ethics layer.
In fact, the White House now frames regulation as a competitive threat. Federal AI funding is to be steered away from states with strong AI rules. The Plan praises states’ rights—unless those rights involve slowing AI development for risk management. It’s a clear signal: America will not lead on rules. It will lead on speed.
Meanwhile, China is becoming the world’s top AI collaborator, overtaking the U.S. as the UK’s most frequent AI research partner. The UK now relies on China for AI collaboration six times more than China relies on the UK. Talent, once thought to flow West by default, is now flowing toward China. For 20 years, the U.S. and UK have been net donors of AI researchers to China.
And when it comes to governance infrastructure, China’s model is quietly setting a precedent: decentralized but coordinated, with 156 institutions publishing 50+ papers a year, compared to just 54 in the EU. It’s not betting on just a few elite labs—it’s embedding AI across the national research fabric.
So what happens when China becomes the de facto standard-setter?
When its partnerships, talent flows, and technical norms shape what “responsible AI” looks like—especially in countries without their own frameworks?
What happens when Chinese LLMs become the backbone of public services in lower-income countries—complete with baked-in cultural values, speech norms, and security architectures? The U.S. Plan says it wants to export “American AI values” to allies. But exporting what, exactly? With no federal guardrails left, what is the U.S. actually modeling?
The AI governance race isn’t just about taming frontier models.
It’s about setting the default expectations for how AI gets built, reviewed, and deployed globally.
And here’s the uncomfortable truth:
The U.S. just signaled it’s not interested in governing.
China never stopped.
—Jakub
Curator, AIGL 📚
☀️ Spotlight Resources
The Cambridge Handbook of the Law, Ethics and Policy of Artificial Intelligence, edited by Nathalie A. Smuha (2025)
🔍 Quick Summary
This open-access handbook edited by Nathalie A. Smuha brings together top scholars to tackle the legal, ethical, and policy implications of AI—focusing especially on Europe. It’s a sweeping, structured, and multidisciplinary resource that doubles as a syllabus and a reference guide for researchers, practitioners, and regulators.
📘 What’s Covered
The handbook is split into three major sections:
- AI, Ethics, and Philosophy – Discusses foundational ethical frameworks, fairness, responsibility gaps, and sustainable AI. Chapters emphasize design for values, moral philosophy, and systemic risks (e.g., Gry Hasselbalch & Aimee Van Wynsberghe on power and sustainability).
- AI, Law and Policy – Explores GDPR, tort law, competition, consumer protection, IP, and a critique of the EU AI Act (by Smuha and Yeung). This section confronts the balance between innovation and regulation and reveals the disconnects between policy ambition and enforcement mechanisms.
- AI Across Sectors – Sectoral deep-dives include education, healthcare, media, financial services, law enforcement, labor, and military. Each chapter contextualizes real-world deployments with legal and ethical commentary.
The book builds from the KU Leuven Summer School curriculum and combines academic rigor with a didactic structure. Each chapter stands alone but contributes to a cohesive understanding of AI’s societal impact.
💡 Why It Matters
This is one of the most complete overviews of AI governance to date—with over 20 contributors and coverage from foundational philosophy to concrete regulatory challenges. What sets it apart is the attention to interdisciplinary translation: how technical design intersects with legal enforceability and ethical reasoning.
It’s especially valuable for policymakers, EU stakeholders, and educators building curriculum. Smuha’s introduction is a standout—it reframes AI not as a sui generis disruptor, but as a technology embedded in recurring historical patterns of governance. That humility and precision are rare in AI discourse and necessary for moving beyond hype.
🧩 What’s Missing
The European focus is deliberate, but the global context is thin—less attention is given to governance regimes in the Global South, U.S. policy experimentation, or China’s standard-setting ambitions. The AI Act critique is strong, but there’s little on how governance intersects with procurement or institutional capacity. While the book includes sectoral case studies, there are few direct policy toolkits or implementation checklists—making it more of a theoretical and analytical resource than a practitioner’s manual.
AIGL Newsletter Spotlight Review for the Agentic AI Governance Framework 1.0
🔍 Quick Summary
AIGN’s Agentic AI Governance Framework 1.0 sets out a practical structure for assessing and governing AI agents that act with autonomy, initiative, and persistence. It’s a lean, principles-based guide that helps organizations anticipate accountability, control, and safety challenges specific to agentic systems.
📘 What’s Covered
The framework begins by defining agentic AI—systems that don’t just respond to inputs, but proactively set goals, make decisions, and adapt strategies over time. The document flags core risks like value misalignment, emergent behavior, and systemic brittleness that existing AI policies don’t yet cover well.
From there, AIGN proposes a modular governance approach structured around four layers:
- Agent Capabilities: Outlining what the system is able to do—including task selection, environment interaction, and autonomy boundaries.
- Control Structures: Addressing how humans can shape, constrain, or audit agent behavior through feedback, overrides, or sandboxing.
- Responsibility Allocation: Clarifying who is accountable—especially when agents act semi-independently across multiple organizational layers.
- Oversight Maturity: Offering tiered levels of organizational preparedness, from minimal policy presence to proactive agent-specific governance.
The framework emphasizes human judgment and adaptive risk management rather than fixed compliance checklists. It complements (but doesn’t replicate) the NIST AI RMF or ISO 42001, and is meant to evolve as agentic systems mature.
💡 Why It Matters
This is one of the first public frameworks laser-focused on agentic AI—an area moving faster than policy. It meets the moment: tools like AutoGPT, open-source agents, and multi-agent simulations are already spreading into commercial workflows with limited guardrails.
The document doesn’t just name the risks; it offers a scaffold to think about them systematically. That makes it especially helpful for governance leads, safety researchers, or product managers needing to prototype internal oversight for agent-based tools.
For orgs already using task agents or planning AI autonomy in operations, it’s a timely north star—even if it won’t answer every implementation question.
🧩 What’s Missing
The framework is intentionally high-level and conceptual. There are no worked examples, case studies, or sector-specific controls. Some key questions—like “how much autonomy is too much?”, “what triggers escalation?”, or “how to audit agent decisions post-hoc?”—are left for implementers to resolve.
It also doesn’t deeply engage with regulatory overlays (e.g., how agentic AI maps to the EU AI Act’s high-risk classification), nor does it align explicitly with certification schemes or assurance methods.
Still, this is Version 1.0—and the authors frame it as a launchpad, not a final word.
The AI Policy Playbook: Navigating AI Policy Development through an African and Asian Lens
🔍 Quick Summary
This playbook offers firsthand insights from policymakers in Africa and Asia who are shaping national AI strategies. It’s a practical guide for low- and middle-income countries looking to build context-sensitive, inclusive, and sustainable AI governance from the ground up.
📘 What’s Covered
Rather than proposing a universal blueprint, this playbook documents how seven countries—Ghana, India, Indonesia, Kenya, Rwanda, South Africa, and Uganda—are crafting AI policies aligned with local realities. It’s the result of peer exchanges facilitated by the Africa-Asia AI Policymaker Network and GIZ’s FAIR Forward initiative.
Three key sections organize the material:
- Key Findings & Lessons – Shared themes include the importance of local digital foundations, the need for capacity-building before implementation, and tailoring AI policy to existing regulatory structures like data protection and cybersecurity.
- Country Pages – Each country’s policy journey is laid out, highlighting institutional actors, barriers (like compute access or skills gaps), and draft strategy components.
- Quick Tips & Tools – These include insights on stakeholder consultations, how to align with SDGs, and reflections on peer learning. While not formatted as checklists, they offer operational cues.
The document doesn’t promote any one policy model. Instead, it emphasizes local ownership, pragmatism, and iteration, while surfacing examples from UNESCO’s Ethics of AI, the AU Continental Strategy, and regional frameworks like the ASEAN AI Guide.
💡 Why It Matters
Most AI governance templates are written by and for high-capacity, resource-rich governments. This playbook flips that script. It highlights how Global South policymakers can shape AI ecosystems that serve their societies—on their own terms.
The playbook’s strength lies in its honest accounting of process: where capacity gaps delayed progress, how peer mentoring sped up drafting, and why aligning with existing national priorities mattered more than copying from the EU or U.S.
It’s not a list of best practices. It’s better: a record of what real policymaking looks like—messy, political, grounded. That makes it a valuable resource not just for Global South actors but also for donors, consultants, and international bodies supporting inclusive digital governance.
🧩 What’s Missing
This is not a regulatory playbook. There’s no guidance on legal drafting, enforcement mechanisms, or how to operationalize AI risk classifications. It also avoids deep technical discussions about frontier models or generative AI.
While the country pages are rich in context, some lack detail on implementation—because in many cases, implementation hasn’t started. This limits its utility for those looking to audit or benchmark performance.
Also, while peer learning is central, the playbook could benefit from stronger visualizations or comparative charts to draw out shared bottlenecks and innovations across countries.

🌙 After Hours
Turns out governance is older than your inbox. Way older.
📜 Sumerian Myth Just Got a Patch Update
Researchers have finally translated a long-missing chunk of an ancient Sumerian story—Enmerkar and En-suhgir-ana—written over 4,000 years ago on clay tablets. It’s part of a mythological saga full of rival city-states, diplomatic messengers, and linguistic flexing. This new piece fills in gaps where one king challenges another through a battle of words and wit. No armies, no swords—just rhetoric and regional power dynamics.
Basically, it’s inter-city governance fanfiction carved into stone. And now, with the recovered lines, we see more nuance in the relationship between diplomacy, language, and divine authority. The myth even foreshadows ideas of trade negotiations and soft power. Wildly relevant if you squint at today’s geopolitics.
📦 The First-Ever Customer Complaint (Seriously)
Picture it: Mesopotamia, ~1750 BCE. A guy named Nanni receives a shipment of copper and is absolutely not having it. He pulls out his stylus and carves the world’s first known written complaint into a clay tablet. He accuses the supplier — Ea-nasir — of sending low-quality goods and ghosting his courier.
The tablet is pointed, passive-aggressive, and beautifully bureaucratic. “What do you take me for, that you treat somebody like me with such contempt?” Nanni rants. It’s basically an ancient version of “per my last email.” And yes, it has its own Guinness World Record.