✒️ Foreword
AI governance is having its “cloud security” moment: everyone’s finally agreeing on the fundamentals, even as the implementation details keep splintering. This issue’s spotlight resources circle the same problem from different altitudes—crosswalk, operating system, and architecture—and the overlaps are the point.
Start with the Global AI Governance Alignment Map: it’s the clearest reminder that governance isn’t a pile of checklists; it’s a set of repeatable anchors. Risk management, transparency, and security-by-design keep reappearing—whether you’re reading a law, a framework, or a voluntary code. The map also names what’s driving real-world fatigue: the friction lives in enforcement and assurance expectations, and in the messy mechanics of incident reporting. The pragmatic answer here is evidence portability: build one governance “file” you can reuse, map, and defend.
Then Understanding ISO/IEC 42001 brings that “file” to life as a management system. It treats AI oversight as continuous improvement, not a launch gate—leadership accountability, impact assessments, monitoring, corrective actions. It also makes the certification angle feel less abstract: a trust signal that your controls are not just written down, but operating across the lifecycle and supply chain.
Architectures of Global AI Governance zooms out further and asks why all of this still feels fragmented even when everyone agrees in principle. The answer: regime complexity. Overlapping institutions, competing mandates, and fast-moving technical change create governance that’s inherently non-linear. Coordination won’t come from one perfect framework; it comes from designing for the mess.
And the thread tying it all together is security reality: you don’t invent “new security” for GenAI—you extend fundamentals to new behaviors, new flows, and new failure modes. Which leaves the real question: if governance is now about portable proof, are you building documents—or building a system that can withstand the next incident, audit, or border crossing?
— Kuba
Curator, AIGL 📚
☀️ Spotlight Resources

Global AI Governance Alignment Map
What it is: A November 2025 Responsible AI Trust flagship brief (“Global AI Governance Alignment Map”) mapping how major AI laws, standards, and voluntary codes align and diverge across jurisdictions.
Why it’s worth reading: Instead of treating governance as a pile of separate checklists, the brief argues the world is converging on three “anchors”: risk management, transparency, and security-by-design—showing up across the EU AI Act, NIST AI RMF, and ISO/IEC 42001 (and cybersecurity overlays like NIS2 and the Cyber Resilience Act). It also calls out where things still split sharply: assurance/enforcement, clarity vs. flexibility, and incident reporting thresholds and formats—practical reasons teams feel “compliance fatigue.” A concrete takeaway is to build a “portable AI governance file” (risk register, evaluation results, incident logs, AI-BOM, provenance documentation) as your cross-border evidence pack; a minimal sketch of what that might look like follows below.
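If you want to make the “portable file” idea concrete: the brief names the artifact categories but not a format, so here is a minimal, purely illustrative Python sketch of an evidence-pack manifest. The field names, JSON layout, and example entries are my assumptions, not anything prescribed by the brief or by any standard.

```python
# Illustrative sketch only. The artifact categories come from the brief
# (risk register, evaluation results, incident logs, AI-BOM, provenance);
# everything else here is an assumed, hypothetical structure.
import json
from dataclasses import dataclass, field, asdict

@dataclass
class EvidencePack:
    system_name: str
    risk_register: list = field(default_factory=list)       # identified risks + mitigations
    evaluation_results: list = field(default_factory=list)  # eval/test runs with dates
    incident_log: list = field(default_factory=list)        # incidents + responses
    ai_bom: list = field(default_factory=list)              # models, datasets, dependencies
    provenance: list = field(default_factory=list)          # data/model lineage documentation

    def export(self, path: str) -> None:
        """Serialize the pack once, so the same evidence can be mapped onto
        EU AI Act, NIST AI RMF, or ISO/IEC 42001 requests rather than rebuilt."""
        with open(path, "w") as f:
            json.dump(asdict(self), f, indent=2)

pack = EvidencePack(system_name="support-chatbot-v3")
pack.risk_register.append(
    {"risk": "prompt injection", "mitigation": "input filtering", "owner": "appsec"}
)
pack.ai_bom.append(
    {"component": "base-model-x", "version": "2.1", "license": "proprietary"}
)
pack.export("evidence_pack.json")
```

The point isn’t this particular format; it’s that one structured, exportable artifact lets you maintain evidence once and map it many times.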
Best for: Compliance, security, and product leaders who need one crosswalk view of “what applies where” and what’s likely to become baseline next.

Understanding ISO/IEC 42001
What it is: A short guide for Australian business from Standards Australia and CSIRO’s National Artificial Intelligence Centre explaining AS ISO/IEC 42001:2023—an AI management system standard for establishing, implementing, maintaining, and continually improving AI governance (pp. 1, 5, 8).
Why it’s worth reading: It frames 42001 as a practical “checks and balances” management-system approach to AI—covering impact assessments, controls/monitoring, and continuous improvement, rather than treating AI as a one-time launch checklist (pp. 8–9). It also makes the certification angle concrete: third-party certification is positioned as a “trustmark” that signals baseline responsible AI processes across the supply chain (pp. 12–13). The guide is especially useful if you’re trying to translate “trustworthy AI” into operational governance and accountability, including guardrails for bias, privacy, and decision autonomy (pp. 6, 10–12, 18).
Best for: Governance, risk, compliance, and product leaders who need an ISO-style way to structure AI oversight—and explain it clearly to executives, customers, or auditors (pp. 5, 11–13).

Architectures of Global AI Governance
Governing AI in a World of “Regime Complexity”
What it is: Architectures of Global AI Governance: From Technological Change to Human Choice (Oxford University Press, 2025) is an open-access book by Matthijs M. Maas on how global AI governance can be designed to cope with fast-moving technical, legal, and geopolitical change.
Why it’s worth reading: Maas argues that current AI governance debates often stay reactive and fragmented, leaning on “shallow analogies” to past institutions—risking delayed or ineffective coordination. The book’s core move is to frame global AI governance through three lenses—sociotechnical change, governance disruption, and regime complexity—so proposals can better fit the messy reality of overlapping institutions and mandates.
Best for: Policy teams, governance leads, and researchers who need a practical way to think about fragmented global AI initiatives and how they might evolve (or be steered) over time.