✒️ Foreword

There’s a quiet assumption sitting underneath much of today’s AI governance work: that somewhere out there, there’s a framework that will finally “solve it.”
Pick the right standard. Implement the right controls. Follow the right model.
And things will fall into place.
But the more you look across how organizations actually manage AI risk, the harder it is to sustain that belief. What emerges instead is fragmentation—not as a failure, but as a defining feature. One framework focuses on risk documentation. Another on management systems. Another on regulatory capability. Yet another maps vendors trying to fill the gaps left by all the others.
Each one is coherent on its own. None is sufficient in isolation.
That’s the uncomfortable reality: AI governance is not a single system you adopt—it’s an ecosystem you assemble. And that ecosystem is inherently uneven. Different teams operate at different levels of maturity. Different risks surface at different stages. Different tools solve different parts of the problem.
The real work, then, isn’t choosing the “best” framework.
It’s learning how to combine imperfect ones without creating blind spots in between.
Because fragmentation doesn’t just create complexity—it creates seams. And those seams are exactly where risk tends to hide.
So the question isn’t whether we can unify AI governance into one model.
It’s whether we’re getting better at navigating the gaps between the frameworks we already have.
— Kuba
Curator, AIGL 📚
☀️ Spotlight Resources

Managing AI Risk with Four Practical Building Blocks
What it is: A 2024 Cloud Security Alliance framework outlining how organizations can manage risks in AI/ML models through structured practices and documentation.
Why it’s worth reading: The paper breaks model risk management into four concrete pillars (Model Cards, Data Sheets, Risk Cards, and Scenario Planning) and shows how they work together as a continuous feedback loop to identify, assess, and mitigate risks. It moves beyond abstract governance by detailing specific risk sources like biased data, hallucinations, and operational failures, and by connecting them to actionable tools (e.g., documenting datasets or simulating “what if” misuse scenarios); a toy sketch of that loop follows this entry. A key takeaway: effective AI governance is not a one-time assessment but an iterative process combining documentation, testing, and monitoring.
Best for: AI governance professionals, risk and compliance teams, and ML practitioners looking for a structured but practical way to operationalize model risk management across the lifecycle.
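
To make the feedback loop concrete, here is a minimal Python sketch of how the four building blocks might hang together. Every class name, field, and the review logic are illustrative assumptions, not the CSA paper’s actual schema.

```python
# Illustrative only: a toy encoding of the paper's four building blocks.
# All class and field names are hypothetical, not the CSA's schema.
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """The model itself: purpose, intended use, known limits."""
    name: str
    intended_use: str
    known_limitations: list[str] = field(default_factory=list)

@dataclass
class DataSheet:
    """The training data: provenance, collection method, known biases."""
    source: str
    collection_method: str
    known_biases: list[str] = field(default_factory=list)

@dataclass
class RiskCard:
    """One identified risk and its mitigation status."""
    description: str
    severity: str  # e.g. "low" / "medium" / "high"
    mitigation: str
    status: str = "open"

@dataclass
class Scenario:
    """A 'what if' misuse or failure scenario to walk through periodically."""
    premise: str
    expected_impact: str

def review_cycle(card: ModelCard, sheet: DataSheet,
                 risks: list[RiskCard],
                 scenarios: list[Scenario]) -> list[RiskCard]:
    """One pass of the loop: scenarios and documented data biases feed
    new RiskCards, which drive the next round of assessment and mitigation."""
    new_risks = [RiskCard(f"{card.name}: scenario '{s.premise}'",
                          "medium", "to be assessed") for s in scenarios]
    new_risks += [RiskCard(f"{sheet.source}: bias '{b}'",
                           "high", "review sampling") for b in sheet.known_biases]
    return risks + new_risks
```

The detail worth noticing is that the register is an output of each cycle, not a static artifact: scenarios and documented biases keep generating new Risk Cards, which is what makes the process iterative rather than a one-time assessment.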

Making Sense of ISO 42001 (Without the Jargon)
What it is: A 2026 guide by Advisera explaining ISO/IEC 42001:2023 — an international standard for managing AI systems through a structured “Artificial Intelligence Management System” (AIMS).
Why it’s worth reading: This guide breaks down ISO 42001 into practical terms, showing how AI governance is fundamentally about risk assessment and control design. It walks through real examples (e.g., chatbot risks like hallucinations or data leakage) and explains how organizations should identify risks across company, user, and societal levels, then apply controls accordingly; a toy risk-register sketch follows this entry.
It also clarifies a key insight: ISO 42001 doesn’t just define compliance—it operationalizes governance through policies, roles, and continuous monitoring, making AI systems more “trustworthy” in practice.
Best for: AI governance professionals, compliance leads, and legal teams looking for a structured, implementation-oriented view of ISO 42001 without diving straight into the standard text.
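
As a rough illustration of that three-level risk view, here is a toy risk register in Python. The level names mirror the guide’s framing and the entries echo its chatbot example, but the structure itself is an invented sketch, not something defined by ISO 42001.

```python
# Hypothetical sketch of a three-level AI risk register. The levels follow
# the guide's framing; everything else here is invented for illustration.
from dataclasses import dataclass

LEVELS = ("company", "user", "societal")

@dataclass
class AIRisk:
    system: str       # which AI system, e.g. a customer-facing chatbot
    level: str        # one of LEVELS
    description: str
    control: str      # the control applied in response

    def __post_init__(self):
        if self.level not in LEVELS:
            raise ValueError(f"unknown risk level: {self.level}")

# A minimal register for a support chatbot, echoing the guide's examples.
register = [
    AIRisk("support-chatbot", "user",
           "Hallucinated answers mislead customers",
           "Ground responses in a vetted knowledge base; human escalation"),
    AIRisk("support-chatbot", "company",
           "Conversations leak internal data",
           "Input filtering and output redaction"),
    AIRisk("support-chatbot", "societal",
           "Systematic bias against certain user groups",
           "Periodic fairness audits with documented results"),
]

for risk in register:
    print(f"[{risk.level}] {risk.description} -> {risk.control}")
```

The pairing is the point: every identified risk carries the control applied in response, which is the risk-assessment-plus-control-design move the guide describes.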

Can Your Regulator Handle AI? (Turing Institute Framework)
What it is: A 2025 framework by the Alan Turing Institute and the UK Department for Science, Innovation and Technology that helps regulators assess and build their capability to govern AI effectively.
Why it’s worth reading: The report moves beyond abstract AI governance principles and breaks regulation into something actionable: 28 concrete activities across six lifecycle stages (from agenda-setting to policy updates). It pairs this with six capability factors, such as legal authority, technical infrastructure, and organisational culture, to show what “good” looks like in practice. The included self-assessment tool adds real utility, allowing organisations to score their readiness and identify capability gaps using both quantitative ratings and qualitative evidence; a toy scoring sketch follows this entry.
Best for: Regulators, policy teams, and AI governance professionals who need a structured way to evaluate readiness, not just to design rules but to implement and sustain them across complex systems.
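
To give a feel for how such a self-assessment might work, here is a toy scorer in Python. Three of the factor names below come from the report as described above; the other three, the 1-5 scale, and the aggregation are assumptions made for illustration, not the report’s actual methodology.

```python
# Toy capability self-assessment. Ratings, scale, and aggregation are
# invented; only the first three factor names come from the report.
from statistics import mean

FACTORS = [
    "legal authority",
    "technical infrastructure",
    "organisational culture",
    "skills and expertise",    # assumed factor name
    "funding and resources",   # assumed factor name
    "external coordination",   # assumed factor name
]

def assess(ratings: dict[str, int], evidence: dict[str, str]) -> None:
    """Score each factor 1-5 and flag gaps: a low rating, or a rating
    with no qualitative evidence recorded behind it."""
    for factor in FACTORS:
        score = ratings.get(factor, 0)
        note = evidence.get(factor, "no evidence recorded")
        gap = " <-- capability gap" if score < 3 or factor not in evidence else ""
        print(f"{factor:26s} {score}/5  ({note}){gap}")
    overall = mean(ratings.get(f, 0) for f in FACTORS)
    print(f"\noverall readiness: {overall:.1f}/5")

# Example: strong mandate, weak infrastructure, most factors unrated.
assess(
    ratings={"legal authority": 4, "technical infrastructure": 2,
             "organisational culture": 3},
    evidence={"legal authority": "statutory mandate covers AI systems"},
)
```

Pairing each numeric rating with an evidence string is the piece worth copying: a score with no documented evidence behind it is itself flagged as a gap, mirroring the report’s mix of quantitative ratings and qualitative evidence.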