AI Governance Library

AIGL Newsletter #14: Xmas Special

This issue highlights the newest and best in AI governance — the ideas shaping oversight, risk, and policy today. Future editions will explore new lenses: practical tools, policy guides, and training resources. Each one will bring a fresh perspective — and your voice is welcome.

✒️ Foreword

As the year comes to a close, this edition is less about looking ahead and more about looking back—with a sense of appreciation. The last twelve months moved fast for everyone working in and around AI governance, and this issue is our way of pausing to acknowledge what helped us keep up: the resources that shaped how we think, work, and build.

This was a year when AI governance clearly came into its own. 77% of organizations now report actively working on AI governance. But perhaps most telling, 30% of organizations not yet using AI are already investing in governance first—choosing structure and foresight over speed.

Along the way, the community itself matured. New roles appeared, existing teams stretched across disciplines, and conversations moved from theory to practice. AI governance stopped living on the margins of privacy or compliance and began standing on its own—still connected, but clearly distinct.

Before I wrap this up, a small personal note.

What started as a tiny side project — a place to save a few interesting AI governance links — somehow grew into the AI Governance Library. Today it’s a repository of 220+ curated resources, supported by 14 issues of this newsletter and read by 1,750 engaged subscribers. That still feels slightly unreal when I write it down.

I’m genuinely grateful to everyone who reads, shares, replies, recommends resources, and challenges my thinking. This project exists because of curiosity, conversations, and a community that cares about doing AI well, not just fast. Thank you for being part of it — whether you joined in issue one or five minutes ago.

As the year comes to a close, I wish you a calm end-of-year slowdown, good conversations, and just enough distance from notifications. May your holiday break be low-risk, well-governed, and fully human.

— Kuba
Curator, AIGL 📚

PS. This issue is fully open and accessible to everyone — both paid and free subscribers. Let it be my small holiday gift: the same full value, insights, and resources for the entire community, shared equally. Because knowledge grows best when it’s open. Merry Christmas — ho ho ho! 🎅

☀️ Spotlight Resources

The Law, Ethics and Policy of Artificial Intelligence

What it is: An open-access academic handbook edited by Nathalie A. Smuha, published by Cambridge University Press, bringing together multidisciplinary perspectives on the legal, ethical, and policy implications of artificial intelligence, with a strong focus on Europe.

Why it’s worth reading: The handbook doesn’t treat AI as a purely technical issue. Instead, it examines how AI interacts with fundamental rights, democracy, and the rule of law, combining legal analysis with ethical and philosophical inquiry. It explores recurring governance challenges such as liability, accountability, transparency, and risk-based regulation, including detailed discussion of European policy instruments like the AI Act. A recurring theme is the difficulty of translating high-level ethical principles into concrete practices, a gap the book openly acknowledges rather than glosses over. The structure—moving from ethics and philosophy to law and sector-specific impacts—helps readers understand AI governance as a system, not a checklist.

Best for: Legal professionals, policymakers, compliance leaders, and researchers who want a grounded, European-oriented overview of AI governance that connects ethics, law, and real regulatory choices, rather than focusing only on technical design or abstract principles.

AI Policy Template (June 2024)

What it is: A June 2024 AI Policy Template developed by the Responsible AI Institute to help organizations build a foundational, organization-wide AI policy aligned with ISO/IEC 42001 and the NIST AI Risk Management Framework.  

Why it’s worth reading: This document goes far beyond high-level principles. It lays out a detailed, end-to-end policy structure covering governance, data management, risk management, procurement, workforce readiness, and regulatory compliance across the full AI lifecycle. The template explicitly operationalizes concepts like AI Impact Assessments, risk tolerance thresholds, governance gates, and system documentation, making it especially useful for organizations moving from ad-hoc AI use to a formal AI management system. It also clearly signals where customization is required, repeatedly emphasizing that the template must be adapted to organizational context, risk appetite, and applicable laws. Notably, the policy is positioned as a core requirement under ISO/IEC 42001, reinforcing its relevance for organizations preparing for audits or certifications.
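
To make "governance gates" and "risk tolerance thresholds" a little more tangible, here is a minimal sketch of what such a gate could look like if you wired it into a deployment workflow. The template itself is a policy document, not code; the class, threshold values, and lifecycle labels below are illustrative assumptions, not taken from the RAI Institute material.

```python
from dataclasses import dataclass

# Hypothetical risk scale and gate logic -- illustrative only,
# not prescribed by the RAI Institute template.
RISK_LEVELS = {"low": 1, "medium": 2, "high": 3, "critical": 4}

@dataclass
class AISystem:
    name: str
    lifecycle_stage: str        # e.g. "development", "pre-deployment"
    risk_level: str             # one of RISK_LEVELS
    impact_assessment_done: bool

def passes_governance_gate(system: AISystem, risk_tolerance: str = "medium") -> bool:
    """A pre-deployment gate: systems within the organization's risk
    tolerance pass; higher-risk systems need a completed AI Impact
    Assessment on file before they may proceed."""
    if RISK_LEVELS[system.risk_level] <= RISK_LEVELS[risk_tolerance]:
        return True
    return system.impact_assessment_done

chatbot = AISystem("support-chatbot", "pre-deployment", "high", False)
print(passes_governance_gate(chatbot))  # False: blocked at the gate
```

In a real AI management system, the tolerance and the approval record would come from the documented policy and named owners rather than hard-coded values, which is exactly the customization the template keeps flagging.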

Best for: Compliance leaders, AI governance professionals, legal teams, and organizations designing or formalizing a Responsible AI or ISO/IEC 42001-aligned AI management system.

AI Auditing: Checklist for AI Auditing

What it is: A 2023 methodological checklist by Dr. Gemma Galdon Clavell, prepared under the EDPB Support Pool of Experts, outlining how to conduct end-to-end audits of AI systems with a focus on impacts and bias.  

Why it’s worth reading: This document doesn’t treat AI auditing as a narrow technical exercise. Instead, it proposes an end-to-end, socio-technical audit approach that examines AI systems across their full lifecycle: pre-processing, in-processing, and post-processing. The checklist is highly structured, covering model cards, system mapping, data governance, bias identification, verification and validation, and audit reporting. A particularly useful contribution is the detailed framework for identifying moments and sources of bias, going well beyond dataset imbalance to include deployment, automation, and organizational bias. The checklist also explicitly ties audit questions to GDPR provisions, making it easier to translate governance principles into operational checks.
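
For a flavor of what a post-processing check in such an audit might involve, the sketch below computes a standard demographic parity difference between two groups' selection rates. The checklist itself is methodological rather than code-based; the toy data and the 0.1 tolerance here are invented for illustration, not values from the EDPB document.

```python
# Demographic parity difference: |P(positive | group A) - P(positive | group B)|.
# A common fairness metric; the 0.1 tolerance is an illustrative assumption.

def selection_rate(outcomes: list[int]) -> float:
    """Share of positive (1) decisions in a list of model outcomes."""
    return sum(outcomes) / len(outcomes)

group_a = [1, 1, 0, 1, 0, 1, 1, 0]  # toy model decisions for group A
group_b = [1, 0, 0, 0, 1, 0, 0, 0]  # toy model decisions for group B

parity_gap = abs(selection_rate(group_a) - selection_rate(group_b))
print(f"Demographic parity difference: {parity_gap:.2f}")
if parity_gap > 0.1:
    print("Gap exceeds tolerance -- flag for the post-processing audit stage.")
```

A metric like this covers only one of the checklist's bias sources; deployment, automation, and organizational bias still require the qualitative questions the document lays out.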

Best for: AI governance leads, data protection professionals, auditors, and regulators who need a concrete, compliance-oriented framework for assessing AI systems in real-world contexts.

European Union Artificial Intelligence Act: a guide

What it is: A comprehensive 76-page guide published by Bird & Bird on 7 April 2025, explaining the structure, scope, and obligations of the EU Artificial Intelligence Act, including timelines for implementation.

Why it’s worth reading: The guide walks readers through the AI Act’s risk-based framework, clearly distinguishing between prohibited practices, high-risk AI systems, general-purpose AI models, and systems subject to transparency duties. It pays particular attention to who must comply across the AI value chain (providers, deployers, importers, distributors) and how obligations differ for each role. The document also unpacks practical issues such as regulatory sandboxes, governance and enforcement mechanisms, penalties, and the interaction between the AI Act and future measures like delegated acts, standards, and Commission guidelines. Rather than speculating, it consistently flags where further guidance is expected from the Commission, helping readers understand what is settled law and what is still evolving.

Best for: Legal, compliance, and policy professionals; in-house counsel; and AI governance leads who need a structured, reliable reference to understand how the EU AI Act applies in practice and what preparations are realistically required before most obligations apply from 2026.

AI Governance Readiness Checklist

What it is: A short, practitioner-focused AI Governance Readiness Checklist published by Cognitive View, outlining core steps organizations should take to understand, govern, and scale AI responsibly.  

Why it’s worth reading: This checklist walks through eight concrete areas of AI governance, starting with AI discovery (including “shadow AI” like informal use of ChatGPT or Copilot) and moving through data governance, model development, risk management, regulatory alignment, and ethics. Rather than theory, it emphasizes operational actions: inventorying AI systems, documenting data sources and models, running DPIAs or AIAs, setting risk thresholds, and maintaining audit-ready documentation. The document also explicitly references alignment with frameworks such as NIST AI RMF, ISO standards, OECD principles, and upcoming regulation like the EU AI Act. While brief, it provides a structured lens for quickly assessing governance maturity and identifying gaps that matter for compliance, security, and trust.
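
For readers wondering what "inventorying AI systems" looks like at its simplest, here is a minimal sketch of an audit-ready inventory record. The field names are assumptions inspired by the checklist's themes (discovery, data sources, assessments, documentation), not a schema published by Cognitive View.

```python
import json
from datetime import date

# A hypothetical inventory entry covering the checklist's recurring themes:
# discovery (including shadow AI), data lineage, assessments, and risk.
inventory_entry = {
    "system": "contract-review-copilot",
    "discovered_via": "shadow AI survey",   # informal use surfaced by discovery
    "owner": "legal-ops",
    "data_sources": ["contract archive", "public case law"],
    "assessments": {"DPIA": True, "AIA": False},
    "risk_threshold": "medium",
    "last_reviewed": date.today().isoformat(),
}

# Serializing entries keeps the documentation audit-ready and easy to diff.
print(json.dumps(inventory_entry, indent=2))
```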

Best for: Legal, compliance, data, and AI leaders who need a fast, structured way to baseline AI governance readiness before deeper audits or regulatory preparation.

AI Impact Assessment: The tool for a responsible AI project

What it is: A December 2024 (v2.0) guidance document developed by the Dutch Ministry of Infrastructure and Water Management and partner agencies, introducing the AI Impact Assessment (AIIA) as a structured tool to assess whether and how an AI system should be deployed responsibly.  

Why it’s worth reading: The document offers a concrete, step-by-step framework covering the full AI lifecycle — from defining system purpose and proportionality, through impact on fundamental rights and sustainability, to technical robustness, data governance, risk management, and accountability. It explicitly aligns the assessment with the EU AI Act’s risk-based approach, including detailed appendices on high-risk systems and generative AI. Notably, version 2.0 expands guidance on generative AI, highlighting limits around explainability, reproducibility, energy consumption, and legal compliance. Rather than abstract ethics, the AIIA translates responsibility into operational questions teams must answer before putting AI into production.
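
One way to picture those operational questions is as a pre-production completeness check: no open assessment areas, no deployment. The area names below paraphrase the AIIA's themes, and the structure is an illustrative assumption rather than anything the Dutch guidance specifies as code.

```python
# Hypothetical AIIA-style gate: every assessment area needs an answer
# before the system goes to production. Areas paraphrase the AIIA's themes.
aiia_answers = {
    "purpose and proportionality": "Documented in project charter",
    "fundamental rights impact": "No high-risk impact identified",
    "technical robustness": None,           # not yet assessed
    "data governance": "Sources and retention documented",
    "risk management and accountability": None,
}

open_items = [area for area, answer in aiia_answers.items() if answer is None]
if open_items:
    print("Not ready for production. Open AIIA areas:")
    for area in open_items:
        print(f" - {area}")
```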

Best for: Public sector bodies, compliance teams, AI governance leads, and project owners who need a practical, auditable way to document AI risk, justify design choices, and demonstrate alignment with the EU AI Act and fundamental rights obligations.

EU AI Act Handbook (May 2025)

What it is: A comprehensive handbook by White & Case (May 2025) that provides pragmatic guidance to help businesses navigate the EU AI Act’s complex, evolving requirements.

Why it’s worth reading: It emphasizes practical compliance over legal theory, offering actionable guidance on ambiguous areas of the law. The authors tackle vague definitions and uncertain obligations head-on, reasoning by analogy from other EU regulations to suggest likely interpretations where the AI Act is unclear. If you’re struggling with questions like what counts as an “AI system” or how to handle general-purpose AI models under the Act, this handbook delivers clarity grounded in real-world experience.

Best for: Busy legal counsel, compliance officers, and business leaders who need a clear, no-nonsense map to prepare for AI Act compliance.

The ROI of AI Ethics: Profiting with Principles for the Future

What it is: A position paper by The Digital Economist (2025) examining how ethical AI practices can generate measurable business value alongside risk mitigation and compliance.  

Why it’s worth reading: This paper makes a clear, business-oriented case that AI ethics is not just about avoiding harm, but about improving ROI. It introduces the concept of “Ethical AI ROI,” which expands traditional financial metrics to include trust, brand value, talent retention, and long-term strategic flexibility. Using concrete comparisons between traditional ROI, AI ROI, GRC ROI, and ethical AI ROI (see tables in Sections 2–3), the authors show where conventional models fall short. Real-world case studies—including Mastercard’s AI governance approach and failures like IBM Watson for Oncology—illustrate both upside and downside. The paper also proposes an “Ethics Return Engine” as a future tool for quantifying ethical investments, while openly acknowledging current measurement gaps.
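
The arithmetic behind the paper's core move is simple: keep the classic ROI formula, (benefit − cost) / cost, and widen the benefit term with monetized estimates of intangibles. The sketch below shows the shape of that comparison; all figures and the breakdown are invented for illustration and are not the paper's numbers or its Ethics Return Engine.

```python
# Traditional ROI: (benefit - cost) / cost.
# "Ethical AI ROI" widens the benefit term with factors like trust and
# talent retention. All figures below are invented for illustration.

cost = 500_000                   # spend on an AI governance program
direct_benefit = 650_000         # e.g. avoided fines, efficiency gains
intangible_benefit = 200_000     # monetized estimate: trust, retention

traditional_roi = (direct_benefit - cost) / cost
ethical_roi = (direct_benefit + intangible_benefit - cost) / cost

print(f"Traditional ROI: {traditional_roi:.0%}")  # 30%
print(f"Ethical AI ROI:  {ethical_roi:.0%}")      # 70%
```

The hard part, which the paper concedes, is the middle line: defensibly monetizing trust and retention is exactly the measurement gap the proposed Ethics Return Engine is meant to close.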

Best for: Executives, AI governance leads, risk and compliance professionals, and policymakers who need to justify AI ethics investments in financial and strategic terms, not just values language.

AI Security Risk Assessment

What it is: A Microsoft-authored whitepaper outlining a practical approach to assessing and managing security risks across the full AI system lifecycle, designed to align with existing frameworks like ISO 27001 and NIST.  

Why it’s worth reading: This document tackles a real gap many organizations face: knowing AI systems need protection, but lacking concrete guidance on how to do it. Based on a Microsoft survey of 28 organizations, it notes that most respondents lacked adequate tools to secure ML systems and were actively seeking direction. The paper walks through AI-specific risks step by step, from data collection and processing to model training, deployment, monitoring, and incident response. Instead of proposing a brand-new standard, it maps AI security controls onto familiar risk assessment practices, making it easier to integrate into existing security programs. The inclusion of severity, likelihood, and impact guidance helps teams prioritize controls based on real business risk, not theory.
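
That severity-and-likelihood guidance maps naturally onto a classic risk matrix. Below is a minimal sketch of the prioritization logic; the 1-to-5 scales and the example threats are generic risk-assessment conventions assumed for illustration, not values from the Microsoft whitepaper.

```python
# Classic risk scoring: risk = severity x likelihood, then rank controls.
# Scales and example threats are illustrative assumptions.

threats = [
    {"name": "training data poisoning", "severity": 5, "likelihood": 2},
    {"name": "model extraction via API", "severity": 3, "likelihood": 4},
    {"name": "stale monitoring alerts", "severity": 2, "likelihood": 5},
]

for threat in threats:
    threat["risk"] = threat["severity"] * threat["likelihood"]

# Highest-risk items first, so controls target real business risk.
for threat in sorted(threats, key=lambda t: t["risk"], reverse=True):
    print(f'{threat["risk"]:>2}  {threat["name"]}')
```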

Best for: Security leaders, AI governance professionals, and ML teams who need a structured, practical way to assess AI security posture without starting from scratch or abandoning established security frameworks.

AI Governance Framework for India 2025-26

What it is: A September 2025 whitepaper by the National Cyber and AI Center (NCAIC) presenting a comprehensive, risk-based AI governance framework tailored to India’s legal, regulatory, and societal context.  

Why it’s worth reading: The document goes beyond high-level principles and offers an end-to-end governance model covering the full AI lifecycle — from system inventory and risk classification to deployment, monitoring, and decommissioning. It aligns explicitly with India’s Digital Personal Data Protection Act (DPDP Act), CERT-In Directions, and the IndiaAI Mission, while mapping controls to international standards like ISO/IEC 42001 and NIST AI RMF. Particularly useful are the concrete risk taxonomies (including prohibited and high-risk use cases), the emphasis on population-scale impact, and the detailed implementation roadmaps (100-day, 12-month, and 24-month plans). The framework also addresses uniquely Indian challenges, such as multilingual fairness testing, election integrity, and public sector accountability.

Best for: Policymakers, regulators, public sector leaders, and enterprise risk, legal, and AI governance teams operating in or with India who need a practical, standards-aligned blueprint for responsible AI deployment.

About the author
Jakub Szarmach
AI Governance Library: Curated Library of AI Governance Resources
