AI Governance Library

Curated Library of AI Governance Resources
20 Cognitive Biases everyone should know

A compact, practical guide to understanding the most influential cognitive biases in everyday thinking, decision-making, and AI design—plus a bonus chapter on algorithmic bias. A must-read for anyone working at the intersection of technology, governance, and ethics.

Making AI Self-Regulation Work

Amlan Mohanty’s report, Making AI Self-Regulation Work, offers a comprehensive framework for making self-regulation a foundational piece of India’s approach to AI governance.

Open Source Technology in the Age of AI

Open source AI is no longer fringe—it’s becoming essential. Based on a global survey of over 700 tech leaders, this report shows how open source AI is reshaping tech stacks, boosting developer satisfaction, and challenging proprietary dominance.

Multi-Agentic System Threat Modelling Guide

A highly structured guide to applying OWASP’s Agentic AI Threat Taxonomy to real-world multi-agent systems (MAS). It introduces the MAESTRO framework to surface layered vulnerabilities and new attack paths unique to agent-to-agent coordination environments.

AI Risk Assessment Template (TrustArc, 2025)

The AI Risk Assessment Template provides a structured, highly practical checklist for evaluating AI system risks across development, deployment, and operation phases. It aligns with NIST AI RMF and EU AI Act requirements, aiming to boost trustworthy AI practices.

AI Agents Governance – A Field Guide

This guide explores how to govern autonomous AI agents—systems capable of planning and acting with minimal instruction. It presents a structured approach to agent risks and interventions, pushing the conversation beyond foundation models toward emergent systems.

AI Ethics and Governance in Practice

This workbook is a facilitator’s guide to delivering AI ethics training across public institutions. It covers AI fundamentals, public sector use cases, and governance models—paired with activities grounded in UK government experience and policy frameworks.

Understanding Responsibilities in AI Practices

This guidance from New South Wales outlines role-specific responsibilities for implementing responsible AI. It supports public agencies in assigning accountability using ISO-aligned frameworks and practical RACI structures. A useful anchor for everyday governance.

A blueprint for modern digital government

This UK government strategy outlines a six-point digital reform plan focused on service redesign, AI integration, shared infrastructure, and leadership reform. It introduces a new digital centre of government and pushes for transparency, efficiency, and public trust.

European Union Artificial Intelligence Act: a guide

The Bird & Bird guide to the EU AI Act offers a deep dive into the Act’s legal obligations, scope, governance model, and technical standards. It walks readers through implementation timelines, roles across the AI value chain, and penalties for non-compliance.

AI Privacy Risks & Mitigations – Large Language Models

This report, produced under the EDPB’s Support Pool of Experts (SPE) programme, offers structured guidance on managing privacy risks in LLM systems. It lays out risk identification, evaluation, and control strategies tailored to GDPR and AI Act obligations, supporting both developers and deployers.

Welcome to the AIGL

This is issue #1 of the AIGL newsletter. In the first 24 hours, 382 of you subscribed. In this issue:

• A personal note on why this project exists
• A review of MIT’s AI Risk Framework, one of the cleanest tools we’ve seen so far
• Two sharp links from inside China’s evolving AI landscape

Choosing the Right Controls for AI Risks

A visual guide and explanatory article by James Kavanagh, published via The Company Ethos (April 2025), that maps major AI risks—like bias, hallucinations, and adversarial attacks—to practical prevention, detection, and response controls across design-time and run-time phases.

MIT AI Risk Analysis Framework (AI-RAF)

A tool built for clarity, not complexity. The AI Risk Analysis Framework from MIT offers a structured, policy-relevant approach to thinking about AI risks. It’s designed for teams who need to assess potential harms without getting buried in technical noise.
