AI Governance Controls Mega-map (Feb 2025)
This is one of the best open-source resources for operationalizing multi-standard AI governance. Kavanagh’s framework deserves a place on every AI compliance lead’s desk.
A tool built for clarity, not complexity. The AI Risk Analysis Framework from MIT offers a structured, policy-relevant approach to thinking about AI risks. It’s designed for teams who need to assess potential harms without getting buried in technical noise.
A practical tool to support procurement teams, legal advisors, CISOs, and AI governance leads in evaluating vendors supplying AI-based systems or services.
What does “trustworthy AI” actually mean in practice? That’s the question this report tackles, offering one of the most detailed attempts so far to break the concept down into 150 actionable properties spanning the AI lifecycle. The result is more than a flat checklist: the properties are organized by lifecycle stage so teams can act on them.
Who is liable when an AI system causes harm? This report breaks down why that question isn’t easy to answer when AI is built and deployed by many hands, and proposes a framework for assigning civil liability across the AI value chain.
Even when pretraining data is clean, large language models can still absorb—and amplify—political bias. This study maps how that happens and how it affects hate speech and misinformation detection.
The report lays out a three-pillar governance framework to help regulators handle generative AI: build on existing rules, shape inclusive and cross-sector practices now, and prepare for future disruptions through foresight, agility, and international cooperation.
This paper introduces the Legal-XAI taxonomy—an interdisciplinary framework to guide policymakers, lawyers, and developers in choosing the right kind of explanations for AI decisions.
NIST AI 100-4 is a 2024 report that outlines technical methods for improving transparency and reducing risks from synthetic content such as AI-generated images, videos, and text.
HUDERIA is a methodology adopted by the Council of Europe in November 2024 for assessing AI systems from the perspective of human rights, democracy, and the rule of law.
This JRC report outlines the competences and governance practices public organizations need to adopt AI effectively.
This governance framework helps organizations manage GenAI risks across five domains—strategy, compliance, operations, ethics, and accountability.
This CIPL paper offers 14 practical recommendations for applying data protection principles to generative AI systems.
Matthew da Mota argues for turning Canada’s CAN/DGSI 128 standard into the world’s first international standard for AI use in research institutions.
This DHS framework lays out voluntary roles and responsibilities for managing AI risks across U.S. critical infrastructure.
This techUK paper offers a grounded look at how UK organisations can translate the ethical principles outlined in the 2024 UK AI White Paper into real, operational practices.
This report by the Digital Policy Alert maps how 11 of the world’s most advanced AI regulatory frameworks align (or don’t) across over 70 regulatory requirements.
The AI Act sets out legal obligations for high-risk systems—but how do you show you’ve met them? This JRC report explains what European harmonised standards will need to deliver. It’s a must-read if you’re building for compliance in 2026 or advising on AI Act conformity.
This World Bank report maps out how countries are developing AI governance strategies, highlighting tools like soft law, hard law, and regulatory sandboxes.
This report explores historical compliance breakdowns in high-risk industries—like finance, health, and aviation—to extract patterns that can inform AI governance today.
This national security AI framework defines when and how U.S. government agencies can use AI in national security systems. It prohibits dangerous applications, sets minimum risk management standards for high-impact AI use, and mandates oversight, training, and transparency.
The paper studies how generative AI reshapes individual work processes. Using GitHub Copilot’s rollout as a natural experiment, the authors find that developers with AI access shift time away from collaborative management tasks toward solo coding. This effect is strongest for lower-skill developers.
The FDA’s draft guidance outlines what manufacturers of AI-enabled medical devices should include in their premarket submissions.
This study digs into how AI ethics actually gets implemented inside tech companies. It shows how “ethics entrepreneurs” — employees trying to integrate ethical practices — face structural hurdles, lack of leadership support, and personal risks. Despite formal policies, outcomes rarely match stated commitments.
Curated Library of AI Governance Resources