The Anatomy of AI Rules: A systematic comparative analysis of AI rules across the globe
This report by the Digital Policy Alert maps how 11 of the world’s most advanced AI regulatory frameworks align (or don’t) across over 70 regulatory requirements.
The AI Act sets out legal obligations for high-risk systems—but how do you show you’ve met them? This JRC report explains what European harmonised standards will need to deliver. It’s a must-read if you’re building for compliance in 2026 or advising on AI Act conformity.
This World Bank report maps out how countries are developing AI governance strategies, highlighting tools like soft law, hard law, and regulatory sandboxes.
This report explores historical compliance breakdowns in high-risk industries—like finance, health, and aviation—to extract patterns that can inform AI governance today.
This national security AI framework defines when and how U.S. government agencies can use AI in national security systems. It prohibits dangerous applications, sets minimum risk management standards for high-impact AI use, and mandates oversight, training, and transparency.
The paper studies how generative AI reshapes individual work processes. Using GitHub Copilot’s rollout as a natural experiment, the authors find that developers with AI access shift time away from collaborative management tasks toward solo coding. This effect is strongest for lower-skill developers.
The FDA’s draft guidance outlines what manufacturers of AI-enabled medical devices should include in their premarket submissions.
This study digs into how AI ethics actually gets implemented inside tech companies. It shows how "ethics entrepreneurs," employees trying to integrate ethical practices, face structural hurdles, a lack of leadership support, and personal risks. Despite formal policies, outcomes rarely match stated commitments.
Fundamental Rights Impact Assessments (FRIAs) are required for certain high-risk AI systems under Article 27 of the AI Act. They help ensure that AI deployment doesn’t violate key EU rights.
The AIIA is a comprehensive tool for building accountability into AI projects from day one. Developed within the Dutch government, it combines ethical, legal, technical, and organizational criteria into a structured format.
This expanded guide builds on the 2024 ASEAN AI Governance and Ethics framework, zooming in on the specific risks and policy needs surrounding generative AI (Gen AI).
This guide breaks down how internal auditors should prepare for the AI Act, which came into force in August 2024.
AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking
This report by CDT’s AI Governance Lab maps out the wide array of methods used to evaluate AI systems—from impact assessments and audits to red-teaming and formal assurance.
This CSET report unpacks how China is hedging its bets on general artificial intelligence (GAI) by pursuing a mix of technical strategies—unlike the West’s heavy focus on large language models (LLMs).
Getting international data transfers right is one of the toughest parts of GDPR compliance. This practical guide from CNIL lays out how to run a Transfer Impact Assessment (TIA) without guesswork.
Regulatory sandboxes aren’t just buzzwords—they’re fast becoming one of the EU’s go-to tools for managing fast-moving AI and cybersecurity risks. This white paper brings together legal, technical, and policy perspectives to offer a grounded roadmap for building and using sandboxes the right way.
This "living repository" shows how companies are starting to take staff AI literacy seriously, with real, varied, and often creative approaches to training.
Frontier AI is powerful—but how powerful is too powerful? This Berkeley-led paper proposes a framework for defining and managing intolerable risks, pushing governments and industry to stop waiting for disaster and start drawing lines. It’s a toolkit for acting before things go wrong.
This 2025 paper by Iren, Noldus, and Brouwer offers a much-needed guide to how the EU’s AI Act and the Commission’s new guidelines apply to the emotion recognition field—one of the most contentious areas of affective computing.
This white paper from the UAE’s AI Office captures a rare, high-level dialogue on responsible AI, convened at the World Governments Summit.
This landmark report brings together 96 global experts to create the first shared scientific baseline on general-purpose AI risks and safety. It doesn’t recommend policies—it equips governments, researchers, and regulators with what’s known (and what’s not) so far.
This U.S. Copyright Office report maps out the toughest economic questions about AI and copyright, without pretending to have the answers.
Accountability starts with visibility—especially when AI is doing the work.
Curated Library of AI Governance Resources