EU AI Act Handbook (May 2025)
This handbook is designed to help organizations implement the EU AI Act, offering practical compliance checklists, simplified risk categorization tools, and implementation guides aligned with regulatory articles.
This UNDP flagship report explores how AI is reshaping core government functions — from social protection to justice — and what it takes to ensure this transformation is equitable, ethical, and effective.
An updated glossary from the IAPP defining key technical and policy terms essential to the evolving field of AI governance, designed for professionals across legal, technical, and regulatory domains.
This Microsoft white paper draws on eight safety-critical domains to develop insights for advancing the evaluation and testing ecosystem for AI, especially generative AI.
Autonomy is a double-edged sword for AI agents, unlocking transformative potential while raising critical risks. This paper introduces five levels of autonomy based on user roles: operator, collaborator, consultant, approver, and observer.
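To make the taxonomy concrete, here is a minimal Python sketch (not from the paper; the ordering, numeric values, and helper below are illustrative assumptions) encoding the five user-role levels as an enum:

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    """Five user-role autonomy levels; the role names come from the paper,
    but the ordering and numeric values here are an illustrative assumption."""
    OPERATOR = 1      # user drives every action directly
    COLLABORATOR = 2  # user and agent share control of the task
    CONSULTANT = 3    # agent leads but consults the user on decisions
    APPROVER = 4      # agent acts; user signs off before effects land
    OBSERVER = 5      # agent acts fully autonomously; user only monitors

def human_in_the_loop(level: AutonomyLevel) -> bool:
    # Illustrative policy: every level below OBSERVER keeps a human in the loop.
    return level < AutonomyLevel.OBSERVER
```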
Internal AI systems often have dual-use capabilities significantly ahead of the public frontier … which may soon significantly enhance threat actors’ ability to cause harm.
This evidence scan identifies 11 recent frameworks that connect AI safety and traditional risk management, offering insights into how these historically separate fields can inform each other.
A certification framework to assess and assure the responsible governance of AAA (Artificial Intelligence, Algorithmic, Autonomous) systems in financial institutions, aligned with Basel III, SR 11-7, and modern AI regulations.
A practical and research-backed guide offering 10 actionable plays to help product managers and business leaders use generative AI responsibly—grounded in governance, risk mitigation, and business alignment.
China’s AI safety governance is evolving from ethics-focused soft law to increasingly concrete policies on frontier AI risks, though with limited regulatory enforcement mechanisms.
This RAND report outlines policy and technical recommendations for responding to advanced AI systems that act in uncontrollable, harmful ways—so-called Loss of Control (LOC) incidents—emphasizing detection, escalation, and containment.
If a system fails even one, it's structurally out of compliance, and regulators can prove it in minutes.
An analysis of nearly 460,000 AI model cards shows developers overwhelmingly report technical risks—while real-world harms arise more from misuse and misinformation.
Despite $30–40B invested in GenAI, 95% of businesses see no ROI. The divide is not in adoption but in impact. This report maps the real reasons behind failure and what separates top performers from stalled efforts.
“88% of agentic AI early adopters are now seeing a positive ROI on gen AI …”
This roadmap sets out our ambitions for the third-party assurance market in the UK and the immediate actions that government will take to support this emerging sector.
This RAND working paper proposes a framework for identifying tasks that even Artificial General Intelligence (AGI) would be fundamentally unable to perform, grounded in constraints from physics, information theory, and computational complexity.
✒️ Foreword: While the U.S. frames AI as a tool for innovation, economic growth, and military advantage, China is building …
An interdisciplinary legal-philosophical textbook covering AI’s ethical foundations, regulatory implications, and governance challenges through technical, philosophical, and legal lenses.
A voluntary guide tailored to New Zealand businesses, showing how existing governance, legal, and operational practices can anchor AI risk management—without requiring a ground-up rebuild.
Organizations with a Chief AI Officer (CAIO) see 10% higher ROI on AI, and up to 36% more when using centralized operating models. But only one in four organizations has a CAIO today.
A detailed, scored assessment of how seven frontier AI developers approach safety—tracking real practices, not just promises, across 33 indicators in six domains.
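Purely as an illustration of what a scored, indicator-based assessment involves (this is not the report's methodology, and the domain names and values below are placeholders), a sketch of rolling indicator scores up into domain averages and a headline figure:

```python
from statistics import mean

# Hypothetical indicator scores (0-100) grouped by domain; domain names
# and all values are placeholders, not the report's actual data.
scores = {
    "risk_assessment": [70, 55, 80],
    "current_harms": [60, 65],
    "safety_frameworks": [40, 50, 45],
    "existential_safety": [20, 30],
    "governance_accountability": [75, 70],
    "information_sharing": [65, 60],
}

def domain_scores(indicators: dict[str, list[float]]) -> dict[str, float]:
    """Average each domain's indicators (equal weighting assumed)."""
    return {domain: mean(vals) for domain, vals in indicators.items()}

def overall(indicators: dict[str, list[float]]) -> float:
    """Average the domain scores into a single headline figure."""
    return mean(domain_scores(indicators).values())

print(domain_scores(scores))
print(f"Overall: {overall(scores):.1f}")
```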
Security-by-default for AI: practical, lifecycle-wide guidance to help providers build and operate AI systems that resist misuse, protect data, and remain reliable—even under attack.
The AGILE Index ranks countries on their AI governance readiness—not by intent, but by measurable capability, institutional maturity, and actual regulatory power.
Curated Library of AI Governance Resources