
AI Governance International Evaluation Index (AGILE Index) – 2025 Edition

The AGILE Index ranks countries on their AI governance readiness—not by intent, but by measurable capability, institutional maturity, and actual regulatory power.

⚡ Quick Summary

The AGILE Index 2025 is the most detailed global benchmarking tool for AI governance to date. Created by GPAI and academic partners, it scores 57 countries across five pillars: Legal Infrastructure, Technical Capability, Institutional Capacity, International Engagement, and Ethical Alignment. Unlike earlier “readiness” indices, AGILE focuses not just on strategies or ambitions but on tangible mechanisms—regulators, standards adoption, oversight bodies, and implementation track records. It also includes a novel “dynamic governance” component, evaluating how quickly states adapt to emerging AI risks like generative models or agentic systems. The result: a comparative snapshot of global AI governance systems, grounded in what governments actually do, not just what they say.

🧩 What’s Covered

The Index evaluates national AI governance along five weighted pillars:

  1. Legal Infrastructure (30%)
    • Laws specific to AI, data protection, algorithmic accountability
    • Enforcement mechanisms and liability structures
    • Risk-based frameworks, including alignment with the EU AI Act or similar legislation
    • Existence and clarity of AI-specific red lines (e.g., biometric surveillance bans)
  2. Technical Capability (20%)
    • Standard-setting participation (ISO, IEEE)
    • Domestic AI evaluation/testing labs
    • AI red-teaming programs and secure sandboxes
    • Funding for technical safety research
  3. Institutional Capacity (20%)
    • Dedicated AI regulators or cross-sectoral authorities
    • Independence, funding, and staff AI literacy
    • Existence of audit and certification schemes
    • Coordination between ministries, regulators, and standards bodies
  4. International Engagement (15%)
    • Participation in GPAI, OECD, Council of Europe CAHAI, G7 Hiroshima Process
    • Treaty commitments (e.g. UNESCO AI Ethics)
    • Contribution to cross-border model governance or incident reporting
  5. Ethical Alignment (15%)
    • Integration of rights-based frameworks into policy
    • Civil society involvement in AI oversight
    • Alignment with global fairness and non-discrimination benchmarks
    • Transparency around public-sector AI use

Scoring & Ranking

  • Countries are ranked into four tiers: Leading, Progressing, Developing, and At Risk (a minimal scoring sketch follows this list)
  • Top performers in 2025: Canada, Germany, South Korea, UK, France
  • The index includes regional deep-dives for Latin America, Sub-Saharan Africa, and ASEAN
  • Each country profile highlights gaps, priority actions, and case examples
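
The report doesn't publish its exact aggregation method, but the pillar weights above imply a straightforward weighted composite. The sketch below illustrates one way the aggregation and tier assignment could work; the 0–100 scale, tier cutoffs, and example scores are assumptions for illustration, not figures from the Index.

```python
# Illustrative only: the Index report does not publish its exact aggregation
# formula. This sketch assumes 0-100 pillar scores, the pillar weights listed
# above, and hypothetical tier cutoffs.

PILLAR_WEIGHTS = {
    "legal_infrastructure": 0.30,
    "technical_capability": 0.20,
    "institutional_capacity": 0.20,
    "international_engagement": 0.15,
    "ethical_alignment": 0.15,
}

def composite_score(pillar_scores: dict[str, float]) -> float:
    """Weighted sum of pillar scores, each on a 0-100 scale."""
    return sum(weight * pillar_scores[pillar] for pillar, weight in PILLAR_WEIGHTS.items())

def tier(score: float) -> str:
    """Bucket a composite score into the four AGILE tiers (cutoffs are assumed)."""
    if score >= 75:
        return "Leading"
    if score >= 55:
        return "Progressing"
    if score >= 35:
        return "Developing"
    return "At Risk"

# Hypothetical country: strong legal framework, weaker technical capability.
example = {
    "legal_infrastructure": 80,
    "technical_capability": 60,
    "institutional_capacity": 70,
    "international_engagement": 75,
    "ethical_alignment": 65,
}
score = composite_score(example)
print(f"{score:.1f} -> {tier(score)}")  # 71.0 -> Progressing
```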

Innovative Feature:

  • A “Governance Responsiveness Score” tracks how quickly and effectively a country reacts to new AI risks (e.g. regulating foundation models, agent autonomy, compute access)
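
The report describes the Responsiveness Score conceptually rather than as a formula. Purely as a hypothetical reading, it could be modeled as a time-to-response measure across tracked risk events, as sketched below; the event list, decay window, and scaling are assumptions, not the report's method.

```python
# Purely hypothetical sketch: the report describes the Governance Responsiveness
# Score conceptually but does not publish a formula. Here, faster regulatory
# responses to tracked risk events earn higher scores; unaddressed risks score 0.
from dataclasses import dataclass
from typing import Optional

@dataclass
class RiskEvent:
    name: str
    months_to_response: Optional[float]  # None = no response yet

def responsiveness_score(events: list[RiskEvent], max_months: float = 24.0) -> float:
    """Average per-event score on a 0-100 scale: 100 for an immediate response,
    decaying linearly to 0 at `max_months` or when the risk is unaddressed."""
    per_event = []
    for event in events:
        if event.months_to_response is None:
            per_event.append(0.0)
        else:
            per_event.append(max(0.0, 1.0 - event.months_to_response / max_months))
    return 100 * sum(per_event) / len(per_event)

events = [
    RiskEvent("foundation model transparency rules", months_to_response=6),
    RiskEvent("agent autonomy guidance", months_to_response=18),
    RiskEvent("compute access reporting", months_to_response=None),
]
print(f"{responsiveness_score(events):.1f}")  # 33.3
```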

💡 Why it matters

Governments are under pressure to regulate AI, but most indices measure policy papers rather than enforcement or institutional reality. The AGILE Index shifts the focus to functional governance: it's the first index to compare states directly on their capacity to shape AI outcomes. For funders, civil society, and multilateral organizations, it flags where to invest, partner, or apply pressure. For national regulators, it provides a mirror: what's working, what's lagging, and where alignment with global frameworks stands.

❓ What’s Missing

  • Non-State Actors: No scoring of corporate or academic ecosystem governance roles
  • Downstream Impact Measures: Lacks indicators tied to societal outcomes or harm reduction
  • Intra-Country Variation: National-level scoring masks regional disparities in federal systems

👥 Best For

  • Policy teams benchmarking national governance
  • Donors & multilaterals allocating capacity-building funds
  • Researchers tracking global regulatory convergence
  • Advocacy groups targeting priority countries for AI accountability
  • Ministries designing next-gen governance institutions

📄 Source Details

Title: AI Governance International Evaluation Index (AGILE Index), 2025 Edition

Authors: GPAI Responsible AI Working Group, Technical University of Munich, Oxford Internet Institute

Date: July 2025

Length: 68 pages

Format: Index report with country scorecards, regional snapshots, and pillar analysis

Website: gpai.ai

📝 Thanks to

The GPAI Responsible AI WG, TUM, and the Oxford Internet Institute for delivering the most credible, evidence-based, and forward-looking AI governance benchmarking resource yet. Special thanks to the in-country policy experts and peer reviewers who anchored the scores in lived policy reality.

About the author
Jakub Szarmach
