AI Governance Library

The AI Regulatory Capability Framework and Self-Assessment Tool

“An ability of individuals, organisations, and systems to perform AI regulatory functions effectively, efficiently, and sustainably across the lifecycle, to achieve defined strategic and policy objectives for AI regulation.”
The AI Regulatory Capability Framework and Self-Assessment Tool

⚡ Quick Summary

This document, developed by The Alan Turing Institute in collaboration with UK government stakeholders, introduces a structured framework to assess and build regulatory capability for AI. It is designed specifically for a decentralised, regulator-led model of AI governance. The framework translates abstract ideas of “capacity” into concrete regulatory activities, capability factors, and benchmarks for good practice across the full regulatory lifecycle.

At its core, the resource helps regulators answer a hard but practical question: are they actually equipped to regulate AI in a way that is effective, proportionate, and future-proof? By combining a conceptual framework with a detailed self-assessment tool, it enables regulators to identify capability gaps, justify funding or policy interventions, and coordinate more effectively across the regulatory system. This makes it one of the most operationally useful governance resources currently available for public authorities facing rapid AI adoption.

🧩 What’s Covered

The document is divided into two main parts: conceptual foundations and practical resources. Together, they create a full-stack approach to AI regulatory capability.

First, it defines what “AI regulatory capability” means. Drawing on UNDP capacity theory, the framework treats capability as more than skills or budgets. It includes legal mandates, institutional autonomy, leadership culture, technical infrastructure, research capacity, and system-level coordination. Capability is analysed at three levels: individual, organisational, and system-wide.

Second, the framework maps AI regulation across six stages of the regulatory lifecycle: agenda and objective setting; formulating rules, norms, and guidance; regulatory engagement and uptake; information gathering and compliance monitoring; responding to non-compliance; and evaluating and updating policy. Across these stages, it identifies 28 discrete regulatory activities that together describe what regulating AI actually involves in practice.

Third, it introduces six capability factors that determine whether regulators can perform those activities effectively: legal and administrative foundations; financial resources; infrastructure, tools, and technology; research, development, and intelligence; experience, skills, and expertise; and leadership, culture, and communication. These factors are translated into 17 concrete capability statements that define what “good practice” looks like.

Finally, the self-assessment tool operationalises the framework. Regulators can choose between three levels of assessment: a high-level summary, a lifecycle-stage assessment, or a deep dive into specific regulatory activities. Each uses a consistent five-point readiness scale and combines quantitative scoring with qualitative analysis of gaps, risks, prior interventions, and support needs. The result is a practical instrument for internal planning, cross-regulatory coordination, and engagement with government sponsors.
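To make the scoring mechanics concrete, here is a minimal sketch of how a regulator might record a lifecycle-stage assessment under the framework. Only the six capability factors, the lifecycle stages, and the five-point readiness scale come from the document itself; the class names, the unweighted mean, and the example entries are illustrative assumptions, not the tool's actual implementation.

```python
from dataclasses import dataclass, field
from statistics import mean

# The six capability factors named in the framework.
FACTORS = [
    "Legal and administrative foundations",
    "Financial resources",
    "Infrastructure, tools, and technology",
    "Research, development, and intelligence",
    "Experience, skills, and expertise",
    "Leadership, culture, and communication",
]

@dataclass
class FactorAssessment:
    factor: str
    score: int      # five-point readiness scale (1 to 5)
    notes: str = "" # qualitative side: gaps, risks, prior interventions, support needs

    def __post_init__(self):
        if not 1 <= self.score <= 5:
            raise ValueError("score must be on the 1-5 readiness scale")

@dataclass
class StageAssessment:
    stage: str  # one of the six lifecycle stages
    assessments: list[FactorAssessment] = field(default_factory=list)

    def summary_score(self) -> float:
        # Unweighted mean across assessed factors; a real tool might
        # weight factors or report them separately rather than average.
        return mean(a.score for a in self.assessments)

# Hypothetical worked example for one lifecycle stage.
stage = StageAssessment("Information gathering and compliance monitoring")
stage.assessments.append(
    FactorAssessment(FACTORS[2], 2, "No tooling for inspecting deployed models")
)
stage.assessments.append(FactorAssessment(FACTORS[4], 4))
print(stage.summary_score())
```

The point of the pairing is the one the framework itself makes: the number alone is a blunt summary, and the qualitative notes field is what turns a score into a plan.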

💡 Why It Matters

This framework directly addresses one of the most under-discussed problems in AI governance: regulators are expected to manage AI risks without a shared, realistic understanding of what capability is required to do so. By grounding AI governance in day-to-day regulatory work, the framework bridges the gap between high-level AI principles and operational enforcement reality.

It is particularly valuable in decentralised governance models, where coordination failures, uneven resourcing, and unclear mandates can undermine trust and effectiveness. The framework gives regulators a common language to discuss readiness, justify investment, and align expectations with policymakers. In practice, it can function as both a diagnostic tool and a strategic roadmap for becoming “AI-fit” as institutions, not just as policy authors.

❓ What’s Missing

The framework is intentionally non-prescriptive, which is a strength, but it also means it stops short of offering concrete benchmarks for specific AI risk categories such as foundation models or agentic systems. It also does not directly integrate external legal regimes like the EU AI Act, ISO/IEC 42001, or sector-specific safety standards, leaving that mapping to the user.

Additionally, while system-level dependencies are clearly acknowledged, the framework has limited guidance on resolving capability gaps that regulators cannot fix themselves, such as statutory limitations or fragmented oversight mandates. Users must translate assessment results into political or legislative action largely on their own.

👥 Best For

This resource is best suited for public regulators, supervisory authorities, and government departments responsible for AI oversight. It is particularly useful for senior leadership teams, AI policy units, enforcement functions, and cross-regulatory coordination bodies. It can also support treasury, audit, and sponsor departments assessing whether regulators are realistically equipped to meet AI governance expectations.

📄 Source Details

The AI Regulatory Capability Framework and Self-Assessment Tool, The Alan Turing Institute, 2025.

📝 Thanks to

Christopher Thomas and Richard Beddard, and the broader team at The Alan Turing Institute, with contributions from UK regulators and the Department for Science, Innovation and Technology under the Expert Exchange Programme.

About the author
Jakub Szarmach

