AI Governance Library

AI Maturity Framework – A self-positioning guide for public administrations


⚡ Quick Summary

This UNESCO-developed AI Maturity Framework is a structured self-assessment tool designed primarily for public administrations to evaluate their readiness for AI adoption. It introduces a six-pillar model covering strategy, people, technology, operations, governance, and data, each broken into granular capability areas with four maturity levels (Basic to Advanced). The framework is not prescriptive—it doesn’t define a “target state”—but instead enables organizations to understand their current position, identify gaps, and define their own roadmap. Its strength lies in combining governance, technical, and organizational dimensions into a single diagnostic model, making it useful for aligning stakeholders across functions.

🧩 What’s Covered

The framework is built around six core pillars that together define AI maturity across an organization: Strategy & Value; People & Culture; Technology & Infrastructure; AI Operations & Ecosystem; AI Governance, Ethics & Risk; and Data. Each pillar is further divided into detailed categories such as AI vision, talent development, MLOps, compliance, and data readiness, creating a comprehensive map of AI capabilities.

Each category is assessed across four maturity levels: Basic (ad hoc, reactive), Ready (defined but limited), Dynamic (integrated and measured), and Advanced (optimized and continuously improved). This progression reflects a shift from experimentation to institutionalization of AI practices.
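The four-level progression is ordered, which makes it natural to model as an ordered type. A minimal sketch in Python (the level names and descriptions come from the framework; the numeric encoding 1–4 is an assumption for illustration, not something the framework prescribes):

```python
from enum import IntEnum

class MaturityLevel(IntEnum):
    """Four maturity levels from the UNESCO AI Maturity Framework.

    IntEnum gives the levels a total order, so comparisons like
    `current < target` work directly.
    """
    BASIC = 1      # ad hoc, reactive
    READY = 2      # defined but limited
    DYNAMIC = 3    # integrated and measured
    ADVANCED = 4   # optimized and continuously improved
```

Encoding the progression as an `IntEnum` rather than plain strings means a gap analysis can compare levels directly (`MaturityLevel.READY < MaturityLevel.DYNAMIC` is `True`).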

A central component is the Self-Positioning Guide, which walks organizations through a structured evaluation process. This includes reviewing maturity descriptions, providing evidence, identifying strengths and gaps, and defining desired future states. The process culminates in both pillar-level and organization-wide maturity assessments, supported by summary sheets and action planning sections.
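The evaluation steps above (record evidence, position each category on the scale, define a desired future state, roll up to pillar level) can be sketched as a simple data structure. Note that the framework deliberately provides no scoring methodology, so the numeric roll-up below is purely an illustrative assumption, and the class and field names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class CategoryAssessment:
    """One capability area within a pillar, positioned on the 1-4 scale."""
    name: str
    current: int   # 1=Basic .. 4=Advanced
    target: int    # desired future state, chosen by the organization
    evidence: str = ""

    def gap(self) -> int:
        # Distance between where the organization is and where it wants to be.
        return max(self.target - self.current, 0)

@dataclass
class PillarAssessment:
    """Pillar-level summary built from its category assessments."""
    pillar: str
    categories: list[CategoryAssessment]

    def average_current(self) -> float:
        # Illustrative roll-up; the framework itself prescribes no scoring.
        return sum(c.current for c in self.categories) / len(self.categories)

    def open_gaps(self) -> list[CategoryAssessment]:
        # Candidate inputs for the action-planning step.
        return [c for c in self.categories if c.gap() > 0]
```

A usage sketch: an organization that rates "AI vision" at Basic (1) with a target of Dynamic (3) would see that category surface in `open_gaps()`, feeding the summary sheets and action-planning sections the guide describes.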

Notably, the framework integrates governance and ethics as a full pillar, including areas such as risk management, transparency, and compliance with regulations like GDPR or the EU AI Act. It also emphasizes operational maturity through MLOps, monitoring, and integration—areas often missing in high-level governance frameworks.

Overall, the document functions as both a diagnostic tool and a planning instrument, enabling organizations to translate abstract AI ambitions into structured capability development paths.

💡 Why it matters?

This framework fills a critical gap between high-level AI principles and real organizational implementation. It operationalizes “responsible AI” into concrete capabilities and maturity levels, making governance measurable and actionable. For public sector entities—often struggling with fragmentation—it provides a shared language across legal, technical, and strategic teams.

Importantly, it also aligns well with emerging regulatory expectations (e.g., AI Act readiness), even though it is not explicitly a compliance tool. For AI governance professionals, it offers a practical bridge between policy frameworks and execution, especially in areas like risk management, ethics, and lifecycle governance.

❓ What’s Missing

The framework is intentionally non-prescriptive, which is both a strength and a limitation. It does not provide benchmarks, scoring methodologies, or guidance on what “good” looks like across sectors.

There is also limited guidance on prioritization—organizations are left to decide which gaps matter most without structured risk-based weighting.

From an AI governance perspective, the framework could go further in mapping maturity levels directly to regulatory obligations (e.g., AI Act risk categories) or standards (ISO 42001, NIST AI RMF).

Finally, while GenAI is implicitly covered, there is no explicit treatment of foundation models, agentic systems, or emerging risks tied to them.

👥 Best For

Public sector organizations starting their AI journey

AI governance and compliance teams building internal frameworks

Consultants conducting AI readiness or maturity assessments

Organizations needing a structured internal alignment tool across departments

📄 Source Details

UNESCO (2025), developed with Stratejai under the “AI-Ready Flemish Public Administration” initiative, co-funded by the European Commission

📝 Thanks to

UNESCO, Stratejai (Stefano Sedola, Andrea Pescino, Enrico Sartor), and the AI Expertise Centre of Digital Flanders

About the author
Jakub Szarmach

