AI Governance Library

AI Governance Library Newsletter #10: Requirements, Controls, and Everything Else They Forgot

A checklist is not a governance framework. This issue explores the silent compliance killers: false confidence, shallow controls, and audit failure.

✒️ Foreword


It’s all fine and dandy when a shiny new AI regulation drops, or a fresh standard like ISO 42001 makes headlines. Cue the LinkedIn posts, webinars, and “strategic roadmaps.” But then someone at your org asks the real question: “Cool, but… what do we actually have to do?” 

That’s where things get messy—because too often, teams confuse requirements (the “what”) with controls (the “how”), and suddenly everyone’s arguing over whether a policy counts as compliance.

Requirements are not Controls

Just because you’ve copied a clause from ISO into your policy document doesn’t mean you’ve implemented a control. Requirements are not controls. A requirement is an expectation—like “ensure human oversight” or “manage AI-related risks.” It’s a destination, not a roadmap. Controls are the real-world actions, tools, and processes you put in place to meet that expectation. If your policy says “we conduct regular fairness assessments,” but no one actually does them—or worse, no one knows how—then congratulations: you’ve got a requirement masquerading as a control.

Requirements = What Must Be Achieved

Requirements in AI governance are expectations, obligations, or principles that must be met. They often come from laws, standards, or policies that set the target state—without prescribing how to get there.

Let’s look at some examples:

  • ISO/IEC 42001:2023 (AI Management Systems) defines requirements for establishing, implementing, and improving an AI governance system. Clause 6 requires organizations to identify AI-specific risks and opportunities—but doesn’t tell you how to do that.
  • NIST AI Risk Management Framework (AI RMF) defines core functions (GOVERN, MAP, MEASURE, and MANAGE) and outlines characteristics such as validity, reliability, robustness, and accountability. These are high-level requirements that organizations must translate into operational practices.
  • The EU AI Act defines “high-risk” AI systems and sets obligations such as risk management (Art. 9), data governance (Art. 10), transparency (Art. 13), and human oversight (Art. 14). Again—these are the what.
  • The OECD AI Principles establish the need for AI to be inclusive, transparent, robust, and accountable—but intentionally leave implementation open to interpretation.

In each case, the frameworks describe what responsible AI should achieve, not how to get there.

Controls = How You Do It

Controls are the specific technical, procedural, or organizational mechanisms you implement to fulfill those requirements.

  • A requirement might state: “Ensure human oversight of automated decisions” (EU AI Act, Art. 14).
  • A control might be: “All AI decisions above a risk threshold must be reviewed and approved by a designated human reviewer before deployment.” A minimal sketch of such a gate follows below.
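To make the contrast concrete, here is a minimal sketch of such a review gate. The 0.7 threshold, the field names, and the routing labels are illustrative assumptions, not a prescribed implementation of Art. 14.

```python
from dataclasses import dataclass

RISK_THRESHOLD = 0.7  # assumed, organization-specific cut-off


@dataclass
class AIDecision:
    subject_id: str
    outcome: str
    risk_score: float  # produced by your own upstream risk-scoring step


def route_decision(decision: AIDecision) -> str:
    """Route a decision to auto-approval or mandatory human review."""
    if decision.risk_score >= RISK_THRESHOLD:
        # The control in action: no high-impact decision takes effect
        # without sign-off from a designated human reviewer.
        return "pending-human-review"
    return "auto-approved"


print(route_decision(AIDecision("case-042", "declined", 0.83)))  # pending-human-review
```

The point is not the code itself but that the requirement (“ensure human oversight”) now has an observable mechanism behind it.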

Controls should be traceable to a specific requirement and should be risk-adjusted. Some examples:

  • Technical (preventive): Enforcing explainability thresholds in LLM outputs before deployment
  • Administrative: A policy requiring AI model documentation using a model card or datasheet
  • Detective: Monitoring model drift logs or fairness audits over time
  • Corrective: A kill-switch protocol to disable deployed models when anomalies are detected
  • Physical: Secured access to the on-prem compute infrastructure used to train sensitive models
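One way to see the difference between a stated control and an operating one: a detective control should produce evidence on its own. Below is a minimal sketch of a naive drift check over a per-day feature mean; the reference window, the live values, and the z-score alert threshold are all illustrative assumptions, not a recommended monitoring design.

```python
import statistics

# Hypothetical daily means of a single model input feature.
reference = [0.42, 0.45, 0.40, 0.44, 0.43, 0.41]   # baseline window
production = [0.47, 0.52, 0.55, 0.58, 0.61, 0.64]  # live window

baseline_mean = statistics.mean(reference)
baseline_sd = statistics.stdev(reference)

for day, value in enumerate(production, start=1):
    z = (value - baseline_mean) / baseline_sd
    if abs(z) > 3:  # assumed alert threshold
        # In practice the alert would open a ticket and trigger a re-audit,
        # leaving a trail an auditor can inspect later.
        print(f"day {day}: drift alert (z = {z:.1f})")
```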

Why the Confusion Matters

When organizations mix up requirements and controls, things don’t just get messy—they get expensive, ineffective, and risky. Here’s what typically goes wrong:

  • Controls are deployed without clear objectives. You end up implementing flashy tools or elaborate processes with no clear link to a specific requirement. It feels productive, but it’s just noise. You’re spending budget, creating documentation, and burning team time—without reducing risk in a meaningful way. That’s governance theatre.
  • Requirements are assumed to be met because “we have a policy.” This is the classic checkbox trap. Someone writes, “We ensure AI systems are explainable” in a PDF, uploads it to the compliance folder, and calls it a day. But unless there’s a repeatable, evidence-backed process—say, using SHAP values in high-stakes models or performing quarterly explanation quality reviews (see the sketch after this list)—you haven’t implemented a control. You’ve published an aspiration.
  • Audits and assurance efforts fall apart. When internal or external auditors ask, “How do you meet requirement X?” and your answer is, “We have a policy,” that’s not evidence—it’s deflection. Controls must be traceable to specific requirements, and they need to be demonstrable in action. If no one on your team can explain which control maps to which obligation, or how it’s monitored, you don’t have compliance—you have confusion.
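Sticking with the explainability example above, here is a minimal sketch of what “evidence-backed” could look like, assuming the shap and scikit-learn packages and a hypothetical tree-based classifier standing in for the real system:

```python
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Stand-in data and model for a hypothetical high-stakes classifier.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Per-prediction feature attributions: the evidence, not just the policy claim.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:50])

# In a real control, these attributions would be versioned alongside the model
# and reviewed on a fixed cadence, so the quarterly review leaves a paper trail.
```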

And in AI governance, this confusion is multiplied.

Unlike traditional IT compliance, most AI-related requirements are principle-based (e.g., fairness, robustness, transparency) and context-sensitive. That means the same requirement can imply different controls, depending on the use case, the risk profile, and the stakeholders involved.

For example, the requirement to ensure “fairness” in AI might mean:

  • Group-level outcome parity for a public-sector credit scoring tool
  • Explainability-focused documentation for a chatbot
  • Bias impact assessments for a hiring algorithm

Slap the same one-size-fits-all fairness control on all three, and you risk either doing too much (wasting resources) or doing too little (missing the real risk). Worse, you might build a false sense of security—thinking you’re “compliant” when you’re actually exposed.
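For the first of those interpretations, a group-level outcome parity check can be a few lines of code; the decisions, group labels, and the 0.8 ratio threshold (loosely borrowed from the four-fifths rule heuristic) are illustrative assumptions:

```python
from collections import defaultdict

# Hypothetical (group, approved) outcomes from a scoring tool.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

approved, total = defaultdict(int), defaultdict(int)
for group, ok in decisions:
    total[group] += 1
    approved[group] += ok  # True counts as 1

rates = {g: approved[g] / total[g] for g in total}  # approval rate per group
ratio = min(rates.values()) / max(rates.values())   # parity ratio

print(rates, f"parity ratio: {ratio:.2f}")
if ratio < 0.8:  # assumed review threshold
    print("Flag for review: selection rates diverge across groups.")
```

The same few lines would be the wrong control for the chatbot or the hiring algorithm, which is exactly the point.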

In short: confusing requirements with controls leads to wasted effort, missed obligations, and failed audits. And in AI governance—where the stakes include legal liability, reputational damage, and real-world harm—it’s a mistake organizations can’t afford to keep making.

Bottom Line

If you’re building out your AI governance stack—whether you’re aligning with ISO 42001 or the NIST AI RMF, or preparing for the EU AI Act—the path is clear:

  • Start with the requirement: Understand what the framework or regulation is asking.
  • Map the control: Document how your organization achieves that goal in practice.
  • Ensure traceability: Be able to show auditors and stakeholders how each control connects back to its original objective (a minimal register sketch follows below).
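A lightweight way to make that chain visible is a requirement-to-control register. The identifiers, owners, and evidence pointers below are illustrative assumptions, not fields mandated by any of the frameworks mentioned:

```python
# Minimal sketch of a requirement-to-control traceability register.
trace_register = [
    {
        "requirement": "EU AI Act Art. 14 - human oversight of high-risk systems",
        "control_id": "CTRL-014",
        "control": "Risk-threshold review gate; decisions above threshold need sign-off",
        "owner": "ML Platform Team",
        "evidence": "review-queue audit log, exported monthly",
    },
    {
        "requirement": "ISO/IEC 42001 Clause 6 - AI risk and opportunity identification",
        "control_id": "CTRL-031",
        "control": "Quarterly AI risk workshop with a maintained risk register",
        "owner": "AI Governance Office",
        "evidence": "risk register revisions in the GRC tool",
    },
]

# "How do you meet requirement X?" now has a concrete, checkable answer.
for entry in trace_register:
    print(f'{entry["requirement"]} -> {entry["control_id"]} ({entry["evidence"]})')
```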

Compliance is not a checklist. It’s a chain of logic—from need, to action, to evidence.

And that chain breaks the moment you mistake the “what” for the “how.”

—Jakub
Curator, AIGL 📚

☀️Spotlight Resources

AI Governance Framework for India 2025–26, published by the National Cyber and AI Center (NCAIC)

🔍 Quick Summary

A comprehensive AI governance blueprint tailored to India’s regulatory, societal, and technological landscape. The framework offers detailed risk classifications, lifecycle controls, and roadmaps for government, enterprises, and regulators—anchored in constitutional values and aligned with global standards.

📘 What’s Covered

The framework introduces a risk-based, lifecycle-oriented governance model adapted to India’s AI context. Key elements include:

  • Risk Taxonomy: Use cases are categorized as prohibited, high, medium, or low risk, with examples relevant to Indian contexts (e.g., emotion inference banned for employment/credit decisions).
  • Lifecycle Controls: Spanning data governance, model development, pre-deployment evaluation, deployment, monitoring, and decommissioning, each stage includes technical safeguards like provenance, fairness metrics, rollback triggers, and deletion verification.
  • Assurance Mechanisms: A certification framework anchored in ISO/IEC 42001, NIST AI RMF, and EU AI Act principles, supporting third-party audits, technical file maintenance, and transparency reporting.
  • Organizational Roles: Establishes Chief AI Risk Officers (CARO) and AI Risk & Ethics Committees (AIREC), and delineates cross-functional governance responsibilities.
  • Sector-Specific Blueprints: Custom governance paths for banking, healthcare, telecom, manufacturing, and the public sector, with case studies on payment fraud and diagnostic systems.
  • Implementation Roadmaps: Structured 100-day, 12-month, and 24-month rollouts, complete with templates like AI Privacy Impact Assessments (AIPIA), Model Cards, and Evaluation Harnesses.
  • Inclusion and Innovation: Strong emphasis on linguistic diversity, sustainability, indigenous innovation, and digital inclusion, ensuring AI systems are equitable across India’s population.

💡 Why It Matters

This is arguably the most comprehensive national AI governance framework in the Global South to date. Its design is rooted in India’s constitutional values, but its architecture is globally interoperable—bridging local realities with international governance standards.

For regulators, it operationalizes DPDP Act obligations and CERT-In directives into AI-specific workflows. For industry, it sets clear governance benchmarks tied to certification pathways (e.g., ISO 42001), de-risking cross-border AI deployments.

Crucially, it tackles population-scale AI risks (e.g., biometric misuse, digital payment failures, education bias) with precision controls and response protocols—addressing democratic threats like deepfakes and algorithmic manipulation head-on.

Its emphasis on public sector AI (via transparency mandates, sandboxing, appeal rights, and election safety measures) sets a global precedent for citizen-centered AI deployment.

🧩 What’s Missing

Despite its technical depth, enforceability remains ambiguous. While it aligns with India’s data and cyber laws, the framework lacks clarity on how oversight will be enacted across states and sectors—or what the role of courts, regulators, and citizens will be in challenging or reviewing AI system impacts.

Further, while it integrates global standards, differences in legal traditions (e.g., absence of an EU-style AI Act or binding fundamental rights protections) could limit alignment in practice. There’s also limited guidance for SMEs and startups, which may struggle with the cost and complexity of implementation.

Additionally, some key governance elements, like public participation in high-risk system approval, access to model documentation for affected individuals, and cross-border AI impact assessments, are only implied, not fully developed.

AI Governance: A Framework for Responsible and Compliant Artificial Intelligence

🔍 Quick Summary

A practical, legally grounded guide for organizations seeking to align AI deployment with the EU AI Act and other regulatory frameworks. Designed for legal, compliance, and operational teams, it bridges law and implementation with clear actions and Microsoft case studies.

📘 What’s Covered

This 31-page white paper offers a comprehensive roadmap for responsible AI governance, with a strong focus on compliance with the EU AI Act and related legal frameworks such as GDPR, DORA, and the CER Directive.

Key highlights include:

  • Risk-based AI classification under the AI Act: prohibited, high-risk, limited-risk, and minimal-risk systems, with clear explanations and sector examples (e.g., biometric systems, credit scoring tools).
  • Detailed breakdown of roles and responsibilities: providers, deployers, importers, and manufacturers, with a focus on deployers as the most common role for organizations.
  • Coverage of GPAI models, systemic risk thresholds, and the new Code of Practice for compliance with Article 56.
  • Analysis of non-legal risks: ethical, environmental (e.g., energy consumption of LLMs), operational, reputational, and privacy-related.
  • Four pillars of governance: transparency, AI literacy, security & robustness, and human oversight—each paired with actionable steps, Microsoft examples, and risk mitigation goals.
  • Best practices and implementation tips: from appointing AI Champions to internal training, policy drafting, and procurement strategies.
  • Challenges and mitigation strategies for resistance, innovation vs. control, tooling gaps, and legal fragmentation.
  • A robust section on compliance tips, including risk classification, transparency duties, testing protocols, and documentation for high-risk AI.

Throughout, the guide is enriched with practical tools, compliance checklists, Microsoft resources (Trust Center, Copilot guidance), and sector-specific scenarios.

💡 Why It Matters

This white paper stands out by translating the legal theory of AI governance into organizational practice. It doesn’t just describe regulatory obligations—it shows how to meet them. The emphasis on cross-functional collaboration, with shared responsibility across legal, IT, and operations, reflects the real-world complexity of AI compliance.

Its inclusion of Microsoft’s operational practices, including risk assessments and Responsible AI Standards, gives readers a blueprint that scales across organization sizes. Particularly useful are the guidance on deployers’ responsibilities (often overlooked in legal literature) and the breakdown of personal data risks at input, output, and user levels, connecting AI governance directly to GDPR.

In a fast-evolving regulatory landscape, the paper offers timely, jurisdiction-specific insight for Polish and EU organizations, backed by references to the Polish draft AI law and EDPB opinions. For practitioners navigating AI Act readiness, this is more than a white paper—it’s a tactical manual.

🧩 What’s Missing

While the guide is well-written and practical, it would benefit from:

  • Visual tools: A risk matrix, RACI charts for deployer roles, or a lifecycle compliance checklist would improve usability.
  • Case examples beyond Microsoft: The focus on Microsoft is informative but narrow. Broader industry references (e.g., healthcare, finance, government) would enrich applicability.
  • Deeper sectoral mapping: Although sectoral obligations are mentioned, the guide does not delve into how AI Act overlaps with DORA, MDR, or PSD2 in practice.
  • Global comparison: Given the extraterritorial scope of the AI Act, a side-by-side view with frameworks like NIST RMF or OECD principles would help global organizations.
  • Templates or appendices: Sample policies, DPIA checklists, or transparency notices would make the guide even more actionable.

OWASP AI Maturity Assessment (AIMA) – Version 1.0, August 2025

🔍 Quick Summary

The OWASP AI Maturity Assessment (AIMA) provides a comprehensive, open-source framework to evaluate and improve the governance, security, and trustworthiness of AI systems. It’s a much-needed extension of OWASP SAMM tailored for AI, bridging high-level Responsible AI principles with concrete engineering practices.

📘 What’s Covered

The AIMA framework introduces eight assessment domains across the AI lifecycle:

  1. Responsible AI – Fairness, transparency, and societal impact
  2. Governance – Strategy, compliance, and role-specific education
  3. Data Management – Quality, integrity, and data governance
  4. Privacy – Minimization, Privacy by Design, and user control
  5. Design – Threat modeling, security architecture, and requirements
  6. Implementation – Secure build, deployment, and defect management
  7. Verification – Security testing, requirement traceability, and architecture reviews
  8. Operations – Incident, event, and operational management

Each domain contains two maturity streams: Create & Promote (Stream A) and Measure & Improve (Stream B), with three levels of maturity. Worksheets with yes/no criteria facilitate both lightweight and detailed assessments.

AIMA is aligned with standards like the EU AI Act, NIST AI RMF, and ISO 42001, and builds on sister OWASP projects such as:

  • OWASP Top 10 for LLMs
  • OWASP AI Security & Privacy Guide
  • OWASP ML Security Top 10
  • OWASP AI Exchange 

It is intended for CISOs, AI/ML engineers, legal/risk teams, product leaders, and auditors, offering a role-aware, measurable approach to AI assurance.

💡 Why It Matters

AIMA directly addresses what many governance tools miss: the operationalization of Responsible AI. Where other maturity models stop at principles, AIMA goes further—connecting fairness or transparency goals with repeatable actions, ownership structures, metrics, and continuous improvement pathways.

This makes it uniquely suitable for:

  • Building AI governance programs from scratch
  • Mapping compliance readiness (e.g., for the EU AI Act)
  • Tracking incremental progress in real AI/ML teams
  • Aligning cross-functional teams around shared security and ethical goals

Its two-stream architecture is particularly elegant—allowing organizations to grow both capability and accountability in parallel. AIMA also avoids the trap of vendor lock-in. As a community-driven, open-source tool, it remains adaptable and practical, even for smaller orgs.

🧩 What’s Missing

Despite its strength, AIMA still leaves some questions open:

  • Industry-specific adaptation is not yet detailed—e.g., healthcare, finance, or defense may need deeper domain overlays.
  • Scoring granularity (e.g., what constitutes “Yes” for partial adoption) could benefit from more examples or thresholds.
  • There’s no built-in risk prioritization model—every practice is treated equally, though some (like data governance or deployment) may carry more immediate risk.
  • It lacks tooling guidance—which OSS or commercial tools best support each maturity step (e.g., for drift detection or fairness auditing)?
  • Integration with product development frameworks (like MLOps) could make the model even more actionable in engineering pipelines.

🌙 After Hours

Before there were data standards, there were stew standards.

🥣 The Oldest Recipes Ever Written

Researchers have translated the oldest known culinary recipes — written in cuneiform on clay tablets over 3,800 years ago in ancient Babylonia. We’re talking actual instructions for meat stews, broth preparations, and grain-based meals. One even includes directions for a “Royal Dish” that sounds suspiciously close to a very early risotto.

Ingredients, Process, Provenance

What makes this more than a historical curiosity is how much structure these recipes had. They included ingredients, preparation steps, and in some cases, a kind of outcome expectation — even without exact measurements. It’s a primitive, analog version of a standardized protocol. Which is, honestly, the same debate we’re having around model documentation today.

Also: some of the dishes were recently recreated by food historians. Verdict? “Surprisingly edible.” Which is more than you can say about some AI model outputs.

Clay Tablets = Cookbooks = Governance Docs?

It’s a stretch, but not a big one. Recipes are a form of knowledge encoding — passed on, interpretable, repeatable, and valuable. Like good documentation, they’re only useful if someone can actually follow them.

Have a favorite old recipe (or governance metaphor hiding in a cookbook)? Share it with me — extra points if it’s older than your CRM.

🖼️ The Face That Follows You

In 1649, Swedish engraver Claude Mellan created a haunting image of the Sudarium of Saint Veronica — the cloth that, legend says, bears the true face of Christ. What’s wild? Mellan did it using one single spiraling line. No crosshatching. No erasing. Just one continuous stroke, from the tip of the nose outward.

Nothing but lines and vibes

Zoom in, and it’s just lines. Zoom out, and the face feels like it’s looking back at you. It’s weirdly digital for a 17th-century artifact — like a proto-vector file that stares into your soul. Also: a great reminder that the format of the message matters just as much as the content.

Seen any other historical pieces that feel like accidental governance art? Send them over — I’m on the lookout.
