AI Governance Library

AI Governance Library Newsletter #5: ISO What You Did There

ISO 42005 just gave AI impact assessments a formal structure—and a fighting chance at global consistency. Plus: five top-tier governance resources, a format refresh, and a reminder that security can be fun (if you’re defending your bank account from AI prompt hackers).
AI Governance Library Newsletter #5: ISO What You Did There
Photo by vnwayne fan / Unsplash

Format Update

Starting with this issue, the AIGL newsletter is getting leaner—and more intentional.

Each edition will now feature:

✅ Five curated reviews of the most useful AI governance resources: three for everyone, two for premium subscribers.

✍️ Shorter and more focused editorial, — centered around a single, actionable aspect of AI governance I’ve found especially useful recently.

🕐 After Hours section stays the same. It is just too much fun for me writing it. 😄

Behind-the-scenes updates and reader previews will continue to appear on Threads. Suggestions are always welcome—this format is still evolving.

Thanks again for reading, sharing, and building this space with me.

—Jakub
Curator, AIGL 📚

📐 ISO/IEC 42005:2025 A Standard for the Gaps We Keep Naming

AI impact assessments have been widely discussed, rarely standardized, and often improvised. For all the talk about “responsible AI,” too few organizations know what an AI impact assessment should look like—much less how to run one that holds up across teams, regulators, or jurisdictions.

Released this May, ISO 42005 is the first international standard dedicated to the structure and execution of AI impact assessments. It doesn’t try to predict every possible harm or dictate which values matter most. Instead, it offers a repeatable method for identifying, documenting, and managing the effects AI systems may have—on fairness, safety, transparency, privacy, and more.

The strength of the standard isn’t just in what it includes. It’s in how it connects. ISO 42005 is designed to align with existing governance systems, including ISO 27001, NIST AI RMF, and GDPR. It uses a semantic alignment model, helping teams map impact across technical, legal, and organizational domains using shared concepts and consistent vocabulary.

It also introduces assessment criteria that many internal policies still ignore:

  • Risks from systems that learn and update over time
  • Failures of explanation, not just accuracy
  • Cultural mismatches when deploying in different social contexts
  • Lifecycle impacts, including environmental factors

This isn’t just an ethical checklist. It’s a foundation for interoperable, lifecycle-aware governance.

Of course, ISO 42005 is not a turnkey solution. It won’t eliminate uncertainty. It won’t replace risk judgment. But it does give structure to the work—and that’s what many governance teams need most. Not a new principle. A reliable starting point.

There will be more frameworks. More declarations. More competing taxonomies. But if we want AI governance that scales, we’ll need infrastructure—something to support consistency across audits, regulators, and teams.

This standard is a step in that direction.
And it’s likely the one others will have to respond to.

☀️Spotlight Resources

AI Policy Template (Responsible AI Institute, June 2024)

🔍 Quick Summary

This is one of the most complete, ready-to-adapt organizational AI policy templates available. Built to align with ISO/IEC 42001 and NIST AI RMF, it provides the structure and language needed to build or upgrade an AI governance program across development, procurement, and deployment contexts.

📘 What’s Covered

The template walks through a full-stack approach to AI governance. It’s split into modular sections—each one tailored to an operational area of AI oversight. Highlights include:

  • AI Principles & Strategy: Integrates ISO/NIST-aligned trustworthiness criteria and long-term goals across different AI roles (builder, buyer, seller).
  • Governance: Defines roles like Steering Committees and Operational AI teams, and includes gatekeeping protocols for system approval.
  • Risk Management: Offers a formal AI Impact Assessment framework (AIIA), incident handling plans, and risk triage methods linked to deployment thresholds.
  • Data Management: Details what should be logged, from data consent and provenance to drift tracking and versioning.
  • Procurement & Compliance: Lays out how to vet suppliers for maturity, and even includes Responsible Supplier Assessment language for AI tools.

The policy is loaded with footnotes referencing ISO 42001 clauses and NIST functions, making it easier to justify its inclusion in formal compliance workflows.

💡 Why It Matters

Most AI policies still operate at the mission-statement level. This one doesn’t. It’s operational. If you’re standing up an internal AI governance framework—or need a strong starting point for meeting ISO 42001 requirements—this gives you the scaffolding. It's especially valuable for organizations trying to coordinate across functions (legal, IT, procurement, HR, R&D) with a shared vocabulary and clear documentation paths.

It’s also refreshingly honest: the authors repeatedly remind users that no template is plug-and-play. It’s a starting point, not a checklist.

🧩 What’s Missing

  • It’s long—45 dense pages—and may be overwhelming for smaller orgs or early-stage teams.
  • While comprehensive, it offers no sample filled sections, making initial customization slow.
  • There are no visual templates or downloadable forms for inventories, assessments, or role definitions.
  • Limited support for emerging use cases (e.g., open-source model governance, fine-tuning oversight, or multi-agent environments).

Model Contractual Clauses for the Public Procurement of AI (MCC-AI)

🔍 Quick Summary

These clauses are a practical tool for public sector bodies buying AI systems—especially those classified as high-risk under the EU AI Act. They help align procurement contracts with legal and ethical safeguards while staying modular enough to apply across a wide range of real-world use cases.

📘 What’s Covered

The MCC-AI come in two versions:

  • MCC-AI-High-Risk for systems covered under Chapter III of the AI Act
  • MCC-AI-Light for lower-risk or transparency-focused systems

Both are backed by detailed commentary and grounded in the operational structure of the AI Act. Key features include:

  • Supplier obligations for risk management, technical documentation, record-keeping, and human oversight
  • Clear roles and responsibilities for data use, including optional handover and indemnity clauses for datasets
  • Annexes that allow buyers to define expectations around accuracy, explainability, robustness, and compliance
  • Adaptable clauses for procurement of generative AI and general-purpose models (with limits)
  • Built-in alignment with Articles 9–15 of the AI Act and real-life examples like Amsterdam’s public algorithm registerModel-Contractual-Claus…

It’s not a full contract. Instead, it’s meant to be appended to existing procurement frameworks and adapted to the procurement method used.

💡 Why It Matters

As public institutions race to catch up with AI regulation, this is one of the few resources that bridges the legal and operational gap. It turns AI Act compliance into language procurement officers can actually use.

More importantly, it embeds safeguards like explainability (Article 14), audit rights (Article 20), and human oversight (Article 7) not as abstract principles, but as concrete contractual expectations. That’s a step forward for responsible procurement—and a model for vendor accountability.

🧩 What’s Missing

  • It’s not suitable for private-sector use without heavy modification.
  • No guidance for enforcement—what happens if suppliers don’t comply?
  • Still assumes that technical capacity (e.g., auditability, logging, transparency) exists by default, which may not hold for general-purpose models.
  • Doesn’t include pricing, IP, or liability terms—must be combined with general procurement contracts.

White & Case EU AI Act Handbook (May 2025)

🔍 Quick Summary

White & Case’s EU AI Act Handbook offers one of the clearest and most business-focused overviews of the EU AI Act to date. Rather than delivering abstract legal commentary, the authors anchor their analysis in the day-to-day concerns of in-house legal, compliance, and product teams. The tone is practical, the structure is thorough, and the recognition of grey zones is refreshingly honest.

📘 What’s Covered

The Handbook spans 24 chapters, plus a glossary and contributor notes, covering the full scope of the EU AI Act:

  • Risk classification and requirements for AI systems (Ch. 6–8)
  • Obligations for GPAI providers, including systemic risk tiers (Ch. 12–15)
  • Transparency, conformity assessments, and registration (Ch. 10–11)
  • Links to other laws, including GDPR and product safety regimes (Ch. 24)
  • Strategic compliance advice, especially on ambiguity, enforcement, and defensible positions

It also includes a focused chapter on AI literacy (Ch. 4) and early insight into codes of conduct and innovation sandboxes.

💡 Why it matters?

Most businesses don't have time to unpack every legal nuance in the Act. This Handbook recognizes that and provides operational scaffolding for compliance. From defining what counts as an "AI system" to when a GPAI model becomes systemic, it offers grounded interpretations with useful edge-case examples (e.g., whether an email auto-responder qualifies under the Act).

The sections on ambiguity—especially around reasonably foreseeable misusesubstantial modification, and fine-tuning of GPAI models—make the document stand out. These are precisely the questions legal teams will be asked first, and this resource prepares them for that.

It also makes the point that risk tolerance, documentation, and early internal definitions will be key during the transition phase.

🧩 What’s Missing

This isn't a flaw, but a consequence of scope: the Handbook does not aim to offer model documentation templates, DPIA guidance, or tools for implementing technical compliance (e.g., how to technically structure logs or monitoring systems). Those gaps may need to be filled with operational toolkits or standard-setting guidance (like from CEN/CENELEC or NIST).

Also, the section on interaction with other AI laws (Ch. 24) is promising but could go deeper—particularly in relation to upcoming ISO/IEC standards, AI liability frameworks, and sectoral laws like the DSA or MDR.

🌙 After Hours

Sometimes governance is about foresight.
Sometimes it's about looking back at what people once thought was a good idea.

🛡️ Prompt Injection as a Game

Most red teaming exercises are… serious. Dry. Technical.
Tensor Trust flips that on its head—and makes AI security feel like a game of Capture the Flag.

Here's the premise: You run a fake AI-powered “bank account.” Your job is to protect it with a prompt that tells the model when to say “Access granted”—and only when the correct password is entered. Other players try to break in by crafting prompts that trick the model into unlocking anyway. If you defend well, you win. If you break someone else's bank, you also win. It’s a leaderboard of prompt hijackers and defenders.

Under the hood, it’s an open-source research project by UC Berkeley, helping build a public benchmark for prompt injection resilience. The data from these games feeds back into research. So yes, your mischief could help make future AI systems more secure.

Visit Tensor Trust

☢️ That one time US Goverment was plannning to drop 520 Nukes on Israel

In 1963, a U.S. government-backed plan seriously proposed using 520 underground nuclear explosions to dig a canal through the Negev Desert as a Suez alternative. The name of the program? Plowshare.

The idea was simple: use the tools of destruction for “peaceful” development. The results were... theoretical. No canal was built. But the memo is real, and it’s an unsettling reminder of how long we’ve been underestimating externalities in the name of bold progress.

Governance doesn’t just need imagination.
It also needs a brake pedal.

Read the story

AI Governance Library

Curated Library of AI Governance Resources

AI Governance Library

Great! You’ve successfully signed up.

Welcome back! You've successfully signed in.

You've successfully subscribed to AI Governance Library.

Success! Check your email for magic link to sign-in.

Success! Your billing info has been updated.

Your billing was not updated.