AI Governance Library

AI Verify Testing Framework

This resource presents one of the most comprehensive, actionable testing frameworks for AI governance in practice today. It was developed by Singapore’s Infocomm Media Development Authority (IMDA) in collaboration with the AI Verify Foundation.

📖 What’s Covered

The framework is structured around 11 AI governance principles:

  1. Transparency
  2. Explainability
  3. Repeatability/Reproducibility
  4. Safety
  5. Security
  6. Robustness
  7. Fairness
  8. Data Governance
  9. Accountability
  10. Human Agency & Oversight
  11. Inclusive Growth, Societal & Environmental Well-being

Each principle is broken down into:

  • Outcomes: What good looks like.
  • Processes: Actions required to achieve that outcome.
  • Evidence: Documentation and artifacts needed to validate that the process is in place.

The document spans over 100 pages, offering granular process checks and documentation prompts for both traditional and generative AI contexts, including hallucination testing, red teaming, model versioning, auditability, and impact assessments.
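As a rough illustration (not part of the framework itself), the Outcome → Process → Evidence structure lends itself to a simple machine-readable checklist. The schema and example entries below are hypothetical, sketched only to show how a team might track evidence collection against process checks:

```python
from dataclasses import dataclass, field

@dataclass
class ProcessCheck:
    """One testable process under a principle (hypothetical schema)."""
    description: str
    evidence: list[str] = field(default_factory=list)  # artifacts collected so far

    @property
    def satisfied(self) -> bool:
        # A check counts as satisfied only once supporting evidence is attached.
        return bool(self.evidence)

@dataclass
class Principle:
    name: str
    outcome: str
    checks: list[ProcessCheck]

    def completion(self) -> float:
        """Fraction of process checks backed by at least one evidence artifact."""
        if not self.checks:
            return 0.0
        return sum(c.satisfied for c in self.checks) / len(self.checks)

# Illustrative (made-up) slice of a Transparency-style principle.
transparency = Principle(
    name="Transparency",
    outcome="Users know when they are interacting with an AI system.",
    checks=[
        ProcessCheck(
            "Disclose AI use at the point of interaction",
            evidence=["ui-disclosure-screenshot.png"],
        ),
        ProcessCheck("Document intended use and limitations"),
    ],
)

print(f"{transparency.name}: {transparency.completion():.0%} of checks evidenced")
# → Transparency: 50% of checks evidenced
```

A structure like this makes the framework's evidence requirements queryable, which is the kind of lightweight tooling the framework itself leaves to implementers.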


💡 Why It Matters

This framework goes well beyond policy aspirations. It turns AI governance principles into repeatable, verifiable actions and documentation—a huge step for teams implementing governance under regulatory pressure. It’s especially valuable for:

  • Demonstrating readiness for audits under the EU AI Act or other regional frameworks.
  • Aligning development practices with global norms (OECD, NIST, ASEAN, etc.).
  • Introducing structured oversight to high-risk or foundation model development.
  • Enabling external auditing and internal compliance tracking without guesswork.

Its relevance to both traditional and generative AI makes it a rare hybrid tool that bridges legacy systems and emerging frontier technologies like LLMs.


🧩 What’s Missing?

While the framework is extremely detailed, it assumes a relatively mature level of organizational capacity:

  • Small teams or startups may find the depth overwhelming without further guidance or templates.
  • Some sections (e.g., Inclusive Growth or Societal Well-being) still rely on subjective interpretations without robust metrics or scoring rubrics.
  • No automation layer is provided—using the framework still requires manual effort to complete checklists and document evidence.

👥 Best For

  • Compliance leads, legal teams, and risk officers looking for auditable controls.
  • Product and engineering teams who need to build governance-by-design into system development.
  • External auditors seeking structured, internationally aligned evaluation tools.
  • AI Act implementers in organizations preparing for high-risk AI use cases as the EU AI Act’s obligations phase in.

📚 Source Details

Title: AI Verify Testing Framework – For Traditional and Generative AI
Publisher: Infocomm Media Development Authority (IMDA) & AI Verify Foundation
Year: 2025
Length: 118 pages

About the author
Jakub Szarmach
