
AI Verify Testing Framework (for Traditional and Generative AI)


⚡ Quick Summary

The AI Verify Testing Framework is a highly operational, audit-ready governance toolkit developed by Singapore’s IMDA and the AI Verify Foundation. It translates high-level AI ethics principles into concrete, testable requirements across 11 domains, including transparency, safety, fairness, and accountability. What makes this framework stand out is its structure: every requirement is broken down into outcomes, processes, and evidence—making it directly usable for internal compliance, external audits, and regulatory alignment. It is equally applicable to traditional ML systems and generative AI, with tailored controls where necessary. In practice, this is less a “framework” and more a full governance checklist that bridges policy and implementation.

🧩 What’s Covered

The framework is built around 11 core principles aligned with global standards (OECD, EU, ASEAN, US), including transparency, explainability, reproducibility, safety, security, robustness, fairness, data governance, accountability, human oversight, and societal impact.

Each principle is operationalised through a consistent structure:

  • Outcome: what the organisation should achieve
  • Process: actions required to reach the outcome
  • Evidence: documentation or metrics proving compliance

This design turns abstract governance into auditable controls.
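As a concrete illustration of that structure, here is a minimal sketch of how a team might encode one such control in code. The schema and the example control are hypothetical: the framework defines the outcome/process/evidence structure, but not any particular data format.

```python
from dataclasses import dataclass, field

@dataclass
class Control:
    """One testable requirement, mirroring the framework's
    outcome/process/evidence structure (field names are illustrative)."""
    principle: str                                       # e.g. "Transparency"
    outcome: str                                         # what to achieve
    processes: list[str] = field(default_factory=list)   # actions to get there
    evidence: list[str] = field(default_factory=list)    # artefacts proving it

disclosure = Control(
    principle="Transparency",
    outcome="Users are informed when they interact with an AI system",
    processes=[
        "Add an AI disclosure notice to user-facing interfaces",
        "Review the disclosure wording with legal and comms",
    ],
    evidence=[
        "Screenshot of the deployed disclosure notice",
        "Sign-off record from the review",
    ],
)
```

Keeping controls in a structured form like this makes it straightforward to inventory them, track gaps, and generate the documentation an audit would ask for.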

The framework goes deep into implementation detail. For example:

  • Transparency includes requirements like AI disclosure, model cards, user communication, and incident reporting.
  • Explainability covers feature attribution methods, model selection trade-offs, and XAI techniques.
  • Reproducibility focuses on version control, data lineage, logging, and audit trails across the ML lifecycle.
  • Safety introduces red-teaming, materiality assessments, risk thresholds, and safeguards for harmful outputs.
  • Security aligns with established practices (e.g., CIA triad, supply chain security, incident response).
  • Robustness includes adversarial testing, drift monitoring, and real-world validation (a drift-check sketch follows this list).
  • Fairness provides both technical metrics (e.g., parity measures; see the worked example after this list) and governance processes (e.g., stakeholder consultation).
  • Data Governance emphasises provenance, quality, regulatory compliance, and IP considerations.
  • Accountability defines governance structures, AI policies, supplier oversight, and lifecycle auditability.
  • Human Oversight operationalises human-in-the-loop design and escalation pathways.
  • Societal Impact extends into environmental and social assessments.
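To make the fairness bullet concrete: below is a minimal sketch of one common parity measure, the demographic parity difference. The framework points to parity measures in general; this particular implementation and its variable names are illustrative, not taken from AI Verify.

```python
import numpy as np

def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute gap in positive-prediction rates between two groups.
    What gap counts as acceptable is a governance decision,
    not a property of the metric."""
    rate_a = y_pred[group == 0].mean()  # positive rate, group 0
    rate_b = y_pred[group == 1].mean()  # positive rate, group 1
    return float(abs(rate_a - rate_b))

# Toy example: eight decisions across two groups of four.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])  # model decisions
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # protected attribute
print(demographic_parity_difference(y_pred, group))  # 0.5
```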
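Likewise for the robustness bullet: drift monitoring is often implemented with a distribution-shift statistic such as the population stability index (PSI). The sketch below is one conventional formulation; the 0.2 alert threshold is an industry rule of thumb, not an AI Verify requirement.

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a reference sample (e.g. training
    data) and a live sample. Larger values mean more drift."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)  # guard against log(0)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)  # training-time feature values
live = rng.normal(0.5, 1.0, 10_000)      # shifted production values
print(psi(baseline, live) > 0.2)         # True -> flag for investigation
```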

Importantly, the framework integrates technical testing, evaluation, verification, and validation (TEVV) with process governance, making it one of the few resources that truly connects engineering validation with compliance documentation.

💡 Why It Matters

This framework solves one of the biggest gaps in AI governance: the translation of principles into evidence. Many organisations understand what “responsible AI” means conceptually—but struggle to prove it. AI Verify provides a ready-made structure for demonstrating compliance, especially in regulated environments or under the EU AI Act.

It is particularly valuable because it aligns governance with audit logic. Instead of asking “Do we have fairness?”, it asks: What is the metric? What process ensures it? What evidence proves it? That shift is critical for organisations moving from policy to implementation.
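In code terms, that shift might look like the following: a check that takes a computed metric, compares it against a threshold the governance team has set, and appends an evidence record to an audit log. Everything here (the control name, the 0.1 threshold, the log file) is a hypothetical illustration, not part of AI Verify.

```python
import datetime
import json

def run_fairness_check(metric_value: float, threshold: float = 0.1) -> dict:
    """Answers 'Do we have fairness?' as metric + process + evidence:
    the metric is computed upstream, this process compares it to a
    policy threshold, and the returned record is the evidence."""
    record = {
        "control": "fairness.demographic_parity",   # hypothetical control ID
        "metric_value": metric_value,
        "threshold": threshold,
        "passed": metric_value <= threshold,
        "checked_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    with open("evidence_log.jsonl", "a") as f:  # append-only evidence trail
        f.write(json.dumps(record) + "\n")
    return record

print(run_fairness_check(0.05))  # passes and persists an evidence record
```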

It also reflects a growing global trend: governance frameworks are converging toward testability and assurance, not just principles. In that sense, AI Verify is a preview of what future certification and conformity assessments may look like.

❓ What’s Missing

The framework is extremely detailed, but that is also its limitation. It lacks prioritisation or risk-tiering guidance—everything is presented with similar weight, which can overwhelm smaller organisations.

It also does not clearly map controls to specific regulatory obligations (e.g., EU AI Act articles), which would significantly increase its usability in legal contexts.

Another gap is operational integration: while it defines “what” to do, it provides limited guidance on “how to implement this efficiently” (e.g., tooling, automation, governance workflows).

Finally, while generative AI is included, some sections still feel adapted from traditional ML thinking rather than fully reimagined for agentic or autonomous systems.

👥 Best For

  • AI governance leads building internal control frameworks
  • Compliance and risk teams preparing for audits or certification
  • AI product teams needing structured governance checklists
  • Regulators and auditors assessing organisational AI maturity

Less suitable for early-stage startups or purely conceptual learning

📄 Source Details

Developed by the Infocomm Media Development Authority (IMDA) and AI Verify Foundation (Singapore), 2025 edition.

📝 Thanks to

IMDA and AI Verify Foundation for one of the most operational AI governance frameworks currently available

About the author
Jakub Szarmach

AI Governance Library, a curated library of AI governance resources
