Model Risk Management (Certification Scheme V1.5 – ForHumanity)

A certification framework to assess and assure the responsible governance of AAA (Artificial Intelligence, Algorithmic, Autonomous) systems in financial institutions, aligned with BASEL III, SR11-7, and modern AI regulations.

⚡ Quick Summary

ForHumanity’s Model Risk Management Certification Scheme v1.5 extends traditional financial model governance to the realm of AAA systems—AI, algorithmic, and autonomous technologies. Tailored for regulated financial institutions, the scheme adapts legacy frameworks like BASEL III and US supervisory guidance (SR11-7, SR13-19) to the distinct risks posed by AI deployments in high-stakes financial environments. It integrates with ForHumanity’s modular certification ecosystem and mandates alignment with a foundational CORE AAA Governance scheme. The binary (compliant/non-compliant) structure supports independent audits and creates a transparent “infrastructure of trust,” helping financial actors demonstrate rigorous AI risk management to regulators and the public alike.

🧩 What’s Covered

The document outlines a comprehensive certification framework comprising:

  • Scope & Applicability: Targets regulated financial institutions using AAA systems for activities subject to BASEL II/III requirements and US supervisory oversight (e.g., OCC, Federal Reserve). It excludes systems prohibited by law and includes General Purpose AI when used in production settings.
  • Integration with CORE Governance: Certification requires prior compliance with ForHumanity’s CORE AAA Governance, covering 16 governance pillars including expert oversight, explainability, decommissioning, and change management (illustrated on page 6).
  • Modular Compliance Design: Enables layering of certifications (e.g., GDPR, Children’s Code) for multi-jurisdictional harmonization.
  • “Infrastructure of Trust”: Mimics financial audits by ensuring segregation of duties, high auditor standards, and rigorous documentation across lifecycle stages.
  • Audit Process: Annual re-certification, scope definition via the “Target of Evaluation,” and criteria-based evaluation with standardized evidence types (e.g., contracts, logs, public disclosures).
  • Terminology & Definitions: Extensive glossary (~6 pages) covers terms like Algorithmic Risk Committee, Causal Hypothesis, Residual Risk, and Explainability+, ensuring interpretability and harmonization.
  • Criteria Catalog: The bulk of the document (over 30 pages) presents binary audit criteria across more than 20 categories, such as:
    • Top Management and Oversight Bodies: Governance structures, training, risk acceptance protocols.
    • Model Risk Management: Validation reports, conceptual soundness, resource sufficiency, record-keeping.
    • Data Management & Explainability: Requirements for user experience, the exercise of individual rights, traceability, and disclosures.
    • Human Oversight: Thresholds for pause/reversal mechanisms and fiduciary duties.
    • Decommissioning: Criteria to sunset systems when risk exceeds thresholds.

Each criterion includes evaluation methods (e.g., internal logs, procedure manuals, public disclosures) and sometimes specific procedural instructions (e.g., ethical thresholds, stakeholder duty mappings).
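
To make the binary pass/fail structure concrete, here is a minimal sketch of how a compliance team might record criteria and supporting evidence internally. All names and fields are hypothetical illustrations, not structures prescribed by the scheme:

```python
from dataclasses import dataclass, field

@dataclass
class Evidence:
    """One piece of audit evidence, e.g. a contract, internal log, or public disclosure."""
    kind: str       # evidence type, e.g. "internal log" or "procedure manual"
    reference: str  # locator for the auditor: document ID, URL, or storage path

@dataclass
class Criterion:
    """A single binary audit criterion (hypothetical representation)."""
    criterion_id: str
    category: str    # e.g. "Model Risk Management" or "Human Oversight"
    description: str
    evidence: list[Evidence] = field(default_factory=list)
    compliant: bool = False  # binary outcome: compliant / non-compliant

def scheme_compliant(criteria: list[Criterion]) -> bool:
    """Certification is pass/fail: every criterion in scope must be compliant."""
    return all(c.compliant for c in criteria)

# Example: one criterion backed by a validation report.
mrm_01 = Criterion(
    criterion_id="MRM-01",
    category="Model Risk Management",
    description="A validation report assesses the model's conceptual soundness.",
    evidence=[Evidence(kind="internal log", reference="validation-report-2024-Q1")],
    compliant=True,
)
print(scheme_compliant([mrm_01]))  # True only when all in-scope criteria pass
```

The all-or-nothing aggregation mirrors the scheme's binary design: there is no partial credit, which is what makes the result legible to regulators and third-party auditors.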

💡 Why it matters?

This scheme offers a robust and implementable pathway for aligning financial AI systems with emerging legal, ethical, and technical expectations. As AI in finance rapidly evolves, institutions face growing scrutiny over opaque, potentially harmful decision-making. The framework not only bridges traditional model governance (SR11-7, BASEL III) with AI-specific risks but also enables independent audits, addressing the accountability gap highlighted by regulators worldwide. By operationalizing explainability, mitigating model drift, and embedding human oversight, it supports sustainable AI adoption. Most critically, it helps institutions demonstrate legal and ethical conformity in high-risk deployments like credit scoring, fraud detection, and automated trading.

❓ What’s Missing

  • Implementation Examples: The scheme lacks real-world case studies or illustrations showing how institutions successfully applied the certification in practice.
  • Crosswalk Tables: There is no mapping of criteria to specific articles of laws like the EU AI Act or SR11-7, which would aid regulatory alignment.
  • Tooling Guidance: There are no recommended tools, platforms, or templates for logging, monitoring, or evidence generation (see the sketch after this list for what such a template might look like).
  • Sector-Specific Adaptation: While tailored to finance, the scheme could benefit from industry-specific annexes (e.g., insurance vs. retail banking).
  • AI Lifecycle Models: More explicit integration with standard AI lifecycle models (e.g., CRISP-DM, ISO/IEC 22989) would improve applicability.
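
On the tooling point, a hypothetical evidence-log template is sketched below. This is one illustration of the kind of guidance the scheme could supply; the function and field names are assumptions, not part of the scheme:

```python
import json
from datetime import datetime, timezone

def evidence_log_entry(criterion_id: str, kind: str, reference: str, author: str) -> str:
    """Serialize one evidence record as timestamped JSON for audit traceability."""
    entry = {
        "criterion_id": criterion_id,  # the audit criterion this evidence supports
        "evidence_kind": kind,         # e.g. "internal log", "contract", "public disclosure"
        "reference": reference,        # document ID, URL, or storage path
        "recorded_by": author,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(entry, indent=2)

print(evidence_log_entry("MRM-01", "internal log", "validation-report-2024-Q1", "model-risk-team"))
```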

👥 Best For

  • Risk & Compliance Officers in financial institutions deploying AI
  • Internal Audit and Model Validation Teams
  • Third-Party Auditors & Certifying Bodies
  • AI Ethics Committees within banks or FinTechs
  • Regulatory Policy Analysts shaping supervisory practices
  • Consultants helping banks navigate AI assurance and certification

📄 Source Details

  • Title: Model Risk Management Certification Scheme v1.5
  • Publisher: ForHumanity, a 501(c)(3) public charity
  • Release: 2022–2024
  • Length: 60 pages
  • Availability: Creative Commons license (CC BY-NC-ND)
  • Access: https://forhumanity.center

📝 Thanks to

The ForHumanity community of more than 2,900 contributors, and especially the teams working on SR11-7 adaptation, CORE AAA Governance, and Independent Audit of AI Systems.

About the author
Jakub Szarmach
