Three Lines of Defense Against Risks from AI

A strategic proposal to professionalize AI risk governance.

🔍 Quick Summary

This paper argues for applying the widely used “Three Lines of Defense” (3LoD) risk management model—long a staple of the financial sector—to AI development. The author shows how the model could clarify roles, avoid coverage gaps, and improve accountability in frontier AI companies like OpenAI and DeepMind. It’s a deeply informed, realistic push to bring structured governance to a sector that’s often strong on ethics statements and weak on role clarity.

📘 What’s Covered

  • 3LoD model basics: The Institute of Internal Auditors’ (IIA) 2020 update anchors the discussion. Roles span three groups:
    • The first line (e.g. product and research teams) manages risk directly.
    • The second line (e.g. legal, compliance, and governance teams) offers oversight and support.
    • The third line (e.g. internal audit) provides independent assurance to the board.
  • Concrete guidance: The paper maps the 3LoD model onto AI-specific roles and teams, including the CTO/CSO, legal teams, red teams, and internal auditors. It includes sample org charts, risk tools (e.g. RLHF, risk matrices, model cards), and reporting lines.
  • Risk rationale: Schuett explains how the model can reduce societal harm by
    1. Identifying gaps in responsibility (see the coverage-check sketch after this list),
    2. Improving ineffective risk processes, and
    3. Enabling boards to supervise management credibly.
  • Critiques addressed: While acknowledging common criticisms—such as bureaucratic overhead, doubts about audit independence, and misaligned incentives—the paper defends the model as a pragmatic compromise between structure and flexibility.
  • Contextual fit: Focuses on frontier model developers (e.g. OpenAI, Anthropic), arguing the model suits their complex, research-heavy organizations. Also nods to potential regulatory uptake (e.g. the NIST AI Risk Management Framework, the EU AI Act).
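
To make the gap-identification rationale concrete, here is a minimal sketch (ours, not Schuett’s) of how a 3LoD responsibility mapping could be expressed as data and checked mechanically for coverage gaps. All team names and risk categories are hypothetical illustrations, not taken from the paper.

```python
# Minimal sketch, not from the paper: a 3LoD responsibility mapping
# expressed as data so unowned risks can be flagged automatically.
# All team names and risk categories below are hypothetical.

RISKS = ["model misuse", "data leakage", "capability jump"]

OWNERS = {
    "first line":  {"model misuse": "product team",
                    "data leakage": "research team"},
    "second line": {"model misuse": "compliance",
                    "data leakage": "legal",
                    "capability jump": "governance team"},
    "third line":  {"model misuse": "internal audit"},
}

def coverage_gaps(risks, owners):
    """Yield (line_of_defense, risk) pairs that lack an assigned owner."""
    for line, mapping in owners.items():
        for risk in risks:
            if risk not in mapping:
                yield line, risk

for line, risk in coverage_gaps(RISKS, OWNERS):
    print(f"Gap: no {line} owner for '{risk}'")
```

Once the mapping is explicit, finding unowned risks becomes a mechanical query rather than a judgment call, which is exactly the kind of role clarity the paper argues the 3LoD model provides.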

💡 Why It Matters

AI risk governance is full of high-minded frameworks—but few make it into org charts or C-suite KPIs. This paper changes that. It offers a clear, operational structure for assigning who does what, when, and how in managing AI risk.

This is especially urgent for labs developing high-capability systems. As safety incidents, regulatory pressure, and societal expectations mount, these companies need more than responsible AI principles—they need internal accountability mechanisms that boards and investors understand.

And the 3LoD model already has credibility across regulated sectors. It’s not exotic. That’s part of its strength.

For AI policy leaders, this model offers a way to make organizational governance auditable. For corporate risk teams, it’s a roadmap for integrating AI risk into existing enterprise risk management (ERM) systems.

🧩 What’s Missing

  • Small org adaptation: The paper centers large labs and Big Tech. Startups and nonprofits will need stripped-down templates.
  • Empirical proof: It’s a normative argument. Schuett openly notes the lack of rigorous evidence that the 3LoD works in AI contexts.
  • Automation insight: The paper doesn’t tackle how LLMs or agents might themselves affect internal governance—e.g. audit automation, AI-led compliance.
  • Public engagement: Focuses inward. There’s little on civil society, whistleblowing, or public trust mechanisms.

👥 Best For

  • AI leadership teams building internal risk governance
  • Boards and audit committees that want credible lines of accountability
  • Regulators and standard-setters exploring enforceable governance frameworks
  • Academic researchers working at the org-theory/AI interface

📌 Source Details

Title: Three Lines of Defense Against Risks from AI

Author: Jonas Schuett (GovAI, Legal Priorities Project, Goethe University)

Published in: AI & Society, Springer, 2024

DOI: 10.1007/s00146-023-01811-0

License: CC BY 4.0

Length: ~14,000 words / 23 pages

🙏 Thanks to Jonas Schuett for connecting AI governance to tested practices in internal risk management. This is one of the most grounded contributions yet on how to make accountability for AI systems more than a slogan.

About the author
Jakub Szarmach

AI Governance Library: Curated Library of AI Governance Resources