Quick Summary
This paper argues for applying the widely used "Three Lines of Defense" (3LoD) risk management model, long a staple of the financial sector, to AI development. The author shows how the model could clarify roles, avoid coverage gaps, and improve accountability at frontier AI companies like OpenAI and DeepMind. It's a deeply informed, realistic push to bring structured governance to a sector that is often strong on ethics statements and weak on role clarity.
What's Covered
- 3LoD model basics: The IIA's 2020 update anchors the discussion. Roles span three groups:
  - First line (e.g. product and research teams): manages risk directly.
  - Second line (e.g. legal, compliance, and governance teams): offers oversight and support.
  - Third line (e.g. internal audit): provides independent assurance to the board.
- Concrete guidance: The paper maps the 3LoD model onto AI-specific roles and teams, including the CTO/CSO, legal teams, red teams, and internal auditors. It includes sample org charts, risk tools (e.g. RLHF, risk matrices, model cards), and reporting lines.
- Risk rationale: Schuett explains how the model can reduce societal harm by:
  - identifying gaps in responsibility (see the sketch after this list),
  - improving ineffective risk processes, and
  - enabling boards to supervise management credibly.
- Critiques addressed: While acknowledging common criticisms, such as bureaucratic overhead, doubts about audit independence, and misaligned incentives, the paper defends the model as a pragmatic compromise between structure and flexibility.
- Contextual fit: Focuses on frontier model developers (e.g. OpenAI, Anthropic), arguing that the model suits their complex, research-heavy organizations. Also nods to potential regulatory uptake (e.g. the NIST AI RMF, the EU AI Act).
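To make the gap-finding rationale concrete, here is a minimal Python sketch (my own illustration, not from the paper) of how a lab might register 3LoD risk ownership and flag risks that no line currently covers. All team names, risk names, and the `find_uncovered_risks` helper are hypothetical.

```python
from dataclasses import dataclass, field


@dataclass
class Line:
    """One of the three lines of defense; assignments here are illustrative only."""
    name: str                  # e.g. "first", "second", "third"
    teams: list[str]           # org units assigned to this line
    risks_owned: set[str] = field(default_factory=set)  # risks this line covers


def find_uncovered_risks(lines: list[Line], all_risks: set[str]) -> set[str]:
    """Return risks no line has taken ownership of, i.e. gaps in responsibility."""
    covered: set[str] = set()
    for line in lines:
        covered |= line.risks_owned
    return all_risks - covered


if __name__ == "__main__":
    lines = [
        Line("first", ["product", "research"], {"model misuse", "evaluation gaps"}),
        Line("second", ["legal", "compliance", "governance"], {"regulatory non-compliance"}),
        Line("third", ["internal audit"], {"control effectiveness"}),
    ]
    all_risks = {"model misuse", "evaluation gaps", "regulatory non-compliance",
                 "control effectiveness", "dangerous capability release"}
    # Prints {'dangerous capability release'}: a risk with no assigned owner,
    # which is exactly the kind of coverage gap the 3LoD model is meant to surface.
    print(find_uncovered_risks(lines, all_risks))
```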
Why It Matters
AI risk governance is full of high-minded frameworks, but few of them make it into org charts or C-suite KPIs. This paper changes that: it offers a clear, operational structure for assigning who does what, when, and how in managing AI risk.
This is especially urgent for labs developing high-capability systems. As safety incidents, regulatory pressure, and societal expectations mount, these companies need more than responsible AI principles; they need internal accountability mechanisms that boards and investors understand.
And the 3LoD model already has credibility across regulated sectors. It's not exotic. That's part of its strength.
For AI policy leaders, this model offers a way to make organizational governance auditable. For corporate risk teams, it's a roadmap for integrating AI risk into existing ERM systems.
What's Missing
- Small org adaptation: The paper centers on large labs and Big Tech; startups and nonprofits will need stripped-down templates.
- Empirical proof: It's a normative argument. Schuett openly notes the lack of rigorous evidence that the 3LoD model works in AI contexts.
- Automation insight: The paper doesn't tackle how LLMs or agents might themselves affect internal governance, e.g. audit automation or AI-led compliance.
- Public engagement: Focuses inward. There's little on civil society, whistleblowing, or public trust mechanisms.
Best For
- AI leadership teams building internal risk governance
- Boards and audit committees that want credible lines of accountability
- Regulators and standard-setters exploring enforceable governance frameworks
- Academic researchers working at the org-theory/AI interface
Source Details
Title: Three Lines of Defense Against Risks from AI
Author: Jonas Schuett (GovAI, Legal Priorities Project, Goethe University)
Published in: AI & Society, Springer, 2024
DOI: 10.1007/s00146-023-01811-0
License: CC BY 4.0
Length: ~14,000 words / 23 pages
Thanks to Jonas Schuett for connecting AI governance to tested practices in internal risk management. This is one of the most grounded contributions yet on how to make accountability for AI systems more than a slogan.