⚡ Quick Summary
This MIT FutureTech report offers a structured overview of 11 frameworks at the intersection of traditional risk management and AI safety, categorized into four types: risk management translations, maturity models, novel approaches, and emerging practices. The scan aims to reduce duplication, enhance awareness, and support cross-pollination between domains. With examples from frontier AI, cybersecurity, aviation, and nuclear safety, it helps practitioners and researchers map existing work, understand strengths and gaps, and align their practices with existing methodologies. The frameworks are global in origin, developed between 2023 and 2025, and most are open-access.
🧩 What’s Covered
The report organizes 11 frameworks into four key categories:
- Risk Management Translation (5 frameworks) – These adapt established methods from domains like cybersecurity and nuclear safety to the unique challenges of frontier AI. Examples include:
  - SaferAI’s 2025 Framework, which integrates red-teaming and risk modeling with traditional mitigation and governance structures.
  - GovAI’s 2023 review, which translates tools like bowtie analysis and system-theoretic process analysis to AI contexts.
  - Berkeley’s GPAIS profile, aligned with ISO and NIST standards.
  - CLTR’s Three Lines of Defense (3LoD) for governance in frontier AI firms.
- Maturity Models (2 frameworks) – These assess how mature an organization’s AI risk management practices are.
  - SaferAI’s 2024 model rates AI developer maturity by scoring risk management controls (see the first sketch after this list).
  - Dotan et al.’s NIST-based model uses questionnaires to evaluate adherence to sociotechnical harm mitigation standards.
- Novel Approaches (3 frameworks) – These introduce fundamentally new methods.
  - Affirmative Safety proposes requiring developers to prove safety before deployment.
  - Probabilistic Risk Assessment (PRA) adapts quantitative tools from nuclear safety to AI (see the second sketch after this list).
  - AI Hazard Management offers a taxonomy for identifying root causes of AI failure modes.
- Emerging Practice (1 framework) – UK DSIT’s 2023 document compiles real-world safety practices into a non-prescriptive “menu” of options used by AI labs, civil society, and academia.
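To make the maturity-model idea concrete, here is a minimal sketch of how scored risk controls could be rolled up into an overall maturity rating. The control areas, weights, and rating bands are illustrative assumptions for this summary, not SaferAI’s actual rubric.

```python
# Illustrative only: aggregate per-control scores (0-4) into a maturity rating.
# Control areas, weights, and rating bands are hypothetical, not SaferAI's rubric.

CONTROLS = {  # weight reflects an assumed relative importance of each control area
    "risk_identification": 0.3,
    "risk_analysis": 0.3,
    "mitigation": 0.2,
    "governance": 0.2,
}

BANDS = [(3.5, "Advanced"), (2.5, "Established"), (1.5, "Developing"), (0.0, "Ad hoc")]


def maturity_rating(scores: dict[str, float]) -> tuple[float, str]:
    """Return the weighted average control score and the maturity band it falls into."""
    total = sum(CONTROLS[name] * scores[name] for name in CONTROLS)
    label = next(band for threshold, band in BANDS if total >= threshold)
    return round(total, 2), label


if __name__ == "__main__":
    example = {"risk_identification": 3, "risk_analysis": 2,
               "mitigation": 2, "governance": 1}
    print(maturity_rating(example))  # (2.1, 'Developing')
```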
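Similarly, the core PRA logic can be sketched as combining an estimated frequency and severity for each hazard scenario into an expected-harm figure. The scenarios and numbers below are invented for illustration and are not drawn from the framework itself.

```python
# Illustrative PRA-style calculation: expected harm = sum over scenarios of
# (estimated annual frequency) x (estimated severity). All values are hypothetical.

scenarios = [
    # (name, frequency per year, severity on an arbitrary 0-100 harm scale)
    ("model weight exfiltration", 0.02, 90),
    ("harmful capability misuse", 0.10, 60),
    ("critical deployment failure", 0.30, 20),
]

expected_harm = sum(freq * severity for _, freq, severity in scenarios)
for name, freq, severity in scenarios:
    print(f"{name:30s} contributes {freq * severity:5.1f}")
print(f"total expected harm: {expected_harm:.1f}")
```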
The report also describes its methodology—snowball sampling from seed frameworks, reference mining, and monitoring expert social media posts. It emphasizes the value of making frameworks more discoverable, supporting collaboration, and avoiding duplicated efforts.
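As a rough illustration of the snowball-sampling step, the process can be thought of as a breadth-first expansion from seed frameworks through the works they reference. The citation map and function below are hypothetical stand-ins, not the authors’ actual pipeline.

```python
from collections import deque

# Hypothetical citation map: framework -> works it references.
# In the actual scan this information would come from reference mining.
references = {
    "seed_framework_a": ["framework_b", "framework_c"],
    "framework_b": ["framework_d"],
    "framework_c": [],
    "framework_d": ["framework_b"],
}


def snowball(seeds: list[str]) -> set[str]:
    """Breadth-first expansion: start from seeds, follow references until nothing new appears."""
    found, queue = set(seeds), deque(seeds)
    while queue:
        current = queue.popleft()
        for cited in references.get(current, []):
            if cited not in found:
                found.add(cited)
                queue.append(cited)
    return found


print(snowball(["seed_framework_a"]))  # all four frameworks are discovered
```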
💡 Why it matters?
This report bridges a critical gap between AI safety and traditional risk management, two fields that have developed largely in isolation. By surfacing and organizing these frameworks, it equips AI labs, policymakers, and risk professionals with tools that are both rigorous and adaptable. For AI governance professionals, it reduces the need to reinvent the wheel, offering a curated menu of tried-and-tested methodologies reimagined for the AI era. This is especially relevant as AI regulations (e.g., the EU AI Act) call for “appropriate” risk frameworks without dictating specifics; this report provides the missing link.
❓ What’s Missing
- Comparative analysis: While the categorization is useful, the report doesn’t assess which frameworks are most mature, actionable, or widely adopted.
- Implementation guidance: There are no case studies or examples of real-world use of these frameworks.
- Scalability concerns: How well do these frameworks scale across sectors or apply beyond frontier AI use cases?
- Evaluation metrics: There’s no clear set of evaluation criteria for selecting between frameworks based on organizational needs.
👥 Best For
- AI Governance and Risk Officers in frontier labs and large AI-deploying enterprises
- Policy teams looking to map or recommend existing tools (e.g., OECD, EU AI Office)
- Researchers seeking to avoid duplicating work in risk taxonomy or PRA development
- Consultants building AI safety programs and responsible AI assurance frameworks
📄 Source Details
Title: Mapping Frameworks at the Intersection of AI Safety and Traditional Risk Management
Authors: Alexander K. Saeri, Peter Slattery, Jess Graham
Institution: MIT AI Risk Index & FutureTech
Date: February 2025
Link: [Paperpile Folder – included in report]
Format: Evidence Scan (9 pages)
Access: Open access
📝 Thanks to
The MIT AI Risk Index and FutureTech team for making this evidence scan publicly available. Special recognition to SaferAI, GovAI, CLTR, and UK DSIT for the foundational frameworks reviewed here.