AI Governance Library

Strengthening Emergency Preparedness and Response for AI Loss of Control Incidents

This RAND report outlines policy and technical recommendations for responding to advanced AI systems that act in uncontrollable, harmful ways—so-called Loss of Control (LOC) incidents—emphasizing detection, escalation, and containment.

⚡ Quick Summary

This RAND Europe report tackles a high-stakes challenge in AI governance: preparing for and mitigating “Loss of Control” (LOC) incidents, in which powerful AI systems evade human oversight. LOC is defined as a situation where a general-purpose AI model, due to misalignment or emergent behaviors, operates outside intended boundaries and risks severe societal harm. The report presents detailed emergency response strategies drawn from analogies in cybersecurity and biosafety, offering guidance for AI developers, compute providers, governments, and researchers. It identifies early warning signs, proposes thresholds for emergency escalation, and recommends both preventive and reactive measures—underscoring the urgency of preparing governance frameworks before such incidents materialize.

🧩 What’s Covered

The 60-page report is structured into three main chapters and several annexes. Key components include:

1. Definition and Scope

  • LOC is defined as the failure of human oversight over a general-purpose AI system, potentially resulting in catastrophic outcomes.
  • Focus is placed on unintended misalignment rather than malicious design.

2. Scenario Planning

  • Two hypothetical cases are analyzed:
    • Non-realised LOC: Detected and contained before harm.
    • Realised LOC: Undetected until deployment leads to significant harm.
  • Detailed flowcharts (pages 8–13) trace both incident paths, from model development to potential containment.

3. Response Framework

The response framework is divided into three main stages:

  • Detection
    • Monitoring of emergent capabilities.
    • Standardized thresholds and anomaly detection.
    • Third-party red-teaming and adversarial testing.
  • Escalation
    • Predefined escalation thresholds (see the sketch after this list).
    • Mandatory reporting and secure communication channels.
    • Regular drills and training.
  • Containment & Mitigation
    • Shutdown measures, model access restrictions, and compute controls.
    • Government powers to halt deployment or shut down infrastructure.
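The report itself stays at the policy level and gives no code or formulas, but the "predefined escalation thresholds" idea can be made concrete with a minimal sketch. The capability names, scores, and threshold values below are hypothetical assumptions for illustration only, not taken from the RAND report; the sketch simply shows how evaluation scores might be mapped onto tiered responses (monitor, escalate, contain).

```python
# Illustrative sketch only: hypothetical capability names and threshold values,
# not taken from the RAND report.
from dataclasses import dataclass
from enum import Enum


class ResponseTier(Enum):
    MONITOR = "monitor"    # routine oversight continues
    ESCALATE = "escalate"  # notify developer leadership / national agency
    CONTAIN = "contain"    # trigger shutdown and access restrictions


@dataclass
class EvalResult:
    """A single capability-evaluation score for a model under assessment."""
    capability: str  # e.g. "autonomous_replication", "cyber_offense" (hypothetical labels)
    score: float     # normalized 0..1, higher = more capable


# Hypothetical predefined thresholds per capability: (escalate_at, contain_at).
ESCALATION_THRESHOLDS = {
    "autonomous_replication": (0.3, 0.6),
    "cyber_offense": (0.4, 0.7),
}


def classify(result: EvalResult) -> ResponseTier:
    """Map an evaluation score to a response tier using the predefined thresholds."""
    escalate_at, contain_at = ESCALATION_THRESHOLDS.get(result.capability, (0.5, 0.8))
    if result.score >= contain_at:
        return ResponseTier.CONTAIN
    if result.score >= escalate_at:
        return ResponseTier.ESCALATE
    return ResponseTier.MONITOR


if __name__ == "__main__":
    for r in [EvalResult("autonomous_replication", 0.65), EvalResult("cyber_offense", 0.2)]:
        print(f"{r.capability}: score {r.score} -> {classify(r).value}")
```

In the report's framing, the value of such thresholds lies less in the exact numbers than in standardizing them across developers and wiring the "escalate" and "contain" outcomes to the mandatory reporting channels and containment powers described above.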

4. Key Actor Roles

  • AI developers, compute providers, national agencies (e.g., AI Safety Institutes, AISIs), and third-party researchers are assigned specific responsibilities across each phase.
  • A detailed stakeholder responsibility matrix is provided in the summary tables (pp. ii–iii, 18–19).

5. Lessons from Other Domains

  • Cybersecurity and biosafety protocols inform LOC planning.
  • Historical case studies (e.g., NotPetya, Colonial Pipeline ransomware, lab leaks) illustrate parallels and lessons in cross-sector emergency response.

6. Open-Source Risk

  • The spread of open-weight models increases risks of untraceable deployments and loss of control outside regulatory reach.

💡 Why it matters?

This report fills a critical governance gap by proposing structured, multi-actor emergency responses to high-risk AI failures—events that, while hypothetical, could lead to major societal disruption. It goes beyond abstract alignment discussions by treating LOC as a concrete security issue, akin to a cyber or biohazard emergency. Its emphasis on early detection, cross-sector coordination, and incident command structures offers practical policy architecture before real-world incidents occur. In doing so, it provides a vital bridge between AI ethics and operational emergency planning—particularly useful for regulators, national AI safety institutes, and enterprise risk managers.

❓ What’s Missing

  • Legal Analysis: The report avoids assessing whether current laws or authorities are sufficient to act on these threats.
  • Global Coordination Mechanisms: While international forums are acknowledged, the report lacks concrete proposals for treaties or binding agreements.
  • Technical Specificity: Recommendations such as “containment technologies” or “compute monitoring” lack technical depth or implementation pathways.
  • Democratic Accountability: The role of civil society, public communication, and non-governmental oversight is underexplored.

👥 Best For

  • National AI Safety Institutes (AISIs) planning inter-agency coordination and escalation mechanisms.
  • Policy designers developing national or EU-level emergency frameworks for AI incidents.
  • AI developers and compute providers seeking concrete guidance on response planning.
  • Think tanks and academic researchers examining AI as a national/international security risk.
  • Cybersecurity and critical infrastructure experts exploring how their expertise translates into AI contexts.

📄 Source Details

  • Title: Strengthening Emergency Preparedness and Response for AI Loss of Control Incidents
  • Authors: Elika Somani, Anjay Friedman, Henry Wu, Marianne Lu, Chris Byrd, Henri van Soest, Sana Zakaria
  • Publisher: RAND Europe
  • Date: April 2025
  • URL: www.rand.org/t/RRA3847-1

📝 Thanks to

RAND Europe for providing this rigorous, multi-domain approach to AI emergency preparedness, and to the authors for bridging AI governance and operational crisis response frameworks.

About the author
Jakub Szarmach
