⚡ Quick Summary
The 2025 Peregrine Report is one of the most ambitious attempts to date to map the near-term action space for AI risk reduction under fast-advancing capability timelines. Based on interviews with 48 senior experts and a closed retreat, it distills 208 concrete initiatives across eight domains, each evaluated through the lens of urgency, feasibility, and potential impact within a two-year horizon. Rather than arguing over whether transformative AI will arrive soon, the report treats short timelines as a planning assumption and asks a harder question: what should be built, funded, or coordinated now if those timelines are correct? The result is not a theory of AI safety but a portfolio of interventions that policymakers, funders, labs, and operators can act on immediately. The report is intentionally pragmatic, prioritizing readiness, coordination, and execution over conceptual completeness.
🧩 What’s Covered
The report is structured around a portfolio of 208 initiatives grouped into eight domains: Technical AI Alignment Research; Evaluation & Auditing Systems; Intelligence Gathering & Monitoring; AI Governance & Policy Development; International Coordination; Preparedness & Response; Public Communication & Awareness; and Miscellaneous. Each initiative is framed as a concrete project rather than an abstract recommendation, often with clear analogies to existing safety-critical domains like aviation or cybersecurity.
A major strength is the breadth of governance-relevant proposals. These include AI incident reporting systems modeled on aviation safety, compute governance and monitoring mechanisms, agent identity and reputation systems, cross-border notification channels for AI emergencies, and verification infrastructure to authenticate models and track compute usage. The report also emphasizes intelligence-gathering capabilities such as AI-enabled monitoring of emerging capabilities, detection of system-level risks in multi-agent environments, and early-warning frameworks for rapid capability acceleration.
Beyond the initiative list, the report synthesizes four cross-cutting strategic constraints repeatedly raised by experts: lack of readiness and slow execution cycles; fragmentation and weak coordination across actors; absence of shared standards and interoperable audit interfaces; and severe capacity constraints, particularly in evaluation talent and operational leadership. Methodology, participant demographics, and disclosures are transparently documented, reinforcing the report’s credibility while acknowledging its limits.
💡 Why It Matters
This report matters because it shifts the AI governance conversation from abstract alignment debates to operational preparedness. It treats governance, monitoring, and response capabilities as infrastructure problems that must be built before they are needed. For anyone working on AI policy, safety, or assurance, it provides a rare, system-level view of how technical, institutional, and geopolitical interventions interact under time pressure. Importantly, it recognizes that regulation alone is insufficient without verification tools, shared interfaces, and real-time intelligence. In that sense, the Peregrine Report functions as a bridge between AI safety research and implementable governance capacity.
❓ What’s Missing
While the report is intentionally action-oriented, it offers limited guidance on prioritization within and across domains. Readers are left to infer which initiatives are most critical, most tractable, or most dependent on careful sequencing. The corporate and enterprise deployment perspective is also underdeveloped, with most initiatives framed at the level of frontier labs, governments, or philanthropic actors. Additionally, the report largely assumes high levels of international cooperation, without deeply engaging with failure modes in which coordination breaks down or becomes strategically contested.
👥 Best For
Policy advisors and regulators building AI oversight capacity, philanthropic funders allocating capital for AI safety and governance, AI labs and safety teams seeking concrete project ideas, and researchers interested in the operational side of AI risk mitigation rather than purely theoretical alignment work.
📄 Source Details
The 2025 Peregrine Report: 208 Expert Proposals for Reducing AI Risk. Authors: Maximilian Schons, Samuel Härgestam, Gavin Leech, Raymund Bermejo. Published in 2025, produced in collaboration with and supported by Halcyon Futures.
📝 Thanks to
Thanks to the Peregrine Project team and the interviewed experts from AI labs, AI Safety Institutes, research organizations, and public institutions who contributed candid insights under the Chatham House Rule.