AI Governance Library

Open Problems in Frontier AI Risk Management

Frontier AI both amplifies existing risks and introduces qualitatively novel challenges… emerging safety practices are often misaligned with established risk management frameworks.

⚡ Quick Summary

This agenda-setting research paper maps unresolved challenges in managing risks from frontier AI systems. Rather than proposing solutions, it systematically identifies where current risk management approaches break down when applied to general-purpose AI. The authors structure these “open problems” across the full risk lifecycle, from planning to mitigation, and classify them into three categories: lack of consensus, misalignment with existing frameworks, and implementation gaps. The report’s core value lies in clarifying where progress is needed and which actors (developers, regulators, researchers) should take responsibility, making it a foundational reference for aligning AI safety and governance efforts.

🧩 What’s Covered

The report follows a classical risk management structure (inspired by ISO 31000 and ISO/IEC 23894), but stress-tests each stage against frontier AI realities. It covers five key phases: risk planning, identification, analysis, evaluation, and mitigation.

In risk planning, it highlights challenges in defining scope, distinguishing intended from unintended use, and setting acceptable risk thresholds, especially given unpredictable deployment contexts. In risk identification, it shows how current approaches focus too narrowly on model capabilities while neglecting affordances, human interaction, and system context.

The risk analysis section dives into limitations of capability evaluations, lack of reproducibility, and difficulties linking model performance to real-world harm. It also raises issues around integrating diverse data sources (evaluations, incidents, usage data) into coherent risk models.

In risk evaluation, the report surfaces problems with defining and applying risk acceptance criteria, aggregating risks, and making deployment decisions under uncertainty.
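The aggregation difficulty is easy to illustrate. Below is a toy sketch (my example, not the report’s method; it assumes independent, quantifiable risks, which frontier AI risks generally are not) showing how several individually acceptable risks can jointly breach a system-level acceptance criterion:

```python
# Illustrative only: toy risk aggregation under an independence
# assumption. All thresholds and probabilities below are hypothetical.

per_risk_threshold = 0.01   # each risk deemed acceptable below 1%
system_threshold = 0.02     # overall acceptance criterion

risk_estimates = [0.008, 0.006, 0.009]  # hypothetical per-risk probabilities

# Every risk passes in isolation...
assert all(p < per_risk_threshold for p in risk_estimates)

# ...but the aggregate probability that at least one harm occurs,
# P(any) = 1 - prod(1 - p_i), can still breach the system criterion.
p_any = 1.0
for p in risk_estimates:
    p_any *= (1.0 - p)
p_any = 1.0 - p_any

print(f"P(at least one harm) = {p_any:.4f}")  # ~0.0228 > 0.02
```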

Finally, risk mitigation addresses challenges in documentation, monitoring, incident reporting, and ecosystem-level coordination, noting that many practices increase transparency without actually reducing risk.

Across all stages, the report identifies who should act and categorizes each problem type, creating a structured research and governance agenda.
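To see how such an agenda could be operationalized, here is a minimal sketch (my illustration; the report does not ship a schema, and all names below are hypothetical) of encoding each open problem with its lifecycle stage, problem category, and responsible actors:

```python
# A minimal sketch, assuming a hypothetical schema: each open problem
# is tagged with a lifecycle stage, a problem category, and the actors
# expected to act on it.
from dataclasses import dataclass
from enum import Enum


class Stage(Enum):
    PLANNING = "risk planning"
    IDENTIFICATION = "risk identification"
    ANALYSIS = "risk analysis"
    EVALUATION = "risk evaluation"
    MITIGATION = "risk mitigation"


class Category(Enum):
    NO_CONSENSUS = "lack of consensus"
    FRAMEWORK_MISMATCH = "misalignment with existing frameworks"
    IMPLEMENTATION_GAP = "implementation gap"


@dataclass
class OpenProblem:
    title: str
    stage: Stage
    category: Category
    actors: list[str]  # e.g. "developers", "regulators", "researchers"


agenda = [
    OpenProblem(
        title="Linking capability evaluations to real-world harm",
        stage=Stage.ANALYSIS,
        category=Category.NO_CONSENSUS,
        actors=["developers", "researchers"],
    ),
]

# Slice the agenda by responsible actor, e.g. to build a research to-do list:
for problem in agenda:
    if "researchers" in problem.actors:
        print(f"{problem.stage.value}: {problem.title}")
```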

💡 Why it matters?

Most current frameworks (ISO, NIST, EU AI Act) assume relatively stable systems and measurable risks. This report shows that frontier AI breaks those assumptions.

It shifts the conversation from “what frameworks exist” to “where they fail in practice.” That’s critical for anyone building or auditing AI governance systems today.

Instead of adding another framework, it creates a shared map of uncertainty—highlighting where consensus is missing, where standards don’t fit, and where implementation simply isn’t working.

For governance professionals, this is a reality check: compliance alone won’t solve frontier AI risks. The real challenge lies in ambiguity, evolving systems, and decision-making under deep uncertainty.

❓ What’s Missing

The report intentionally avoids proposing solutions, which limits its immediate operational usefulness.

It also focuses heavily on safety risk, leaving out broader governance dimensions such as economic, legal liability, or organizational incentives.

Another gap is prioritization—while many open problems are listed, there is no clear hierarchy of which ones are most critical or urgent.

Finally, the document assumes a relatively mature governance audience; less experienced readers may struggle to translate these “open problems” into concrete actions.

👥 Best For

AI governance professionals designing risk frameworks

Policy-makers and regulators shaping frontier AI oversight

AI safety researchers looking for structured research agendas

Organizations building internal risk management processes for advanced AI systems

📄 Source Details

Research paper (February 2026), produced by a multi-institutional group including the Oxford Martin AI Governance Initiative, MIT, Stanford, and others

📝 Thanks to

Marta Ziosi, Miro Plueckebaum, Stephen Casper, Robert Trager, and contributing researchers across leading AI governance and safety institutions

About the author
Jakub Szarmach

