⚡ Quick Summary
This Issue Brief by CSET offers a meta-level contribution to AI governance debates. Instead of proposing another governance model, it dissects existing frontier AI governance proposals by identifying the assumptions they rely on to function. The authors analyze five influential, U.S.-centric proposals from industry, civil society, academia, and government, asking what risks matter, who governs, and whether proposed tools can realistically achieve their goals. The core insight is that many disagreements in AI governance are not about values or objectives, but about implicit assumptions regarding institutional capacity, technical feasibility, and actor competence. By making these assumptions explicit, the report equips policymakers with a practical method to navigate uncertainty, build coalitions, and invest in shared prerequisites such as talent, auditing ecosystems, and information-sharing frameworks.
🧩 What’s Covered
The report introduces an assumption-based analytic framework tailored to AI governance proposals, structured around four guiding questions: why govern, what to govern, who governs, and how to govern. These questions are used to decompose five prominent proposals addressing frontier AI systems, defined as models at or beyond the current state of the art. The selected proposals span OpenAI’s internal frontier risk approach, the AI Now Institute’s Zero Trust AI Governance, an academic proposal on frontier AI regulation, California’s SB-1047, and a U.S. Senate framework on AI-enabled extreme risks.
A substantial portion of the report is devoted to identifying shared assumptions across these proposals. These include the beliefs that frontier AI may pose catastrophic risks, that governments should hold primary oversight responsibility, and that effective governance depends on the availability of skilled public-sector talent, third-party auditors, and operationalizable standards. The analysis further sorts assumptions into four categories: those concerning actor capacity, the effectiveness of specific techniques (such as compute monitoring or watermarking), necessary processes (such as risk management frameworks and incident reporting), and useful information or actions (including disclosures, licensing, and compute thresholds).
The report also highlights areas of weak consensus. While most proposals agree on the need for risk management frameworks and information sharing, there is far less alignment on which technical mechanisms actually work: the proposals differ in how much confidence they place in techniques such as watermarking, attributing harms to specific systems, or preventing model leakage. The document closes by examining assumptions unique to individual proposals, including OpenAI's emphasis on deployment-stage risk management and SB-1047's strong assumption that AI systems and access to compute can be fully shut down in an emergency.
💡 Why it matters?
This report is especially valuable because it reframes AI governance debates away from slogans and toward implementation realism. By focusing on assumptions rather than prescriptions, it gives policymakers and governance practitioners a tool to stress-test proposals before adopting them. It also clarifies that many governance failures will stem not from poor intentions, but from overestimated institutional capacity, immature auditing ecosystems, or unproven technical controls. For anyone working on the EU AI Act, national AI strategies, or corporate AI governance programs, this approach is directly transferable and helps prioritize investments in shared enablers rather than prematurely locking in rigid rules.
❓ What’s Missing
The analysis is explicitly U.S.-centric and does not engage with EU, UK, or Global South governance models, which limits its immediate applicability in non-U.S. regulatory contexts. The report also remains descriptive rather than evaluative: it surfaces assumptions but does not systematically assess which are most fragile or most in need of policy intervention. Additionally, while the framework is highly useful for policymakers, the report offers limited guidance for private-sector governance teams seeking to operationalize assumption-based analysis internally.
👥 Best For
Policymakers designing or comparing AI governance frameworks, AI governance and policy professionals, regulatory strategists, and researchers seeking a structured way to analyze and compare AI governance proposals without immediately endorsing one model over another.
📄 Source Details
Issue Brief published by the Center for Security and Emerging Technology (CSET), November 2025. Authors: Mina Narayanan, Jessica Ji, Vikram Venkatram, Ngor Luong.
📝 Thanks to
The authors acknowledge a broad group of reviewers from policy, security, and AI governance communities who contributed feedback and critical review, reflecting the report’s strong interdisciplinary grounding.