AI Governance Library

Governing AI Agents: Cascading Risks, Coordinated Action

Agents don't just generate content; they act, adapt, and influence digital and physical environments across extended time horizons, with compounding and potentially irreversible consequences.

⚡ Quick Summary

This working paper by Amlan Mohanty explores how agentic AI systems fundamentally reshape the risk landscape of AI governance. Instead of focusing on static risks, it introduces a dynamic “cascading risk” perspective—showing how failures can propagate across individuals, organisations, and even geopolitical systems. The paper argues that existing governance frameworks are structurally misaligned with agentic systems, which combine autonomy, tool use, adaptability, and long-term action. It identifies five unresolved governance tensions (liability, oversight, access, cross-border issues, and alignment) and proposes six policy directions. The core message is clear: governance must evolve from model-centric to system- and impact-centric thinking—urgently.

🧩 What’s Covered

The paper is structured as a forward-looking governance analysis grounded in real-world trajectories and expert interviews. It begins by identifying five defining features of agentic systems—autonomy, tool use, multi-step execution, influence, and adaptability—and links each to specific governance challenges. This is one of its strongest contributions: moving beyond abstract definitions toward functional characteristics.

A central concept is the cascading risk framework, which reframes AI risk as dynamic and multi-layered. Instead of asking "what can go wrong?", the paper asks "who is impacted, and how does harm propagate?" It maps risk across five levels: individual, organisational, national, societal, and geopolitical. The retail banking case study (pp. 19–20) illustrates how a single vulnerability in one agent can trigger a chain reaction—fraud, system compromise, cross-border data leakage, and geopolitical tension.

The paper also explores near-term scenarios: agent swarms, multi-agent ecosystems, synthetic media dominance, and AI-enabled cyber operations. These are not speculative but grounded in current deployment trends.

On governance, it identifies five unresolved issues:
– unclear liability across the value chain
– diminishing feasibility of human oversight
– expanding system and data access risks
– lack of cross-border governance mechanisms
– persistent alignment challenges

It concludes with six policy directions, including holistic risk assessment, voluntary commitments, standards development, observability (e.g. provenance tracking), user empowerment, and international coordination.

💡 Why It Matters

This paper shifts the conversation from “AI systems as tools” to “AI systems as actors.” That shift is critical for governance.

Most existing frameworks—including the EU AI Act—assume bounded, use-case-based risks. This paper shows why that assumption breaks down in agentic contexts. When systems act autonomously across time, tools, and environments, risk is no longer local—it becomes systemic and compounding.

The cascading risk model is especially valuable for practitioners. It provides a mental model that aligns with how real failures unfold in complex systems (similar to financial contagion or supply chain disruptions). This makes it highly applicable for risk managers, regulators, and enterprise governance teams.

Equally important is the emphasis on observability over explainability—a pragmatic shift that reflects real operational needs.

❓ What’s Missing

The paper is conceptually strong but less operational in parts.

It stops short of translating its frameworks into concrete implementation tools (e.g. metrics, audit templates, or governance workflows). While it recommends observability and standards, it does not provide actionable guidance on how organisations should implement them today.

There is also limited engagement with existing regulatory instruments (e.g. EU AI Act, ISO 42001) beyond high-level references. A more explicit mapping between the proposed framework and current compliance obligations would strengthen its practical utility.

Finally, while the cascading risk model is compelling, it would benefit from additional sector-specific case studies beyond banking.

👥 Best For

AI governance professionals designing enterprise frameworks
Policy advisors working on next-generation AI regulation
Risk and compliance leaders dealing with AI deployment
Researchers exploring agentic systems and systemic risk
Advanced practitioners moving beyond model-level governance

📄 Source Details

Amlan Mohanty, Associate Fellow at the Centre for Responsible AI (CeRAI), IIT Madras; Working Paper, March 2026

📝 Thanks to

Amlan Mohanty and the Centre for Responsible AI (CeRAI) for a timely and sharply framed contribution to agentic AI governance.

About the author
Jakub Szarmach
