AI Governance Library

Ahead of the Curve: Governing AI Agents Under the EU AI Act

“If [AI agents] live up to their promise, they may soon become highly capable digital coworkers or personal assistants… moving AI ‘through the chat window and into the real world.’” — Ahead of the Curve: Governing AI Agents Under the EU AI Act, p. 7

🧩 Quick Summary

This landmark policy report sets out the most detailed interpretation yet of how AI agents are covered by the EU AI Act. It dissects the regulatory implications for AI systems built on general-purpose models with systemic risk (GPAISRs), introduces a practical value-chain approach to accountability, and proposes a four-pillar governance framework: risk assessment, transparency tools, technical deployment controls, and human oversight.

📚 What’s Covered

  • State of AI Agents: Trends, capabilities, market dynamics, and the rise of GPAISR-powered agents (e.g. Claude, Operator).
  • Risk Amplification: Autonomous planning + real-world actions = new harms (e.g. multi-agent collusion, manipulation, misuse).
  • AI Act Applicability: Agents as systems, GPAI systems, and high-risk systems depending on context and provider intent.
  • Value Chain Governance: A granular breakdown of roles for model providers, system providers, and deployers.
  • Compliance Measures: Operationalisation of AI Act obligations through 10 sub-measures (e.g. agent IDs, shutdown protocols, permission scaffolding).

💡 Why It Matters

This is the first truly comprehensive regulatory mapping of “agentic” AI to the EU AI Act—and it’s done with rigor. It gives policymakers, developers, and auditors a shared vocabulary to address accountability fragmentation in a fast-moving space. The governance matrix alone (p. 29–30) should become a go-to reference for internal compliance teams. Crucially, the paper doesn’t just analyze—it proposes practical tooling (e.g., agent cards, checkpoint systems, agent-specific red-teaming protocols).

🕳️ What’s Missing

  • Legal certainty gaps: While it flags open questions (e.g., when agents are high-risk GPAI systems), the analysis hinges on interpretations of “intended purpose” that still need clarification from the AI Office or courts.
  • Limited coverage of physical agents: The report focuses heavily on software-based agents and virtual environments. Robotics use cases are underexplored.
  • Real-world adoption barriers: The report says little about SME capacity, enforcement burdens, or cost implications.

👥 Best For

  • EU compliance officers preparing AI agent audits
  • Tech policy teams shaping Annex III updates
  • AI researchers designing multi-agent systems with deployment in mind
  • Procurement leads needing clarity on agent risk classification

📝 Source Details

Title: Ahead of the Curve: Governing AI Agents Under the EU AI Act

Authors: Amin Oueslati & Robin Staes-Polet

Organization: The Future Society

Date: June 2025

Link: thefuturesociety.org

🙏 Thanks to Amin, Robin, and The Future Society for this timely and ambitious contribution.

About the author
Jakub Szarmach

AI Governance Library
