⚡ Quick Summary
This paper outlines actionable governance strategies for organizations preparing to implement the EU AI Act. Drawing from interdisciplinary research and practical case insights, it distinguishes between compliance and governance, and calls for AI-specific roles, internal escalation routes, stakeholder engagement, and strong institutional memory. It’s a guide for public and private actors aiming to go beyond checkbox compliance.
🧩 What’s Covered
The authors propose a governance-centered approach to the AI Act, focusing on long-term institutional resilience and meaningful risk management over one-off compliance efforts. Key components include:
1. Defining Governance vs. Compliance
- Governance = embedding values, processes, roles, and escalation routes
- Compliance = ensuring obligations are formally met
- Warns that AI Act implementation may become overly legalistic or risk-averse without active governance framing
2. Five Governance Design Areas
The report introduces a practical blueprint for AI Act implementation with five core design elements:
- Internal Role Assignment & Resourcing
  - Clearly defined responsibilities (AI compliance officers, technical staff, risk owners)
  - Dedicated teams or cross-functional task forces
  - Avoid overburdening DPOs or IT leads
- Institutional Memory & Knowledge Management
  - Encourage transparent documentation of previous assessments
  - Maintain traceability of decision-making and risk rationales
  - Invest in tool-supported memory retention (e.g. internal wikis, structured templates)
- Escalation Routes & Ethics Boards
  - Formalized mechanisms for risk flagging and override authority
  - Create spaces where technical, legal, and social concerns can surface early
  - Recommend standing committees or ethics review bodies with real power
- Stakeholder Engagement
  - Institutionalize civil society input and user feedback
  - Consider impacted individuals, not just affected users
  - Go beyond GDPR-style notice-and-comment toward collaborative governance
- Enforceable Accountability Structures
  - Clarity on when and how governance failures are addressed
  - Mechanisms for remediation and continuous improvement
  - Risk of “compliance theatre” if no real accountability exists
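The institutional-memory and escalation points above could be supported by something as simple as a structured decision record. The paper itself does not provide templates, so the following is only an illustrative sketch; all field names (e.g. `risk_rationale`, `escalated`) are assumptions, not terms from the report:

```python
from dataclasses import dataclass, field, asdict
from datetime import date

@dataclass
class RiskDecisionRecord:
    """Illustrative (hypothetical) structured template for logging an AI risk decision,
    so that the rationale and accountable role remain traceable later."""
    system_name: str
    decision: str                    # e.g. "approved with mitigations"
    risk_rationale: str              # why this risk level was judged acceptable
    decided_by: str                  # accountable role, e.g. "AI compliance officer"
    escalated: bool = False          # was the concern raised via a formal escalation route?
    stakeholders_consulted: list = field(default_factory=list)
    decided_on: str = ""

# Example entry for a fictional system
record = RiskDecisionRecord(
    system_name="benefits-eligibility-scorer",
    decision="approved with mitigations",
    risk_rationale="High-risk use case; bias audit and human review required",
    decided_by="AI compliance officer",
    escalated=True,
    stakeholders_consulted=["works council", "civil society advisory panel"],
    decided_on=str(date(2024, 11, 1)),
)

# asdict() turns the record into a plain dict, ready for a wiki page or registry export
print(asdict(record))
```

The point of such a template is not the tooling but the discipline: each decision carries its rationale, its accountable role, and whether an escalation route was used, which is exactly the traceability the report calls for.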
3. Public Sector as AI Governance Laboratory
- Governments and public bodies should lead by example in developing governance templates
- Public agencies have stronger transparency obligations and public legitimacy
- Suggest open-sourcing governance structures and templates for wider reuse
4. Risk Classification is Governance-Dependent
- Risk assessments are not purely technical—they require value-laden judgments
- Governance structures should influence how risk thresholds are interpreted and operationalized
5. Accountability Culture Over Formalism
- Warns against purely procedural responses to AI Act compliance
- Advocates for bottom-up trust, organizational reflection, and empowered dissent
💡 Why it matters?
This paper provides the clearest bridge yet between AI policy and institutional governance design. It’s not about how to check the box, but how to build the systems that decide what goes in the box. As the AI Act moves toward enforcement, governance quality will determine whether compliance efforts are effective—or brittle and reactive.
❓ What’s Missing
- No implementation templates or tooling suggestions (e.g. maturity models, org charts, risk registry examples)
- Governance in SMEs or resource-constrained environments is mentioned but not deeply explored
- Doesn’t map recommendations directly to specific AI Act articles—so readers must connect them manually
- The role of third-party auditors or harmonised standards in shaping governance isn’t addressed
👥 Best For
- Public bodies preparing to lead by example in AI Act implementation
- AI governance professionals designing cross-team coordination models
- Legal and compliance leads who want to prevent their orgs from defaulting into checkbox thinking
- Civil society and advocacy groups looking to influence internal governance setups
- Technical teams who want a voice in risk assessment decisions, not just implementation
📄 Source Details
- Title: AI Act Governance: Best Practices for Implementing the EU AI Act
- Authors: Matthias Spielkamp (AlgorithmWatch) & Mark Coeckelbergh (University of Vienna)
- Published: 2024
- Commissioned by: Bertelsmann Stiftung
- Length: 19 pages
- License: CC BY 4.0
- Affiliations: AlgorithmWatch is a key civil society voice in European AI policy
- Download: algorithmwatch.org
📝 Thanks to Spielkamp, Coeckelbergh, and the team at AlgorithmWatch for putting governance—not just law—at the center of AI Act implementation.