AI Governance Library

AI Act Guide – Version 1.1 (September 2025)

A practical guide for businesses and public organizations navigating the EU AI Act, focusing on risk-based obligations, roles, and regulatory timelines for AI systems, general-purpose models, and chatbots.

⚡ Quick Summary

The AI Act Guide – Version 1.1 (September 2025) offers a structured and user-friendly interpretation of the EU AI Act. Produced by the Dutch Ministry of Economic Affairs, it targets both AI providers and deployers—especially SMEs, public sector bodies, and tech firms—by explaining key legal obligations and timelines. The guide begins with a four-step assessment approach: risk classification, AI definition check, actor role identification, and corresponding obligations. Notably, it emphasizes the phased rollout of requirements, with bans on prohibited AI systems effective from February 2025, general-purpose AI model rules from August 2025, and high-risk system compliance mandatory by August 2026/2027, depending on category. It integrates references to the AI Act’s legal provisions while offering clear examples, exceptions, and practical interpretation aids.
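
To make the phased timeline concrete, here is a minimal Python sketch of the key application dates summarized above. The day-level dates are the Act's official application dates; the category labels are informal shorthand, not legal terms.

```python
from datetime import date

# Key application dates under the EU AI Act (Art. 113).
# Category labels are informal shorthand, not legal terminology.
AI_ACT_MILESTONES = {
    "prohibited_practices": date(2025, 2, 2),  # bans on prohibited AI systems
    "gpai_obligations":     date(2025, 8, 2),  # general-purpose AI model rules
    "high_risk_annex_iii":  date(2026, 8, 2),  # high-risk application areas (HR, education, ...)
    "high_risk_products":   date(2027, 8, 2),  # high-risk regulated products (medical devices, ...)
}

def obligations_in_force(today: date) -> list[str]:
    """Return the obligation categories already applicable on a given date."""
    return [name for name, start in AI_ACT_MILESTONES.items() if today >= start]

print(obligations_in_force(date(2025, 9, 1)))
# -> ['prohibited_practices', 'gpai_obligations']
```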

🧩 What’s Covered

The guide structures AI Act compliance through four steps (a short code sketch of the resulting triage flow follows the list):

  1. Risk Categorization:
    • Prohibited AI systems (e.g., emotion recognition in workplaces, social scoring, untargeted scraping of facial images for recognition databases) are banned from February 2025.
    • High-risk systems include both regulated products (e.g., medical devices) and applications (e.g., education, law enforcement, HR).
    • Exceptions exist for low-impact or auxiliary tools.
  2. AI System Definition: The AI Act defines AI broadly, covering systems that operate with some autonomy and infer outputs like predictions or decisions. Classic rule-based systems with no adaptiveness are excluded.
  3. Actor Role Identification:
    • Provider: Develops an AI system, or commissions its development, and markets it.
    • Deployer: Uses the AI system under their authority (except for purely private use).
    These roles determine regulatory responsibilities.
  4. Compliance Obligations:
    • Prohibited practices: Cannot be used or marketed.
    • High-risk systems: Must meet nine core requirements, including risk management, data governance, technical documentation, human oversight, and transparency.
    • General-purpose models (GPAI): Subject to documentation, copyright compliance, and systemic risk safeguards for models trained with more than 10²⁵ FLOPs of compute (a back-of-envelope threshold check follows this list).
    • Generative AI and chatbots: Must be transparent—users should know when content is AI-generated or manipulated.
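
As a reading aid, here is a minimal Python sketch of the triage flow implied by the four steps above. The field names and categories are our own illustrative shorthand; the Act's actual legal tests (and the provider/deployer split in step 3) require case-by-case analysis, not a boolean lookup.

```python
from dataclasses import dataclass
from enum import Enum, auto

class RiskTier(Enum):
    PROHIBITED = auto()     # banned practices (step 1)
    HIGH_RISK = auto()      # Annex-listed products or application areas
    TRANSPARENCY = auto()   # chatbot / generative AI disclosure duties
    MINIMAL = auto()        # no specific AI Act obligations

@dataclass
class SystemProfile:
    # Illustrative fields only; real assessments are more nuanced.
    is_ai_system: bool          # step 2: meets the Act's AI definition?
    prohibited_practice: bool   # step 1: e.g. social scoring
    high_risk_use: bool         # step 1: Annex-listed product or application
    user_facing_content: bool   # step 4: chatbot or generated content

def classify(p: SystemProfile) -> RiskTier:
    """Rough triage mirroring the guide's four-step approach."""
    if not p.is_ai_system:          # rule-based, non-adaptive systems fall out here
        return RiskTier.MINIMAL
    if p.prohibited_practice:
        return RiskTier.PROHIBITED  # banned since February 2025
    if p.high_risk_use:
        return RiskTier.HIGH_RISK   # core requirements from August 2026/2027
    if p.user_facing_content:
        return RiskTier.TRANSPARENCY
    return RiskTier.MINIMAL

# Example: an HR screening tool (a high-risk application area).
print(classify(SystemProfile(True, False, True, True)))  # RiskTier.HIGH_RISK
```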
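On the 10²⁵ FLOP systemic risk threshold for GPAI models: a widely used rule of thumb (not from the guide itself) estimates dense-transformer training compute as roughly 6 × parameters × training tokens, which gives a quick sanity check of which side of the line a model falls on.

```python
# Back-of-envelope check against the 10^25 FLOP threshold (Art. 51 AI Act).
# The 6 * N * D estimate is a common approximation for transformer
# training compute, not a method prescribed by the Act or the guide.
SYSTEMIC_RISK_THRESHOLD = 1e25  # FLOPs

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    return 6 * n_params * n_tokens

# Hypothetical model: 100B parameters trained on 10T tokens.
flops = estimated_training_flops(1e11, 1e13)
print(f"{flops:.1e} FLOPs -> systemic risk: {flops > SYSTEMIC_RISK_THRESHOLD}")
# 6.0e+24 FLOPs -> systemic risk: False
```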

Also included:

  • Obligations for public sector deployers (e.g., Fundamental Rights Impact Assessments).
  • Clarifications on what constitutes modifying an AI system (triggering provider-level obligations).
  • Annex references to product legislation that triggers high-risk classification.

💡 Why It Matters

This guide translates the AI Act’s complexity into actionable steps, helping organizations avoid non-compliance, penalties, or reputational risks. It’s particularly valuable during the 2025–2027 transition period, as businesses must classify their systems, determine roles, and prepare documentation. By clearly linking provisions to real-world use cases—such as education assessments, biometric profiling, and insurance pricing—the guide reinforces the AI Act’s risk-based philosophy. It also highlights the challenge of managing general-purpose and foundation models, showing how even open-source deployments may have regulatory consequences. For governments, the obligations go further, introducing fundamental rights assessments and heightened transparency.

❓ What’s Missing

  • The guide omits detailed examples of conformity assessment procedures, which are crucial for compliance clarity.
  • There’s limited guidance on interaction with GDPR, especially where AI involves personal data processing.
  • No sector-specific case studies are included—e.g., for finance, health, or education.
  • It assumes familiarity with EU legal terminology and references but lacks hyperlinks or QR codes to real-time legislative resources.
  • The section on systemic risk in GPAI models could be clearer on enforcement timelines and thresholds.

👥 Best For

  • AI system developers and startups building or adapting for the EU market
  • Legal and compliance teams in regulated sectors (health, finance, public services)
  • Public authorities deploying AI for administrative or law enforcement purposes
  • Policy and AI governance professionals working on internal assessments and audits
  • Providers of general-purpose AI models, especially those approaching systemic scale

📄 Source Details

  • Title: AI Act Guide – Version 1.1
  • Published: September 2025
  • Publisher: Ministry of Economic Affairs, Netherlands
  • Language: English
  • Link: eur-lex.europa.eu/legal-content/NL/TXT/?uri=CELEX:32024R1689
  • Structure: Four-step guidance, with extensive coverage of prohibited, high-risk, GPAI, and generative AI systems

📝 Thanks to

The Ministry of Economic Affairs of the Netherlands for producing one of the most accessible government guides to the EU AI Act to date.

About the author
Jakub Szarmach
