📘 What’s Covered
🔹 Foundation of the EU AI Act
The guide introduces the EU AI Act as a risk-based, product-safety-style regulation. It defines the conformity assessment (CA) as the process to ensure high-risk AI systems meet mandatory requirements before market access, and positions CAs as tools for legal accountability.
🔹 Key Dates
- Aug 1, 2024 – Entry into force
- Feb 2, 2025 – Prohibited AI use bans + AI literacy obligations begin
- Aug 2, 2025 – Notified bodies, GPAI rules, and governance provisions kick in
- Aug 2, 2026 – Full application, excluding some high-risk classifications
- Aug 2, 2027 – Final staggered obligations apply (e.g. Art. 6(1))
🔹 Step-by-Step Breakdown of the CA Process
The guide divides the conformity assessment process into four main steps:
✅ Step 1: Do You Need a Conformity Assessment?
This depends on whether:
- The system is an AI system under AIA Art. 3(1)
- The system is high-risk (Annex I or III, or a GPAI model with systemic risk)
- You are the provider, or a party treated as one under Art. 25 (e.g., because you rebrand or substantially modify the system)
🧭 A visual flowchart on page 8 helps identify CA obligations.
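The Step 1 test is conjunctive: all three conditions must hold before a CA obligation arises. As an illustrative sketch only (the parameter names are assumptions for illustration, not the legal test itself), the logic can be expressed as:

```python
def needs_conformity_assessment(
    is_ai_system: bool,   # meets the AIA Art. 3(1) definition of an AI system
    is_high_risk: bool,   # Annex I or III, or GPAI with systemic risk
    is_provider: bool,    # provider, or treated as one under Art. 25
) -> bool:
    """Return True if a conformity assessment obligation likely applies.

    Simplified: all three conditions must hold simultaneously.
    """
    return is_ai_system and is_high_risk and is_provider


# Example: rebranding an Annex III system makes you the provider (Art. 25)
assert needs_conformity_assessment(True, True, True) is True
# Example: a non-high-risk system triggers no CA obligation
assert needs_conformity_assessment(True, False, True) is False
```

This mirrors the flowchart's structure: a "no" at any branch ends the inquiry.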
✅ Step 2: When Must the CA Be Done?
Before the system is:
- Placed on the market or put into service
- Substantially modified after being placed on the market, in a way that changes its intended purpose or affects its compliance
📌 Page 16 includes a decision diagram that distinguishes between internal CA vs. third-party CA based on system category.
✅ Step 3: Who Conducts the CA?
Two CA types exist:
- Internal CA – For most Annex III systems (provider performs self-assessment based on Annex VI)
- Third-party CA – Mandatory for:
  - Systems in Annex I, or biometric systems in Annex III (if no harmonized standards are used)
  - Systems used in law enforcement, immigration, or asylum
📌 A table on pages 21–22 outlines which systems require which CA type.
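The route selection described above can be sketched as a small decision function. This is an illustrative simplification of the guide's summary, not the legal mapping (which lives in the table on pages 21–22 and in Annexes VI–VII); the parameter names are assumptions:

```python
def ca_route(annex: str,
             biometric: bool = False,
             harmonised_standards: bool = True,
             sensitive_use: bool = False) -> str:
    """Return 'third-party' or 'internal' for a high-risk AI system.

    sensitive_use covers law enforcement, immigration, or asylum contexts.
    Simplified encoding of the Step 3 summary above.
    """
    if annex == "I":
        return "third-party"
    if annex == "III":
        # Biometric systems need a notified body unless harmonised
        # standards are applied; sensitive uses always need one.
        if (biometric and not harmonised_standards) or sensitive_use:
            return "third-party"
        return "internal"  # provider self-assessment per Annex VI
    raise ValueError("high-risk systems fall under Annex I or III")


assert ca_route("III") == "internal"
assert ca_route("III", biometric=True, harmonised_standards=False) == "third-party"
```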
✅ Step 4: How to Comply with High-Risk Requirements
You must document and validate the following:
- Risk Management System (Art. 9) – Ongoing, documented, includes post-market updates
- Data Governance (Art. 10) – Ensure datasets are high-quality, statistically relevant, and bias-mitigated
- Technical Documentation (Art. 11) – Comprehensive records (structure, lifecycle changes, standards used)
- Record-Keeping (Art. 12) – Mandatory logging, especially for biometric systems
- Transparency (Art. 13) – Clear, user-friendly documentation, explainability, and user instructions
- Human Oversight (Art. 14) – Operational controls + trained supervisors
- Accuracy, Robustness, Cybersecurity (Art. 15) – Design for resilience, feedback loops, threat mitigation
Each requirement is cross-referenced with GDPR or the Cyber Resilience Act where applicable. For instance, providers can benefit from presumed compliance if they follow EU cybersecurity schemes under the Cybersecurity Act (p. 37–38).
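Since the guide itself ships no gap-analysis template (see "What's Missing" below), the Art. 9–15 requirements can be kept as a minimal checklist. A hedged sketch, with field names that are assumptions rather than an official template:

```python
# The seven high-risk requirements summarized in Step 4 (AIA Arts. 9-15).
HIGH_RISK_REQUIREMENTS = {
    "Art. 9":  "Risk management system (ongoing, incl. post-market updates)",
    "Art. 10": "Data governance (quality, relevance, bias mitigation)",
    "Art. 11": "Technical documentation",
    "Art. 12": "Record-keeping / logging",
    "Art. 13": "Transparency and instructions for use",
    "Art. 14": "Human oversight",
    "Art. 15": "Accuracy, robustness, cybersecurity",
}


def gap_analysis(evidence: dict[str, bool]) -> list[str]:
    """Return the articles for which no compliance evidence is recorded."""
    return [art for art in HIGH_RISK_REQUIREMENTS if not evidence.get(art)]


# Example: documentation exists for everything except logging
status = {art: True for art in HIGH_RISK_REQUIREMENTS}
status["Art. 12"] = False
assert gap_analysis(status) == ["Art. 12"]
```

Keeping the checklist keyed by article number makes it easy to cross-reference each gap against GDPR or Cyber Resilience Act evidence already on file.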
💡 Why It Matters
This guide translates the legal text of the AI Act into actionable compliance steps for providers and downstream actors. It reflects the final AIA version and helps organizations prepare for enforcement. Particularly valuable is its emphasis on continuous compliance across the system lifecycle, not just one-off certification. With the AI Act setting global precedents, understanding conformity assessments is essential for anyone building or deploying high-risk systems in the EU.
🔍 What’s Missing
- No sector-specific implementation examples – For example, healthcare or fintech scenarios aren’t explored.
- Limited procedural detail for sandbox participation – While sandboxes are listed as an alternate compliance route, the setup and approval process aren’t covered in depth.
- No checklist or tools for gap analysis – The guide could be more practical with annexed templates, forms, or audit-ready examples.
- No crosswalk between AIA and other regimes – It briefly mentions GDPR and the Cybersecurity Act but doesn’t deeply map overlaps (e.g., ISO 42001, NIS2, DSA).
👥 Best For
- AI compliance leads or product managers preparing high-risk AI systems for EU markets
- Legal counsel assessing when a CA is triggered or who must perform it
- Notified bodies and auditors preparing for designation
- Companies using GPAI models in high-risk contexts (e.g., employment, biometric verification)
🔗 Source Details
Title: Conformity Assessments under the EU AI Act: A Step-by-Step Guide
Authors: Andreea Serban, Vasileios Rovilos, Katerina Demetzou (FPF alumni)
Published: April 2025
Organizations: Future of Privacy Forum, OneTrust
Pages: 41
URL: https://fpf.org