🧭 What’s Covered
The report synthesises consultation responses and highlights growing tensions over how the AI Act should be interpreted and enforced. Key areas explored include:
1. Defining “AI System” under Article 3
Stakeholders voiced concern that the current wording could unintentionally capture non-AI software, such as rule-based tools and simple statistical models. Terms such as “inference,” “adaptiveness,” and “autonomy” were flagged as needing clearer definition. There’s strong demand for guidance that distinguishes traditional software from genuine AI systems, with multiple respondents asking for real-world examples and complexity thresholds.
2. Prohibited Practices under Article 5
Stakeholder reactions to each prohibition under Article 5(1) are covered:
- Manipulative Practices (Art. 5(1)(a)): The vagueness of “significant harm” and “subliminal techniques” drew concern. Respondents called for concrete examples and a clearer line between prohibited manipulation and common business practices such as marketing.
- Exploitation of Vulnerabilities (Art. 5(1)(b)): Respondents worry about how “vulnerability” should be defined. Age, disability, and socio-economic conditions all emerged as areas requiring clearer framing, with ethical and legal tensions highlighted across sectors such as fintech and social media.
- Social Scoring (Art. 5(1)(c)): Examples from the Netherlands, France, and the UK, such as welfare fraud detection tools, illustrate how social scoring systems can cause discrimination. The report notes consensus on the need to define “unjustified or disproportionate treatment.”
- Crime Risk Prediction (Art. 5(1)(d)): Tools like COMPAS and PredPol were cited as problematic. Stakeholders urged caution with systems that rely on demographic or behavioural profiling without human oversight.
- Facial Image Scraping (Art. 5(1)(e)): Clearview AI and PimEyes were raised repeatedly. There’s widespread concern about the lack of consent and the creation of mass surveillance tools through untargeted scraping.
- Emotion Recognition (Art. 5(1)(f)): Use in workplaces and schools is seen as ethically questionable, raising major questions about scientific validity, discrimination, and privacy. The report covers both prohibited use cases and debated exceptions (e.g. medical or safety contexts).
- Biometric Categorisation (Art. 5(1)(g)): The report flags systems that infer race, religion, or political views from biometric data, often without consent. These practices are seen as particularly harmful and poorly defined in the legislation.
- Real-Time Remote Biometric Identification (Art. 5(1)(h)): Surveillance applications at borders, public events, and city centres prompted calls for better definitions of “real-time,” “publicly accessible space,” and “law enforcement purpose.”
3. Cross-Regulatory Clarity
Many stakeholders called for clarity on how the AI Act aligns with the GDPR, the DSA, and consumer protection law. SMEs in particular flagged compliance burdens and requested clearer implementation guidance.
💡 Why It Matters
This report provides rare insight into how the EU’s AI legislation is landing with those who will need to comply with it. The responses are practical, grounded in operational concerns, and clearly show where stakeholders feel the Act is either too vague or too rigid. It reveals fault lines between innovation and regulation, especially in high-stakes sectors like health, policing, and finance.
Understanding this feedback is critical if the AI Office wants the Act to gain legitimacy and lead to meaningful safeguards without stalling the sector. The themes around surveillance, discrimination, and misuse reflect real anxieties that lawmakers can’t ignore.
🕳️ What’s Missing
- End-user voices are underrepresented. The consultation leaned heavily on input from industry players and developers. Only 5.7% of respondents were individual citizens, which may skew the insights away from those most affected by AI.
- No policy conclusions yet. While the report aggregates concerns and suggestions, it doesn’t spell out how the AI Office intends to respond. There’s a need for follow-up documentation that translates these insights into actionable policy or guidance.
- Lack of legal mapping. The document stops short of detailing how other EU frameworks (GDPR, DSA) will interact operationally with the AI Act, even though respondents repeatedly requested this.
✅ Best For
- AI policy professionals looking to understand where legal uncertainty exists in the AI Act.
- Regulators seeking public-facing examples of prohibited AI use cases.
- Industry compliance leads who want to benchmark their concerns against those of peers.
- Civil society groups preparing to argue for more robust protections under Article 5.
🗂 Source Details
Title: Analysis of EU AI Office stakeholder consultations: defining AI systems and prohibited applications
Author: Centre for European Policy Studies (CEPS)
Commissioned by: European Commission, DG CONNECT (AI Office)
Date: 2025
Length: 58 pages