AI Governance Library

BIAS IN ALGORITHMS — ARTIFICIAL INTELLIGENCE AND DISCRIMINATION (FRA)

The report shows, through predictive-policing simulations and offensive-speech classifiers, how bias enters AI systems, when it can amount to discrimination, and which safeguards (testing, explainability, lawful equality data) are needed to detect and mitigate harms.    

⚡ Quick Summary

This EU Agency for Fundamental Rights (FRA) study examines how algorithmic bias arises and when it translates into unlawful discrimination under EU non-discrimination law. It couples legal analysis with two practical “use cases”: (1) a predictive-policing feedback-loop simulation, showing how skewed crime data can reinforce over-policing; and (2) offensive-speech detection models in English, German, and Italian, tested for ethnic and gender bias. The report clarifies the distinction between “bias” and “discrimination,” stresses evidentiary hurdles (e.g., proxies, black-box opacity), and maps the policy context around the AI Act—highlighting Article 10(5)’s legal basis to process sensitive data for bias monitoring. It closes with safeguards: context-appropriate fairness metrics, experimental testing, and explainability to rebut or confirm discrimination claims.        

🧩 What’s Covered

  • Why this report & evidence gap. FRA argues that bias and discrimination are widely discussed but poorly evidenced in practice. Concrete, technical assessments are needed to see where bias occurs and when it infringes rights—hence the two original “use cases.”    
  • Definitions & legal framing. The report disentangles four uses of “bias” (differential treatment, necessary ML differentiation, statistical bias, and a neural-net parameter) and anchors its analysis in EU non-discrimination law (direct, indirect, multiple/intersectional, association). The focus is when technical bias yields legally cognizable less-favorable treatment.    
  • Use case 1: Predictive policing & feedback loops. A simulation shows how directing patrols to areas flagged by historical crime data can entrench and amplify bias: more policing yields more recorded crime, which retrains the model and re-targets the same neighborhoods (see the first sketch after this list). The key governance question: what happens when faulty predictions repeatedly send police to the same areas?
  • Use case 2: Offensive-speech detection. Models trained on real datasets (plus pre-trained language models) for EN/DE/IT were probed with templated sentences (e.g., “I hate [group]”; see the second sketch after this list). The tests surfaced systematic score differences across ethnic and gender terms; the annex lists performance metrics and bias test sets.
  • From bias to discrimination: proxies & proof. Direct discrimination arises when protected traits are explicit model inputs; indirect discrimination more often stems from proxies (e.g., names standing in for ethnicity). Establishing a presumption of discrimination shifts the burden of proof to the defendant, who may rebut it via code access, context-appropriate fairness metrics (see the third sketch after this list), or post-hoc explainability showing no dependency on protected characteristics.
  • Data protection tension & equality data. Detecting discrimination may require processing sensitive data (race, religion, sexual orientation). GDPR Article 9 generally prohibits this, with limited justifications; the AI Act proposal’s Article 10(5) adds an explicit basis for bias monitoring—subject to safeguards and still under negotiation when the report was drafted.    
  • Policy context. The analysis situates bias/discrimination within a broader EU digital regulatory agenda (AIA, DSA, DMA, product safety/liability, data governance) and notes international coordination via Globalpolicy.AI.    
  • Real-world anchor. The Dutch childcare-benefits scandal illustrates the tangible harm of automated decision-making that disproportionately affected parents with an immigration background and was found discriminatory by the Dutch data protection authority.
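
A minimal sketch of the feedback-loop dynamic from use case 1, in Python. This is not FRA’s simulation; it assumes two districts with identical true crime rates, a single patrol sent each round to whichever district has more recorded crime, and recording that only happens where police are present. All names and parameters are illustrative.

```python
import random

random.seed(0)

# Toy feedback-loop sketch (not the FRA model): two districts with the same
# underlying crime rate. Each round the patrol goes to the district with
# more *recorded* crime, and only the patrolled district adds new records.
TRUE_CRIME_RATE = 0.10
POPULATION = 1_000
ROUNDS = 50

recorded = [3, 2]  # tiny initial skew, e.g. from historical reporting bias

for _ in range(ROUNDS):
    target = 0 if recorded[0] >= recorded[1] else 1  # "predicted" hotspot
    # Crime occurs in both districts, but only the patrolled one records it.
    detected = sum(random.random() < TRUE_CRIME_RATE for _ in range(POPULATION))
    recorded[target] += detected

print("Recorded crime after", ROUNDS, "rounds:", recorded)
# Roughly [5000, 2]: identical underlying crime, but the initially
# over-recorded district is re-targeted every round and the gap explodes.
```

Even this crude rule reproduces the report’s core point: the data the model learns from is itself a product of where the model sent the police.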
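
The templated-sentence tests in use case 2 can be illustrated the same way. The classifier below is a deliberately biased keyword-weight toy, not one of the report’s models; the point is the test harness: identical, non-offensive templates, varying only the group term, and comparing average scores.

```python
from statistics import mean

# Hypothetical probe in the spirit of the report's templated tests: insert
# group terms into fixed, non-offensive templates and compare scores.
TEMPLATES = [
    "I am a {group} person.",
    "My neighbour is {group}.",
    "As a {group} woman, I went to the market.",
]
GROUPS = ["German", "Italian", "Muslim", "Jewish", "Black", "white"]

def toy_classifier(text: str) -> float:
    """Stand-in for a trained model: keyword weights in [0, 1]. A model
    trained on skewed data can behave like this, attaching weight to group
    terms that co-occur with abuse in its training set."""
    weights = {"hate": 0.9, "Muslim": 0.3, "Black": 0.3}
    return min(1.0, sum(w for k, w in weights.items() if k in text))

def probe(classify):
    """Average 'offensiveness' score per group term over all templates."""
    return {g: mean(classify(t.format(group=g)) for t in TEMPLATES) for g in GROUPS}

print(probe(toy_classifier))
# Large score gaps between group terms in otherwise identical sentences are
# the kind of systematic difference the report treats as evidence of bias.
```

In the report, the same kind of harness is run against real EN/DE/IT models, with the test sets and metrics documented in the annex.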
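
Finally, the burden-shifting argument in the proxies-and-proof bullet presupposes that differential outcomes can be quantified. The sketch below shows one such check, a per-group false positive rate comparison; it assumes you hold decisions, ground-truth labels, and lawfully collected group labels (e.g. under Article 10(5)-style safeguards), and the report stresses that which metric is “suitable” depends on context.

```python
from collections import defaultdict

def false_positive_rates(decisions, labels, groups):
    """False positive rate per group: how often people with a negative true
    label (y == 0) were nonetheless flagged (d == 1)."""
    fp = defaultdict(int)   # false positives per group
    neg = defaultdict(int)  # true negatives (y == 0) per group
    for d, y, g in zip(decisions, labels, groups):
        if y == 0:
            neg[g] += 1
            fp[g] += int(d == 1)
    return {g: fp[g] / neg[g] for g in neg if neg[g] > 0}

# Made-up data: two groups with the same number of true negatives.
decisions = [1, 0, 1, 1, 0, 0, 1, 0]
labels    = [0, 0, 1, 0, 0, 1, 0, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(false_positive_rates(decisions, labels, groups))  # ≈ {'A': 0.67, 'B': 0.33}
# A large, unexplained gap can help ground a presumption of indirect
# discrimination; a negligible gap is one way a defendant might rebut it.
```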

💡 Why it matters?

The report ties abstract “AI fairness” debates to legal accountability. It demonstrates how feedback loops and proxy variables convert technical bias into discriminatory outcomes at scale, then identifies lawful routes to measure and mitigate those risks (experimental testing, explainability, equality data with safeguards). For teams preparing for EU AI Act obligations and DSA risk assessments, it shows what “evidence-based” bias evaluation looks like and how it interfaces with burden-of-proof rules in discrimination cases.      

❓ What’s Missing

  • Limited detail on operational playbooks (e.g., end-to-end audit protocols, threshold selection, and model lifecycle controls) beyond high-level fairness-metric context-dependence.  
  • Sparse quantitative reporting of bias magnitudes per language/model for the offensive-speech tests in the main text (the annexes list datasets and metrics, but the headline takeaways could be crisper).
  • Practical procurement guidance for public bodies (how to demand code access or equivalent explainability when vendors resist) is flagged as desirable but not spelled out.

👥 Best For

  • Policy teams and regulators designing AI Act conformity assessments and DSA fundamental-rights risk processes.  
  • Public-sector buyers (police, welfare, municipalities) seeking to understand feedback loops and equality-data needs.    
  • Legal, ethics, and data-science leads who must translate fairness metrics and explainability into discrimination-law evidence.  

📄 Source Details

  • Title: Bias in Algorithms — Artificial Intelligence and Discrimination (Report)
  • Publisher/Author: European Union Agency for Fundamental Rights (FRA)
  • Scope: Legal framework + two applied “use cases” (predictive policing simulation; offensive-speech detection in EN/DE/IT) with annexed datasets and metrics.    
  • Policy context: EU AI Act proposal alongside DSA/DMA and related initiatives; Article 10(5) AIA as a basis to process sensitive data for bias monitoring.   

📝 Thanks to

The review credits the FRA research team and the consortium led by Rania Wazir for the original simulations and legal-technical synthesis that inform practitioners and policymakers.  

About the author
Jakub Szarmach

