🧩 What’s Covered
The report breaks the generative AI value chain into five segments:
- Raw Materials (Compute Infrastructure) – Human rights risks tied to energy consumption, labor conditions in semiconductor manufacturing, and geopolitical risks in chip supply chains.
- Model Development – Training and fine-tuning practices, labor conditions in data labeling, and transparency challenges.
- Model Deployment – API access, misuse risks, and weak accountability when general-purpose models are embedded into third-party apps.
- Product Integration – Human rights risks from downstream tools built on models (e.g. AI writing assistants, image generators).
- End Use – How users interact with generative tools, and how that interaction affects labor rights, misinformation, bias, and safety.
Each section provides examples of harms, such as:
- Kenyan data workers paid ~$2/hour for content moderation (p. 12)
- Indigenous artists’ work being scraped and imitated by generative models (p. 10)
- Energy-intensive model training contributing to environmental degradation (p. 9)
The report also introduces a “value chain responsibility model”, mapping key actors (like chipmakers, model developers, app builders) to corresponding human rights risks and recommended mitigation levers.
💡 Why it matters
This report shifts the human rights conversation away from models and towards ecosystems. By doing so, it makes three things clear:
- Accountability doesn’t end with the model developer.
- Rights risks start well before model training (e.g., chip labor) and continue long after release (e.g., content misuse).
- A siloed approach (auditing only outputs or only models) is insufficient.
For those shaping AI governance frameworks, this is an important nudge toward supply chain thinking and shared responsibility—mirroring debates in fashion, agriculture, and mining.
❓ What’s missing?
The report doesn’t go deep into enforcement mechanisms—how to hold actors accountable across such a fragmented chain. There's limited treatment of:
- The role of governments and trade regimes in shaping the chip and compute market.
- Benchmarking examples of how rights audits could be operationalized in generative AI firms.
- Emerging technical proposals (e.g., data provenance tooling) that could support some of the report’s recommendations.
Also, while the report names OpenAI, Meta, and Stability AI, it could do more to call out specific governance gaps or progress by these actors.
👤 Best For
This one’s especially useful if you:
- Work in policy, advocacy, or oversight and want to broaden the scope of AI impact assessments.
- Are building rights-aligned due diligence processes for AI companies, especially those offering midstream or downstream tools.
- Want a clear map of who’s responsible for what in the genAI ecosystem—with human rights implications at each stage.
📚 Source Details
Title: A Human Rights Assessment of the Generative AI Value Chain
Authors: Article One and NYU Stern Center for Business and Human Rights
Year: 2024
Supported by: Open Society Foundations
Length: 38 pages