📘 What’s Covered
The two-part workbook series, AI Sustainability in Practice, is part of the Alan Turing Institute’s broader “AI Ethics and Governance in Practice” programme. It combines theoretical grounding with hands-on tools to support sustainability across the AI lifecycle.
Part One: Foundations for Sustainable AI Projects
- Introduces the SUM Values (Respect, Connect, Care, and Protect) as guiding ethical pillars drawn from bioethics and human rights.
- Explains how these values map onto common AI risks like bias, autonomy erosion, or algorithmic harm.
- Provides a practical method called the Stakeholder Engagement Process (SEP) for scoping risks and engaging communities.
- Includes tools like a Project Summary Report template and step-by-step activities for stakeholder analysis, positionality reflection, and engagement planning (see the sketch after this list).
- A fictional urban planning case study serves as a unifying thread throughout the exercises.
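For teams that want to keep Part One's outputs in a structured, versionable form, a minimal sketch is shown below. The workbook itself provides document templates rather than code, so every class and field name here (StakeholderRecord, ProjectSummaryReport, salience, positionality_reflection) is an illustrative assumption, not the workbook's own schema.

```python
from dataclasses import dataclass, field


@dataclass
class StakeholderRecord:
    """One row of a stakeholder analysis; field names are illustrative only."""
    group: str                      # e.g. "local residents", "case workers"
    interests: list[str] = field(default_factory=list)
    anticipated_impacts: list[str] = field(default_factory=list)
    salience: str = "unknown"       # team's judgement of how strongly affected


@dataclass
class ProjectSummaryReport:
    """Loose analogue of the workbook's Project Summary Report template."""
    project_name: str
    problem_statement: str
    # Notes keyed by the four SUM Values: Respect, Connect, Care, Protect
    sum_values_notes: dict[str, str] = field(default_factory=dict)
    stakeholders: list[StakeholderRecord] = field(default_factory=list)
    positionality_reflection: str = ""  # who the team is, what it may overlook


# Example drawing on the workbook's fictional urban planning scenario
report = ProjectSummaryReport(
    project_name="Urban planning pilot",
    problem_statement="Prioritise neighbourhoods for green-space investment.",
    sum_values_notes={"Care": "Track wellbeing impacts on displaced residents."},
    stakeholders=[
        StakeholderRecord(
            group="local residents",
            interests=["access to green space"],
            anticipated_impacts=["changed land use"],
            salience="high",
        )
    ],
    positionality_reflection="Team is based outside the affected boroughs.",
)
```

Keeping the records this plain makes them easy to serialise into whatever project-tracking or documentation system a team already uses.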
Part Two: Sustainability Throughout the AI Workflow
- Focuses on operationalising the SUM Values across project phases—from data sourcing and model testing to deployment and maintenance.
- Introduces the concept of a Stakeholder Impact Assessment (SIA), enabling iterative review of ethical risks and benefits (sketched as a simple record after this list).
- Discusses sustainability in procurement and contracting, proposing that public bodies make SUM Values explicit in RFPs.
- Offers engagement techniques tailored to different levels of stakeholder agency: inform, consult, partner, and empower.
- Uses visual diagrams and checklists to support facilitators running ethics workshops in the public sector.
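As a rough illustration of how Part Two's ideas could be tracked alongside a project, the sketch below models a single SIA review iteration and the four engagement levels as plain Python records. The structure and field names (SIAEntry, EngagementLevel, next_review, and so on) are assumptions made for illustration; the workbook describes the SIA as a facilitated, documented process, not a data format.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class EngagementLevel(Enum):
    """The levels of stakeholder agency described in Part Two."""
    INFORM = "inform"
    CONSULT = "consult"
    PARTNER = "partner"
    EMPOWER = "empower"


@dataclass
class SIAEntry:
    """One iteration of a Stakeholder Impact Assessment review.

    Field names are illustrative, not taken from the workbook.
    """
    review_date: date
    project_phase: str                    # e.g. "data sourcing", "deployment"
    stakeholder_group: str
    engagement_level: EngagementLevel
    anticipated_risks: list[str] = field(default_factory=list)
    anticipated_benefits: list[str] = field(default_factory=list)
    mitigations: list[str] = field(default_factory=list)
    next_review: date | None = None       # SIAs are revisited iteratively


# Example: logging a review at the model-testing phase
entry = SIAEntry(
    review_date=date(2024, 3, 1),
    project_phase="model testing",
    stakeholder_group="residents of the pilot neighbourhood",
    engagement_level=EngagementLevel.CONSULT,
    anticipated_risks=["biased service allocation"],
    anticipated_benefits=["faster case triage"],
    mitigations=["disaggregated error analysis shared with the community panel"],
    next_review=date(2024, 6, 1),
)
print(entry.engagement_level.value)  # -> "consult"
```

Recording a `next_review` date for each entry mirrors the workbook's emphasis on revisiting the SIA as the project moves through phases rather than treating it as a one-off sign-off.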
The material is deeply grounded in UK public sector contexts but generalisable to other organisations focused on participatory AI design.
💡 Why It Matters
This is one of the most comprehensive and actionable guides available for operationalising sustainability in AI. Unlike abstract ethics codes, it integrates social theory, risk analysis, and engagement strategy into hands-on tools. The focus on stakeholder inclusion, positionality, and real-world harms gives it substance. It doesn’t just ask whether an AI system is fair or safe—it teaches teams how to ask that question with the communities affected.
⚠️ What’s Missing?
While the methodology is strong, there’s limited guidance on how to align this process with technical workflows in agile or DevOps settings. For teams outside government or academia, some of the references to UK laws or public procurement norms may require adaptation. There’s also little discussion on how to reconcile trade-offs between competing values when they arise in practice—especially under time or political pressure.
🎯 Best For
Ideal for public sector teams, ethics officers, procurement leads, and facilitators designing participatory workshops. Also valuable for NGOs or consultancies advising governments on responsible AI. Less directly applicable to fast-moving commercial product teams.
📚 Source Details
- Authors: David Leslie et al., Ethics Team at The Alan Turing Institute
- Publication: 2023, UKRI-funded
- Citations: Draws on UK public policy, the Equality Act 2010, the European Convention on Human Rights (ECHR), bioethics, and real-world AI failures
- Access: turing.ac.uk/ai-ethics-governance