🧠 What’s Covered
Foundations of Accountability
The workbook introduces accountability as a governing principle across AI design, development, and deployment. It focuses on two components:
- Answerability: Ensuring human responsibility is clearly attached to all stages and decisions.
- Auditability: Creating records and transparency that allow internal and external oversight.
These components translate into concrete practices such as traceability, plain-language justification of decisions, and documentation.
Anticipatory vs. Remedial Accountability
The guide separates accountability into two key types:
- Anticipatory (ex-ante): Planning and governance prior to AI deployment.
- Remedial (ex-post): Addressing consequences, explanations, or harms after deployment.
This distinction is crucial for establishing proactive governance rather than relying solely on retroactive fixes.
The Process-Based Governance (PBG) Framework
One of the workbook’s central innovations is the PBG Framework and Log, a structured, tabular method to track:
- Governance actions
- Roles and responsibilities
- Timing and documentation
- Mapping to project phases (design, development, deployment)
It uses tools like the Stakeholder Impact Assessment, Data Factsheet, Explainability Assurance Management (EAM), and Bias Risk Management, promoting holistic AI governance.
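The workbook presents the PBG Log as a table, with one row per governance action. As a rough illustration of that structure, the sketch below models a log row in Python; the class, field names, and sample entries are illustrative assumptions, not the workbook's actual schema.

```python
from dataclasses import dataclass
from enum import Enum

class Phase(Enum):
    # The three project phases the workbook maps actions onto.
    DESIGN = "design"
    DEVELOPMENT = "development"
    DEPLOYMENT = "deployment"

@dataclass
class PBGLogEntry:
    """One row of a PBG-style log: a governance action tied to a
    phase, a responsible role, a timing, and a documentation record.
    Field names here are hypothetical, chosen to mirror the four
    things the workbook says the log tracks."""
    phase: Phase
    governance_action: str   # e.g. "Complete Stakeholder Impact Assessment"
    responsible_role: str    # who is answerable for this action
    timing: str              # when in the project the action occurs
    documentation: str       # where the evidence lives, for auditability

# Two example rows (invented for illustration).
log = [
    PBGLogEntry(Phase.DESIGN, "Complete Stakeholder Impact Assessment",
                "Project lead", "Before development begins", "SIA report v1"),
    PBGLogEntry(Phase.DEVELOPMENT, "Run bias risk review",
                "Data science lead", "Each model iteration", "Bias risk register"),
]

# Auditability in miniature: filter the log by phase to see
# which actions were assigned and to whom.
design_actions = [e.governance_action for e in log if e.phase is Phase.DESIGN]
print(design_actions)
```

The point of the structure is that answerability (the `responsible_role` column) and auditability (the `documentation` column) appear on every row, so no governance action exists without an owner and a record.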
Hands-On Exercises and Case Study
The workbook includes a fictional case study—AI EduTech, a school district deploying an AI-powered educational platform. Participants are guided through structured activities:
- Identifying accountability gaps
- Mapping risks and governance actions
- Assigning team roles
- Completing accountability maps
The structured workshop approach is designed for public-sector practitioners but can also support private organizations or civic tech actors.
💡 Why It Matters
Too many AI ethics frameworks stop at principles. This workbook provides something different: a scaffolded, concrete method to build accountability into day-to-day AI work. The Process-Based Governance model is particularly useful in sectors where decision-making must be transparent and traceable—such as education, health, policing, and public service delivery.
It also helps fill the “accountability gap” in complex multi-actor AI ecosystems by assigning and documenting roles at every lifecycle stage.
🔍 What’s Missing?
- Private Sector Adaptation: While adaptable, the content is clearly targeted at UK public sector teams. More guidance on translating this to SMEs or multinationals would expand its utility.
- Global Norms Integration: There’s limited explicit discussion of how the framework maps onto global standards (e.g., the OECD AI Principles, ISO/IEC 42001).
- Automation and Tooling: The workbook is manual and paper-heavy. While the digital companion is referenced, it would benefit from integrated templates or software toolkits (e.g., Notion dashboards, Git templates).
- Cross-principle integration: The separation into different workbooks (e.g., fairness, safety, explainability) means teams must synthesize materials themselves when working across overlapping domains.
👍 Best For
- Public sector AI teams implementing or procuring high-impact AI systems
- Ethics champions tasked with developing internal AI governance training
- AI project managers building documentation processes
- Policy researchers and civil society seeking model practices to recommend
📚 Source Details
Title: AI Accountability in Practice: Facilitator Workbook
Authoring Body: The Alan Turing Institute – Public Policy Programme
Version: 1.2
Year: 2024
License: CC BY-NC-SA 4.0
Link: aiethics.turing.ac.uk