Lean meets Data & Generative AI
This resource takes a close look at one of the most cited — and least consistently defined — goals in responsible AI: explainability.
This resource breaks new ground by tackling a blind spot in model lifecycle management: the phenomenon of “AI aging.” The authors propose that temporal degradation is distinct from known issues like concept drift.
This is the book you’d hand to someone serious about understanding AI risk but unsure where to start. With clarity and precision, it lays out how AI could cause major harm—through misalignment, misuse, or sheer scale—and what we can do about it.
Aimed at helping technical and policy audiences evaluate privacy guarantees in practice, NIST SP 800-226 offers tools to reason about parameters, algorithms, and trust assumptions in differentially private systems.
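As a toy illustration of the kind of parameter reasoning the publication covers, here is a minimal sketch of the Laplace mechanism for a counting query, where the privacy parameter epsilon controls the noise scale. This example is not drawn from NIST SP 800-226 itself; function names are illustrative.

```python
import math
import random

def laplace_noise(scale):
    """Draw a sample from Laplace(0, scale) via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(true_count, epsilon):
    """Release a count with pure epsilon-DP; counting queries have sensitivity 1."""
    return true_count + laplace_noise(1.0 / epsilon)
```

The trade-off the guide asks readers to reason about is visible directly in the code: a smaller epsilon (stronger privacy) inflates the noise scale 1/epsilon, degrading the accuracy of the released count.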
This report lays out a practical framework for evaluating US open-source AI policy through both ideological and geopolitical lenses. It avoids hype and polarization.
This research offers a crisp, nuanced breakdown of what Article 14 of the EU AI Act really demands from human oversight—moving beyond vague calls for “humans in the loop.” It highlights the challenges of making oversight effective, the shared roles of providers and deployers, and why human oversight is no silver bullet.
These model clauses aim to operationalize the EU AI Act for public sector AI procurement. They provide contracting authorities with a pre-structured set of legal and technical expectations covering the lifecycle of high-risk AI systems.
A practical framework by Credo AI that helps enterprises filter and evaluate foundation models using context-specific trust scores. It introduces “Model Trust Scores” to guide business-informed decisions about AI adoption across capabilities, safety, cost, and latency.
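To make the idea of a context-specific score concrete, a trust score can be sketched as a weighted aggregate over normalized dimension scores, with weights reflecting the deployment context. This is a hypothetical illustration of the general pattern, not Credo AI's actual scoring method.

```python
def trust_score(metrics, weights):
    """Weighted aggregate of per-dimension scores in [0, 1].

    Hypothetical sketch only; Credo AI's real Model Trust Scores
    are computed differently.
    """
    assert set(metrics) == set(weights), "each dimension needs a weight"
    total = sum(weights.values())
    return sum(metrics[k] * weights[k] for k in metrics) / total

# A context that prioritizes safety over latency:
score = trust_score(
    {"capability": 0.8, "safety": 0.9, "cost": 0.6, "latency": 0.4},
    {"capability": 1.0, "safety": 2.0, "cost": 1.0, "latency": 0.5},
)
```

Changing the weights re-ranks the same candidate models for a different use case, which is the "business-informed" aspect the framework emphasizes.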
This policy brief outlines a structured approach for collaboration between the EU AI Office and the UK AI Safety Institute (AISI), proposing a practical framework based on four types of institutional engagement: collaboration, coordination, communication, and separation.
India’s new AI Competency Framework equips public sector leaders with the behavioural, functional, and domain-specific skills to responsibly integrate AI in governance. Anchored in the IndiaAI Mission, it marks a major step toward building ethical, capable leadership for AI-driven transformation.
This 2025 report from the European Commission’s Joint Research Centre shows that human oversight isn’t a silver bullet against discrimination in AI-aided decisions. In hiring and lending experiments, human reviewers often reinforced the systems’ biases rather than correcting them, revealing serious gaps in current oversight assumptions.
UNIDIR’s “Governance of Artificial Intelligence in the Military Domain” policy brief identifies six priority areas for responsible AI use in defence, rooted in multi-stakeholder input. It supports global cooperation efforts ahead of REAIM 2024.
“AI Safety in Practice” equips teams with the concepts, tools, and workshop activities needed to build safer AI systems. It breaks down technical safety into four practical objectives and shows how to embed them across the AI lifecycle.
“AI Explainability in Practice” by The Alan Turing Institute offers a practical, activity-based approach to embedding explainability into public sector AI projects. It’s part of a broader ethics workbook series supporting responsible, transparent, and accountable use of AI in government settings.
A practical blueprint for fairer, more balanced B2B data-sharing and cloud contracts in the EU. The Expert Group distills key concerns into actionable guidance aimed at leveling the playing field between powerful cloud providers and smaller business users.
Published by FS-ISAC in February 2024, this guide offers a customizable framework for evaluating the risks of generative AI vendors. It supports financial institutions in assessing GenAI products and services as part of broader third-party risk management programs.
Published by the Institute for Security and Technology in March 2025, this report outlines 39 practical strategies—22 technical and 17 policy-based—for AI developers and users to prevent institutional, procedural, and performance failures across the AI system lifecycle.
This checklist is a practical tool for conducting end-to-end socio-technical audits of AI systems. Commissioned by the EDPB, it provides a hands-on methodology for identifying risk, bias, and compliance gaps across the AI lifecycle—from data handling to deployment.
This report from CIPL explores how privacy-enhancing technologies (PETs) can support responsible AI development. It walks through real-world use cases, technical trade-offs, and recommendations for deployers, policymakers, and regulators aiming to balance data utility with individual privacy.
Adversarial ML is no longer just a research topic—it’s a real-world problem. This NIST report gives public and private sector orgs a shared vocabulary to talk about attacks and defenses. Perfect for aligning security, AI, and policy teams around common ground.
This research paper breaks down how European AI standardization efforts interact with the upcoming EU AI Act. It digs into the role of harmonized standards, challenges in their development, and how different stakeholders—regulators, industry, and technical bodies—can align.
AI Fairness in Practice is a workbook published by The Alan Turing Institute as part of its AI Ethics and Governance in Practice programme. It offers a practical, public sector-focused guide to identifying, mitigating, and managing bias across the AI development lifecycle.
This third draft of the General-Purpose AI Code of Practice sets out voluntary commitments to help general-purpose AI model providers—especially those facing systemic risk—meet their legal duties under the EU AI Act.
Curated Library of AI Governance Resources