⚡ Quick Summary
This updated July 2024 edition of the IAPP's Key Terms for AI Governance expands the original 2023 glossary with clearer, broader, and more nuanced definitions. It addresses the growing complexity of the AI landscape by providing a common vocabulary for legal, technical, and policy professionals. The glossary defines over 100 terms, ranging from foundational building blocks (e.g., algorithm, machine learning) to advanced governance concepts (e.g., contestability, impact assessment, AI assurance). Unlike general-purpose glossaries, this one is tailored to AI governance, emphasizing both ethical imperatives and regulatory alignment. The result is a usable, cross-functional tool for institutions navigating AI risk, compliance, and oversight.
🧩 What’s Covered
The glossary is arranged alphabetically and covers key concepts from both technical AI design and governance practice. It includes:
- Governance-centric terms like accountability, transparency, fairness, contestability, AI audit, impact assessment, and trustworthy AI, all of which are defined with attention to their legal, ethical, and operational implications.
- Technical AI concepts such as neural networks, transformer models, fine-tuning, diffusion models, reinforcement learning from human feedback (RLHF), hallucinations, and ground truth, with explanations that bridge engineering and regulatory perspectives.
- Risk and safety terminology including adversarial attacks, data poisoning, red teaming, bias, overfitting, underfitting, and robustness, enabling more precise conversations about AI reliability and security.
- Data lifecycle terms such as training data, testing data, validation data, data provenance, synthetic data, and pre- and post-processing, which are critical for understanding how datasets shape fairness and accuracy.
- Emerging areas including foundation models, generative AI, small language models, multimodal models, watermarking, and model/system cards, reflecting rapid advances in generative AI and the governance challenges that come with them.
Each entry provides a concise yet layered definition, frequently cross-referenced to related terms (e.g., linking bias to fairness and input data) to promote contextual understanding. Importantly, the glossary avoids hype and focuses on substance, noting where terms are theoretical (e.g., AGI) or controversial (e.g., hallucinations).
💡 Why It Matters
The absence of shared definitions in AI governance leads to misaligned expectations, inconsistent regulation, and ineffective compliance strategies. This glossary offers a critical tool for bridging gaps between technologists, policymakers, and legal teams. It helps clarify ambiguous or overloaded terms such as transparency, trustworthy AI, and AI governance, concepts frequently invoked in policy debates but rarely unpacked. For practitioners drafting internal policies, regulatory responses, or assurance frameworks, the glossary reduces interpretive risk and improves cross-functional alignment. As legal and standards frameworks such as the EU AI Act and the NIST AI Risk Management Framework mature, precise language becomes a prerequisite for responsible implementation, and this resource provides exactly that.
❓ What’s Missing
While the glossary succeeds in breadth and clarity, it stops short of contextual commentary. Terms like fairness and explainability are defined descriptively, without detailing trade-offs between different fairness metrics or differences in regulatory interpretation across jurisdictions. It also omits guidance on operationalizing these terms, for example how AI assurance might be conducted under ISO/IEC 42001 or how contestability could be built into user-facing interfaces. Some terms central to current AI safety discourse (e.g., alignment, capability control) are absent. Lastly, it includes no visual diagrams or use-case vignettes, which could have made the material easier to apply in practice.
👥 Best For
- AI governance professionals developing internal policies
- Legal and compliance teams preparing for regulation
- Technical leaders aligning AI development with ethics standards
- Researchers standardizing terminology across papers
- Policymakers and regulators drafting or interpreting AI legislation
- Educators in AI law, ethics, or governance courses
📄 Source Details
Title: Key Terms for AI Governance
Publisher: International Association of Privacy Professionals (IAPP)
Release Date: Updated July 2024 (original: June 2023, prior update: October 2023)
Authors: IAPP Staff with input from external experts
Length: 13 pages
Link: Find the latest version at iapp.org/resources
📝 Thanks to
The IAPP team and contributors to the AI Governance Center who continue to clarify foundational concepts in an increasingly complex landscape.