
Curated Library of AI Governance Resources

AI Literacy - Questions & Answers
A practical and timely Q&A document from the European Commission offering detailed guidance on Article 4 of the EU AI Act, which introduces a legal obligation for AI literacy.
A comprehensive audit framework developed by The IIA to help internal auditors assess and assure AI governance, risk, and control environments. Updated in 2024 to reflect recent advances, including the NIST AI RMF and the use of large language models.
A comprehensive, practitioner-friendly checklist covering 100+ AI audit questions across governance, ethics, security, bias, explainability, and compliance. It aligns with ISO 42001, the GDPR, and the EU AI Act, making it a strong operational guide for risk assessments.
Anthropic’s guide for enterprises adopting generative AI blends real-world deployment insights with a clear, staged strategy. From AI governance to technical scaling, it outlines how organizations can responsibly and effectively operationalize AI models such as Claude.
This updated 2025 guide offers a complete breakdown of the EU AI Act’s conformity assessment (CA) process for high-risk AI systems, aligned with the final adopted text. It clarifies when CAs are required, who performs them, and what legal, procedural, and technical requirements must be met.
This practical, executive-level guide helps business leaders and technologists reframe how they approach generative AI—shifting from mere users to true value creators.
This report takes the pulse of global boardrooms and reveals growing awareness of AI governance, but slow action. With only 5% of firms fully integrating AI into strategy, Deloitte urges boards to accelerate education, oversight, and alignment with strategic goals.
This expansive report draws on a new survey of 840 enterprises in G7 countries and 167 in Brazil to reveal how firms are using AI, where they struggle, and what kind of public support they value. It’s an evidence-rich resource for shaping practical AI policy.
A compact, practical guide to understanding the most influential cognitive biases in everyday thinking, decision-making, and AI design—plus a bonus chapter on algorithmic bias. A must-read for anyone working at the intersection of technology, governance, and ethics.
Simon Mylius’s Scalable AI Incident Classification showcases a proof-of-concept system using large language models (LLMs) to summarize, classify, and rate AI incidents at scale.
Amlan Mohanty’s report, Making AI Self-Regulation Work, offers a comprehensive framework for deploying self-regulation as a foundational piece of India’s AI governance.
Open source AI is no longer fringe—it’s becoming essential. Based on a global survey of over 700 tech leaders, this report shows how open source AI is reshaping tech stacks, boosting developer satisfaction, and challenging proprietary dominance.
A highly structured guide showing how to apply OWASP’s Agentic AI Threat Taxonomy to real-world multi-agent systems (MAS), introducing the MAESTRO framework to surface layered vulnerabilities and new attack paths unique to agent-to-agent coordination environments.
This issue is about the foundations no one celebrates—but everyone depends on. Solid spreadsheets, clean templates, tools that won’t break under pressure. Plus: models planning backwards, AI apps built in a weekend, and how a book cover can teach you everything about framing AI risk.
The AI Risk Assessment Template provides a structured, highly practical checklist for evaluating AI system risks across development, deployment, and operation phases. It aligns with NIST AI RMF and EU AI Act requirements, aiming to boost trustworthy AI practices.
This week we look at how visual design can make or break AI governance, review the first real playbook for agent oversight, and highlight why Tom Scott’s YouTube channel might teach you more about systems failure than most compliance workshops ever could.
This guide explores how to govern autonomous AI agents—systems capable of planning and acting with minimal instruction. It presents a structured approach to agent risks and interventions, pushing the conversation beyond foundation models toward emergent systems.
This workbook is a facilitator’s guide to delivering AI ethics training across public institutions. It covers AI fundamentals, public sector use cases, and governance models—paired with activities grounded in UK government experience and policy frameworks.
This guidance from New South Wales outlines role-specific responsibilities for implementing responsible AI. It supports public agencies in assigning accountability using ISO-aligned frameworks and practical RACI structures. A useful anchor for everyday governance.
Governance needs clarity—not clickbait. This issue breaks down what makes a good AI governance resource, why Bird & Bird’s AI Act guide sets the bar, and how fake AI cults, Roman GIS maps, and server-blessing priests say more about tech than most frameworks do.
This UK government strategy outlines a six-point digital reform plan focused on service redesign, AI integration, shared infrastructure, and leadership reform. It introduces a new digital centre of government and pushes for transparency, efficiency, and public trust.
The Bird & Bird guide to the EU AI Act offers a deep dive into the Act’s legal obligations, scope, governance model, and technical standards. It walks readers through implementation timelines, roles across the AI value chain, and penalties for non-compliance.
This report, produced under the EDPB’s Support Pool of Experts (SPE) programme, offers structured guidance on managing privacy risks in LLM systems. It lays out risk identification, evaluation, and control strategies tailored to GDPR and AI Act obligations, supporting both developers and deployers.
This chart-based explainer by the IAPP (April 2025) breaks down incident notification and information-sharing obligations under 11 key EU digital laws, helping organisations navigate complex breach scenarios across privacy, cybersecurity, AI, and operational resilience domains.