
AI Management Essentials
AIME distils key principles from existing AI regulations, standards and frameworks to provide an accessible starting point for organisations to assess and improve their AI management systems.
The Agentic Oversight Framework ensures agents are contained and embedded into a secure environment that meets institutional requirements for data handling, oversight, and auditability.
Governance is not compliance. It is about enabling organizational culture, practice, and accountability to ensure that the values and rules enshrined in the AI Act are meaningfully realized.
Scaling governance means more than more rules—it means better tools. This issue explores AI-assisted GRC, ethics-as-ROI, and governance for agents. Five standout reviews show where oversight is finally catching up with the systems we build.
Risk tiers clarify the harms AI might present and identify the measures being taken to prevent them.
AI Governance by Design (AIGD) integrates ethical, legal, and societal considerations directly into AI system development from inception.
Ethical AI isn’t a cost—it’s a sophisticated financial risk management and revenue generation strategy with measurable, substantial economic returns.
We are releasing our Impact Assessment Template externally to share what we have learned, invite feedback from others, and contribute to the discussion about building better norms and practices around AI.
“Organizations might not even have clear answers to more fundamental questions such as: how does the output of an AI system affect things in practice? Or even—is AI used at all here?”
This checklist maps the NIST AI RMF 1.0 to 58 detailed compliance controls, offering a step-by-step implementation guide for GRC professionals. It includes metrics, control actors, and evaluation techniques—intended as a living document to streamline risk governance.
“If [AI agents] live up to their promise, they may soon become highly capable digital coworkers or personal assistants… moving AI ‘through the chat window and into the real world.’” — p. 7
“AI that is not explainable, controllable, and fair is not trustworthy.” AIGN isn’t just another framework—it’s a toolkit, a movement, and a governance operating system built for our AI-saturated present.
Public institutions must not merely invite public comment—they must share power. Communities must be resourced and authorized to co-govern the design, deployment, and evaluation of AI systems.
ISO 42005 just gave AI impact assessments a formal structure—and a fighting chance at global consistency. Plus: five top-tier governance resources, a format refresh, and a reminder that security can be fun (if you’re defending your bank account from AI prompt hackers).
This practical and comprehensive implementation guide offers a structured, checklist-based framework for organizations seeking to operationalize AI data governance.
This report argues that traditional, consent-based models for governing AI data reuse are no longer adequate—especially in low- and middle-income countries (LMICs), where power asymmetries and exploitative data practices are common.
This document introduces the ISO/IEC 42005:2025 standard and offers a detailed, visually engaging explanation of how it enables standardized, interoperable AI impact assessments. Written and compiled by Georg Philip Krog.
The EU AI Act Handbook is a comprehensive practical guide that helps legal, compliance, and technical teams understand and prepare for the EU AI Act. It explains the structure of the Act and breaks down obligations by stakeholder role.
This report dissects the generative AI ecosystem through the lens of human rights. Instead of focusing only on the models or outputs, it zooms out to examine the entire value chain—covering everything from raw compute and data to the deployment of tools in the market.
The Agentic AI Red Teaming Guide, developed by CSA and OWASP with input from over 50 contributors, is the most comprehensive hands-on security testing manual yet for red teaming autonomous AI systems.
This resource, developed by Singapore’s Infocomm Media Development Authority (IMDA) in collaboration with the AI Verify Foundation, presents one of the most comprehensive, actionable testing frameworks for AI governance in practice today.
This guided template helps public and private sector leaders build structured, role-based AI literacy programs aligned with legal obligations (like Article 4 of the EU AI Act) and governance best practices.
This policy paper zeroes in on the governance challenges posed by increasingly autonomous AI agents—systems capable of initiating and managing multi-step tasks with minimal human intervention.
The Cyber Governance Code of Practice is a government-endorsed framework aimed at medium and large organisations, designed to help boards take ownership of cybersecurity risks. It focuses on aligning cybersecurity with strategic, operational, and cultural goals of the business.
Curated Library of AI Governance Resources