The NSW AI Assessment Framework
The AI Assessment Framework is a practical tool for assessing the ethical and risk considerations of AI systems used by the NSW Government.
Customers acknowledge the need to secure AI systems but simply do not know how.
Organizations that measure the value of AI ethics could be a step ahead. Our holistic AI ethics framework considers three types of ROI.
Quickly gauge your organization’s current maturity across AI discovery, risk management, and compliance.
Without a shared understanding of how bias enters and operates in AI systems, law enforcement agencies risk embedding discrimination into everyday operations.
Frontline users will need a high degree of discretion over how they use AI assistants. But this must be matched with rigorous oversight and clear internal boundaries.
AIME distils key principles from existing AI regulations, standards and frameworks to provide an accessible starting point for organisations to assess and improve their AI management systems.
The Agentic Oversight Framework ensures agents are contained and embedded into a secure environment that meets institutional requirements for data handling, oversight, and auditability.
Governance is not compliance. It is about enabling organizational culture, practice, and accountability to ensure that the values and rules enshrined in the AI Act are meaningfully realized.
Scaling governance means more than more rules—it means better tools. This issue explores AI-assisted GRC, ethics-as-ROI, and governance for agents. Five standout reviews show where oversight is finally catching up with the systems we build.
Risk tiers clarify the harms AI might present and identify the measures being taken to prevent them.
AI Governance by Design (AIGD) integrates ethical, legal, and societal considerations directly into AI system development from inception.
Ethical AI isn’t a cost—it’s a sophisticated financial risk management and revenue generation strategy with measurable, substantial economic returns.
We are releasing our Impact Assessment Template externally to share what we have learned, invite feedback from others, and contribute to the discussion about building better norms and practices around AI.
“Organizations might not even have clear answers to more fundamental questions such as: how does the output of an AI system affect things in practice? Or even—is AI used at all here?”
This checklist maps the NIST AI RMF 1.0 to 58 detailed compliance controls, offering a step-by-step implementation guide for GRC professionals. It includes metrics, control actors, and evaluation techniques—intended as a living document to streamline risk governance.
“If [AI agents] live up to their promise, they may soon become highly capable digital coworkers or personal assistants… moving AI ‘through the chat window and into the real world.’” — p. 7
“AI that is not explainable, controllable, and fair is not trustworthy.” AIGN isn’t just another framework—it’s a toolkit, a movement, and a governance operating system built for our AI-saturated present.
Public institutions must not merely invite public comment—they must share power. Communities must be resourced and authorized to co-govern the design, deployment, and evaluation of AI systems.
ISO 42005 just gave AI impact assessments a formal structure—and a fighting chance at global consistency. Plus: five top-tier governance resources, a format refresh, and a reminder that security can be fun (if you’re defending your bank account from AI prompt hackers).
This practical and comprehensive implementation guide offers a structured, checklist-based framework for organizations seeking to operationalize AI data governance.
This report argues that traditional, consent-based models for governing AI data reuse are no longer adequate—especially in low- and middle-income countries (LMICs), where power asymmetries and exploitative data practices are common.
This document introduces the ISO/IEC 42005:2025 standard and offers a detailed, visually engaging explanation of how it enables standardized, interoperable AI impact assessments. Written and compiled by Georg Philip Krog.
The EU AI Act Handbook is a comprehensive practical guide that helps legal, compliance, and technical teams understand and prepare for the EU AI Act. It explains the structure of the Act and breaks down obligations by stakeholder role.
Curated Library of AI Governance Resources