✒️ Foreword
AI governance is having a bit of an identity crisis.
On paper, we keep treating it like a fresh program: new policies, new committees, new tools, new dashboards. In reality, most teams don’t need “more governance.” They need governance that actually fits—snapped into the management systems they already run, and tough enough to handle how harm shows up in the wild.
That’s the real shift behind this issue: moving from governance-as-a-checklist to governance-as-architecture.
Architecture starts with integration. If you already operate an ISMS, AI shouldn’t sit beside it like a weird sidecar. It should inherit the same discipline: scope, inventory, change control, third-party oversight, training, audits. But integration also forces the uncomfortable question: what’s different about AI that breaks our usual assumptions? Non-determinism. Data leakage. Shadow use. Adversarial behavior. The fact that “it worked in testing” often means nothing after deployment.
Architecture also needs feedback loops. Not just risk registers and impact assessments, but incident muscle memory: detection, triage, stabilization, documentation, learning, verification—repeated until it’s boring. Because novelty isn’t a one-time event in AI. New use cases, new failure modes, and new attack paths will keep arriving on schedule.
So the challenge isn’t picking the right framework. It’s building a portable governance spine—one that survives new models, new vendors, and new pressure from the business.
What would change tomorrow if your AI governance were designed like critical infrastructure, not a compliance project?
— Kuba
Curator, AIGL 📚
☀️ Spotlight Resources

Integrating ISO 42001 (AIMS) into existing ISO 27001 (ISMS)
What it is: A short “easy guide” by Santosh Kamane (Rivedix Technology Solutions) on integrating an AI Management System (ISO/IEC 42001:2023 AIMS) into an existing ISO 27001 ISMS.
Why it’s worth reading: It lays out a practical, step-by-step path: start by aligning scope (identify AI systems like ML models/LLMs and how they touch ISMS assets), then extend your risk assessment to cover AI risks such as bias, adversarial inputs, shadow AI use, and IP leakage (pages 5–6). It also gets concrete on governance and operations—updating core policies (acceptable use, data classification, third-party risk, change management), defining AI-specific roles (e.g., Model Risk Manager, Ethics Reviewer), and treating models as inventory-managed assets with lineage and version history (pages 6–7).
Best for: ISMS owners, security/compliance leads, and governance teams who already run ISO 27001 and want to extend it to AI with minimal duplication.
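The guide describes treating models as inventory-managed assets with lineage and version history; here's a minimal sketch of what such a record could look like in Python. This is my own illustration, not a format the guide prescribes: the class, field names, and example values are all assumptions.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelAsset:
    """Hypothetical inventory record: a model treated like any other ISMS asset."""
    model_id: str                  # identifier in the existing asset inventory
    owner: str                     # accountable role, e.g. a Model Risk Manager
    purpose: str                   # business use case the model supports
    data_lineage: list[str] = field(default_factory=list)            # known data sources
    version_history: list[tuple[str, date]] = field(default_factory=list)
    third_party: bool = False      # route through third-party risk oversight if True

    def register_version(self, version: str, released: date) -> None:
        """Record a new version so change control has a traceable history."""
        self.version_history.append((version, released))

# Example: an externally hosted LLM registered the same way as any other asset
llm = ModelAsset(
    model_id="AI-0042",
    owner="model-risk@example.org",
    purpose="Customer support summarisation",
    data_lineage=["vendor-disclosed training corpus (details unknown)"],
    third_party=True,
)
llm.register_version("2025-10 release", date(2025, 10, 1))
print(llm.version_history)
```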

The Mechanisms of AI Harm
What it is: An October 2025 CSET issue brief by Mia Hoffmann that analyzes real-world AI incidents from the AI Incident Database to map how AI harm happens in practice.
Why it’s worth reading: Instead of treating “AI risk” as one bucket, it proposes six mechanisms—three intentional (harm by design, misuse, attacks) and three unintentional (AI failures, failures of human oversight, integration harm)—to explain the pathways from deployment to harm. The brief argues that one-size-fits-all mitigation won’t work, and that focusing on model capability as a proxy for risk misses many harms driven by single-purpose systems and deployment context. It also makes a strong case for incident tracking as a governance tool, because new use cases and attack strategies will keep creating novel harms.
Best for: Governance, risk, security, and policy teams who need a practical taxonomy to connect controls to real incident patterns—not just theoretical model risks.
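The six mechanisms are concrete enough to use directly as tags in an incident log. Below is a small sketch of that idea: the mechanism names come from the brief, while the enum, the helper function, and the example incident are my own assumptions about how a team might wire this into incident tracking.

```python
from enum import Enum

class HarmMechanism(Enum):
    # Intentional pathways (as named in the CSET brief)
    HARM_BY_DESIGN = "harm by design"
    MISUSE = "misuse"
    ATTACKS = "attacks"
    # Unintentional pathways (as named in the CSET brief)
    AI_FAILURE = "AI failures"
    OVERSIGHT_FAILURE = "failures of human oversight"
    INTEGRATION_HARM = "integration harm"

INTENTIONAL = {HarmMechanism.HARM_BY_DESIGN, HarmMechanism.MISUSE, HarmMechanism.ATTACKS}

def tag_incident(description: str, mechanisms: set[HarmMechanism]) -> dict:
    """Attach mechanism tags to an incident record so mitigations can be matched
    to the pathway rather than to a single undifferentiated 'AI risk' bucket."""
    return {
        "description": description,
        "mechanisms": sorted(m.value for m in mechanisms),
        "any_intentional": bool(mechanisms & INTENTIONAL),
    }

# Hypothetical example: one incident can involve several mechanisms at once
print(tag_incident(
    "Prompt injection pulled customer data out of a support chatbot",
    {HarmMechanism.ATTACKS, HarmMechanism.OVERSIGHT_FAILURE},
))
```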

Artificial Intelligence Impact Assessment Tool
What it is: A 12-section impact assessment tool (plus guidance) from the Digital Transformation Agency (DTA) for Australian Government teams working on an AI use case, aligned to the AI Ethics Principles and designed to complement existing risk/legislative frameworks.
Why it’s worth reading: It gives a clear workflow: do a threshold assessment first (sections 1–4), including an inherent risk rating; if everything is low, you can conclude early with monitoring plans, but any medium/high risks push you into the full assessment (sections 5–12). It also bakes in practical governance mechanics like role assignment, stakeholder mapping, and formal re-validation when the use case materially changes.
Best for: Public-sector AI owners, risk/governance teams, and delivery leads who need a repeatable, auditable way to decide whether (and how) an AI use case should proceed under the updated AI policy timeline.
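The threshold step is essentially a triage rule: sections 1–4 produce an inherent risk rating, an all-low result lets you conclude early with a monitoring plan, and anything medium or high routes you into sections 5–12. Here's a tiny sketch of that logic; the tool itself is a structured document, not code, so the function, rating labels, and section names are made up for illustration.

```python
RATINGS = {"low": 0, "medium": 1, "high": 2}

def threshold_outcome(section_ratings: dict[str, str]) -> str:
    """Triage an AI use case from the inherent risk ratings of the threshold
    assessment (sections 1-4). Labels and section names are illustrative."""
    worst = max(section_ratings.values(), key=RATINGS.__getitem__)
    if worst == "low":
        return "Conclude early: document a monitoring plan and re-validate on material change."
    return "Proceed to the full assessment (sections 5-12) before the use case continues."

# Example: a single medium rating is enough to trigger the full assessment
print(threshold_outcome({
    "purpose_and_benefits": "low",
    "data_and_privacy": "medium",
    "human_oversight": "low",
    "inherent_risk": "low",
}))
```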