✒️ Foreword
There’s no shortage of AI principles.
Fairness. Transparency. Accountability.
Most organisations can list them, many have published them, and some have even turned them into polished frameworks. On paper, the industry looks aligned.
In practice, it’s a different story.
The real challenge isn’t defining what “good AI” looks like. It’s translating that vision into something teams can actually build, test, monitor, and maintain over time. That’s where things start to break. Principles stay static. Systems don’t.
AI systems evolve. They retrain, integrate with new tools, interact with users in unpredictable ways. Risk shifts with context, scale, and use. Yet governance often remains stuck at the starting line—captured in policies, not embedded in processes.
What’s missing isn’t intent. It’s operational depth.
Turning principles into reality means asking harder questions:
Who owns accountability when a system acts autonomously?
How is risk reassessed after deployment—not just before?
What does “transparency” look like for a system that continuously changes?
These aren’t abstract concerns anymore. They’re design decisions.
The organisations that move forward won’t be the ones with the best principles. They’ll be the ones that can operationalise them—across the full lifecycle, under real-world conditions, with all the messiness that entails.
So the real question isn’t whether your organisation has AI principles.
It’s whether they still matter once the system is live.
— Kuba
Curator, AIGL 📚

☀️ Spotlight Resources

A Lifecycle-Based Playbook for Government AI
What it is: A 2026 Australian Government (Digital Transformation Agency) technical standard outlining required and recommended practices for designing, deploying, and managing AI systems across their full lifecycle.
Why it’s worth reading:
The document translates high-level AI ethics principles into concrete, operational criteria—structured as “statements” with required and recommended actions. It takes a lifecycle approach (design → data → train → deploy → monitor → decommission), ensuring governance isn’t a one-off exercise but continuous and iterative.
What stands out is how practical it gets: requirements for auditability, explainability, bias management, and even watermarking AI-generated content to signal provenance and keep users aware they are dealing with AI output. The standard also explicitly ties AI use to broader obligations like privacy, cybersecurity, and anti-discrimination law, making it a cross-functional governance tool rather than just a technical guide.
Best for: Public sector teams, AI governance leads, and anyone building operational AI frameworks aligned with real regulatory and lifecycle requirements.
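To make the statement structure concrete, here is a minimal sketch, assuming a simple Python register of lifecycle requirements. The stage names follow the lifecycle listed above; the Statement and Level types, the MON-01 item, and the helper function are hypothetical illustrations, not drawn from the standard itself.

```python
# A minimal, hypothetical sketch of how the standard's lifecycle "statements"
# might be tracked in practice. Stage names mirror the lifecycle listed above;
# the Statement/Level types, IDs, and texts are illustrative, not quoted.
from dataclasses import dataclass, field
from enum import Enum


class Stage(Enum):
    DESIGN = "design"
    DATA = "data"
    TRAIN = "train"
    DEPLOY = "deploy"
    MONITOR = "monitor"
    DECOMMISSION = "decommission"


class Level(Enum):
    REQUIRED = "required"        # "must" practices
    RECOMMENDED = "recommended"  # "should" practices


@dataclass
class Statement:
    id: str
    stage: Stage
    level: Level
    text: str
    satisfied: bool = False
    evidence: list = field(default_factory=list)  # links to audit artefacts


def open_required_items(statements, stage):
    """Return required statements for a stage that still lack sign-off."""
    return [
        s for s in statements
        if s.stage is stage and s.level is Level.REQUIRED and not s.satisfied
    ]


register = [
    Statement(
        id="MON-01",
        stage=Stage.MONITOR,
        level=Level.REQUIRED,
        text="Monitor deployed models for drift and re-assess risk on change.",
    ),
]
print([s.id for s in open_required_items(register, Stage.MONITOR)])  # ['MON-01']
```

The point of a register like this is that "continuous and iterative" stops being a slogan: every stage has open items you can query, and an audit trail hangs off each one.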

Governing Autonomous AI: A Practical Framework for Agents
What it is: A 2026 framework by Singapore’s Infocomm Media Development Authority outlining governance practices for organisations deploying agentic AI systems.
Why it’s worth reading: The document translates familiar AI principles—like accountability and transparency—into the more complex reality of autonomous, action-taking agents. It structures governance into four areas: upfront risk assessment, human accountability, technical controls, and end-user responsibility. A key insight is how agent capabilities (e.g. access to tools, autonomy levels) directly shape risk exposure, especially when agents can act on real-world systems or interact with other agents. The framework also highlights new failure modes such as cascading errors across multi-agent systems and stresses that continuous monitoring is necessary since not all risks can be anticipated before deployment.
Best for: AI governance leads, risk and compliance professionals, and product teams working with autonomous or multi-agent systems who need a structured, implementation-oriented approach.
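The capability-to-risk link is easy to picture in code. The sketch below is a hypothetical rendering of that idea, not the framework's actual scoring method: autonomy level, write access to external systems, and agent-to-agent communication each raise an illustrative oversight tier.

```python
# A hypothetical illustration (not IMDA's methodology) of the idea that an
# agent's capabilities drive its risk exposure: more autonomy, write access
# to real systems, and agent-to-agent links each push oversight up a tier.
from dataclasses import dataclass
from enum import IntEnum


class Autonomy(IntEnum):
    SUGGEST_ONLY = 0       # human executes every action
    ACT_WITH_APPROVAL = 1  # human approves each action before it runs
    ACT_AND_REPORT = 2     # agent acts first, human reviews afterwards


@dataclass
class AgentProfile:
    autonomy: Autonomy
    writes_to_external_systems: bool  # e.g. payments, production databases
    talks_to_other_agents: bool       # surface for cascading errors


def oversight_tier(profile: AgentProfile) -> str:
    """Map capabilities to an illustrative oversight tier."""
    score = int(profile.autonomy)
    if profile.writes_to_external_systems:
        score += 2  # real-world side effects dominate the risk picture
    if profile.talks_to_other_agents:
        score += 1  # multi-agent chains can amplify a single failure
    if score >= 4:
        return "tier 3: human-in-the-loop plus continuous monitoring"
    if score >= 2:
        return "tier 2: post-hoc review plus automated guardrails"
    return "tier 1: standard logging"


print(oversight_tier(AgentProfile(Autonomy.ACT_AND_REPORT, True, True)))
# tier 3: human-in-the-loop plus continuous monitoring
```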

Governance Is the Real AI Advantage (CSA Survey 2025)
What it is: A 2025 survey report by the Cloud Security Alliance (with Google Cloud) analysing how organisations approach AI security, governance, and adoption, based on responses from 300 IT and security professionals.
Why it’s worth reading: The report highlights a clear pattern: organisations with mature AI governance are significantly more confident, quicker to adopt, and better prepared to manage risk. It shows that only 26% of respondents have comprehensive governance in place, yet those that do are more likely to train staff, adopt advanced AI (including agentic systems), and secure deployments effectively. At the same time, most organisations still prioritise familiar risks like data exposure (52%) over newer model-level threats, revealing a gap between traditional security thinking and emerging AI-specific risks.
Best for: AI governance leads, CISOs, and policy professionals who want a data-backed view of where organisations actually stand, and where governance makes the biggest difference.