✒️ Foreword
AI governance has a collaboration problem disguised as a compliance problem.
Most teams don’t fail because they’re missing a policy. They fail because AI stretches across too many seams: product and legal, security and procurement, data and HR, vendors and deployers, boards and builders. Everyone owns a slice, nobody owns the whole, and the handoffs become the risk.
What I like about the resources in this issue is how consistently they treat governance as a team sport. The board-focused playbook makes a simple but underrated move: it gives leadership a shared set of questions and a common vocabulary, so “responsible AI” isn’t a private conversation between the AI team and compliance. It becomes an organizational stance that can actually steer priorities.
The supply chain lens pushes the same point in a sharper way: you can’t outsource accountability, but you also can’t manage third-party AI risk from a single function. Procurement can’t do it alone. Security can’t do it alone. Risk can’t do it alone. The only workable model is collaborative ownership—clear roles, ongoing monitoring, and a rhythm of escalation that’s agreed before something drifts or breaks.
FRIA is the rights-based version of that lesson. Done well, it’s not a “desktop exercise.” It’s structured collaboration: involving affected groups early, aligning oversight and complaint pathways, and making the deployment decision with more than one set of incentives at the table.
Even the more technical pieces rhyme: control matrices and agent architectures are, at their core, coordination tools—ways to align who controls what, when, and with what evidence.
If governance is the operating system, collaboration is the network layer. The real question is: are teams collaborating by design—or only when something goes wrong?
— Kuba
Curator, AIGL 📚
☀️ Spotlight Resources

Responsible AI Policy Development: A Governance Playbook
What it is: A September 2025 playbook from The Hong Kong Chartered Governance Institute (HKCGI) that walks boards, governance professionals, and senior management through building a tailored Responsible AI policy—plus a ready-to-adapt policy template and director briefing questions.
Why it’s worth reading: It makes a useful point up front: governance isn’t only about containing AI risk—it’s also about avoiding the strategic risk of not adopting AI responsibly (lost competitiveness and weaker talent pull). It then turns principles into operational governance: six core principles (fairness; reliability & safety; privacy & security; inclusiveness; transparency; accountability) and concrete mechanics like a central AI use-case inventory (“You can’t govern what you can’t see”), AI-specific risk assessment add-ons, lifecycle artefacts (model cards, dataset datasheets), and post-deployment monitoring for drift, hallucinations, and security anomalies. A minimal sketch of what such an inventory record might capture follows this entry.
Best for: Company secretaries, GC/compliance, risk leaders, and boards who need a pragmatic starting kit (including a Tier 1 vs Tier 2 policy template and board-ready questions).
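
The playbook describes these mechanics in prose, but the inventory idea translates naturally into a data structure. Below is a minimal sketch, assuming hypothetical field names (the playbook doesn’t prescribe a schema): one record tying a use case to an accountable owner, a risk tier, lifecycle artefacts, and its monitoring scope.

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskTier(Enum):
    """Illustrative tiers; the playbook's Tier 1 / Tier 2 split refers to
    its policy templates, so this mapping is an assumption."""
    TIER_1 = "high impact / board visibility"
    TIER_2 = "standard oversight"


@dataclass
class AIUseCase:
    """One row in a central AI use-case inventory. Field names are
    hypothetical, not the playbook's schema."""
    name: str
    owner: str  # accountable business owner, not just the AI team
    risk_tier: RiskTier
    model_card_url: str | None = None          # lifecycle artefact
    dataset_datasheet_url: str | None = None   # lifecycle artefact
    monitored_for: list[str] = field(default_factory=list)  # e.g. drift, hallucinations

    def governance_gaps(self) -> list[str]:
        """Flag missing artefacts and monitoring for this use case."""
        gaps = []
        if self.model_card_url is None:
            gaps.append("missing model card")
        if self.dataset_datasheet_url is None:
            gaps.append("missing dataset datasheet")
        if not self.monitored_for:
            gaps.append("no post-deployment monitoring defined")
        return gaps


if __name__ == "__main__":
    use_case = AIUseCase(
        name="CV screening assistant",
        owner="Head of Talent Acquisition",
        risk_tier=RiskTier.TIER_1,
        model_card_url="https://intranet.example.com/model-cards/cv-screening",
    )
    print(use_case.governance_gaps())
    # ['missing dataset datasheet', 'no post-deployment monitoring defined']
```

The point of the gap check is the playbook’s own line: you can’t govern what you can’t see, and an inventory makes the gaps enumerable.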

Securing AI in the Supply Chain
What it is: A 2025 practical guide from Halbarad Risk Intelligence Inc. that reframes third-party risk management for AI vendors, with a phased assessment roadmap and control areas to review.
Why it’s worth reading: Instead of treating vendor reviews as annual “point-in-time” checkboxes, it argues AI requires continuous monitoring for drift, bias, automated decisions, and adversarial threats across the supply chain (pages 1, 18–22). It breaks assessments into priority levels—starting with model transparency/explainability, AI data privacy, and automated decision oversight (pages 9–14), then moving into fairness testing and regulatory alignment (pages 15–18). A crisp line that captures the mindset shift: AI models are often “black boxes,” creating accountability gaps when decisions must be explained to regulators or customers (page 9). The sketch after this entry shows one way that continuous, priority-phased cadence could be expressed.
Best for: TPRM / procurement, risk & compliance, security leaders, and AI governance teams building vendor due diligence for AI-powered services.
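
To make “continuous, not point-in-time” concrete, here is a small sketch of how a priority-phased review plan could be scheduled. The control-area labels follow the guide’s ordering as summarized above; the cadences, names, and structure are illustrative assumptions, not the guide’s methodology.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Control areas loosely mirroring the guide's priority phasing (pages 9-18);
# the cadences below are assumptions for illustration.
PRIORITY_1 = ["model transparency/explainability", "AI data privacy",
              "automated decision oversight"]
PRIORITY_2 = ["fairness testing", "regulatory alignment"]


@dataclass
class VendorReview:
    vendor: str
    control_area: str
    last_reviewed: date
    cadence_days: int  # a recurring cadence, not an annual checkbox

    def is_due(self, today: date) -> bool:
        return today >= self.last_reviewed + timedelta(days=self.cadence_days)


def build_review_plan(vendor: str, onboarded: date) -> list[VendorReview]:
    """Sketch: priority-1 controls get a tighter cadence than priority-2."""
    plan = [VendorReview(vendor, area, onboarded, cadence_days=30)
            for area in PRIORITY_1]
    plan += [VendorReview(vendor, area, onboarded, cadence_days=90)
             for area in PRIORITY_2]
    return plan


if __name__ == "__main__":
    for review in build_review_plan("Acme AI Services", date(2025, 1, 15)):
        print(f"{review.control_area}: due={review.is_due(date(2025, 3, 1))}")
```

The design choice worth noticing is that drift-prone controls get shorter cycles; the guide’s argument is exactly that AI risk doesn’t hold still between annual reviews.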

A Guide to Fundamental Rights Impact Assessment (FRIA)
What it is: A December 2025 guide by the European Center for Not-for-Profit Law (ECNL) and the Danish Institute for Human Rights (DIHR) on how deployers of high-risk AI systems should run a Fundamental Rights Impact Assessment (FRIA) under the EU AI Act.
Why it’s worth reading: It turns Article 27’s FRIA obligation into an operational process: what to document (use context, frequency, affected groups, risks, human oversight, complaint/governance measures), and how to run it across five phases (planning/scoping → assess & mitigate → deployment decision & public reporting → monitoring → stakeholder consultation). It’s also clear about common failure modes—like “desktop-only” assessments, weak stakeholder engagement, and doing the FRIA after procurement—plus how to avoid them. See the sketch after this entry for one way to structure that documentation.
Best for: Public authorities and other AI Act “deployers” who need a credible, rights-based FRIA workflow (and teams trying to align FRIA with—but not merge it into—GDPR DPIAs).
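
As a closing illustration, Article 27’s documentation duty can be treated as structured data rather than a one-off document. The sketch below mirrors the fields listed above; the names, phase enum, and readiness check are our assumptions, not ECNL/DIHR’s template.

```python
from dataclasses import dataclass, field
from enum import Enum


class FRIAPhase(Enum):
    """The guide's five phases, as summarized above."""
    PLANNING_SCOPING = 1
    ASSESS_AND_MITIGATE = 2
    DEPLOYMENT_DECISION = 3
    MONITORING = 4
    STAKEHOLDER_CONSULTATION = 5


@dataclass
class FRIARecord:
    """What a deployer documents under Article 27. Field names are our
    shorthand for the obligations, not the guide's wording."""
    use_context: str                  # where/how the high-risk system is used
    frequency: str                    # period and frequency of use
    affected_groups: list[str]
    identified_risks: list[str]
    human_oversight: str
    complaint_mechanisms: list[str]   # governance and complaint pathways
    phase: FRIAPhase = FRIAPhase.PLANNING_SCOPING
    stakeholders_consulted: list[str] = field(default_factory=list)

    def ready_for_deployment_decision(self) -> bool:
        """A 'desktop-only' FRIA fails here: nobody consulted, or no
        complaint pathway in place before deployment."""
        return bool(self.stakeholders_consulted) and bool(self.complaint_mechanisms)
```

Encoding the failure modes as a check is the point: a FRIA done after procurement, with empty stakeholder and complaint fields, is visibly incomplete.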