✒️ Foreword
Most organizations say they are “AI-ready.”
It’s an easy claim to make—and a hard one to prove.
Some have rolled out tools. Others have run trainings. A few have even drafted policies. On paper, it looks like progress. But readiness is not about exposure to AI. It’s about how people actually use it, question it, and take responsibility for its outcomes.
That’s where the illusion starts to crack.
Because using AI is becoming universal—but understanding it is not. Employees are expected to rely on systems they don’t fully grasp, evaluate outputs they can’t easily verify, and make decisions that still carry real consequences. Meanwhile, organizations often mistake activity for capability: more tools, more access, more sessions.
What’s missing is something quieter—and harder to measure. Judgment.
Real readiness shows up in how decisions are made under uncertainty. In whether teams know when to trust a system, when to challenge it, and when to step back entirely. It shows up in how incidents are handled, how risks are surfaced, and whether governance lives inside everyday workflows—or sits beside them.
And most organizations aren’t there yet—not because they lack technology, but because they haven’t built the habits to use it well.
So the question isn’t whether we’ve adopted AI.
It’s whether we’ve learned how to live with it.
— Kuba
Curator, AIGL 📚
☀️ Spotlight Resources

AI Literacy, Explained (and Operationalised)
What it is: A 2025 whitepaper by CFTE outlining what AI literacy means and how organisations can implement it across their workforce.
Why it’s worth reading: This report moves beyond vague definitions and breaks AI literacy into practical components—like understanding AI concepts, critically evaluating outputs, and recognising ethical risks. It argues that most employees (around 85%) will use AI rather than build it, making organisation-wide literacy essential—not just technical expertise.
A key takeaway: “AI literacy is the ability to understand, evaluate, and confidently use AI technologies… with critical thinking and ethical responsibility.”
The paper also highlights a common failure mode—treating AI training as a box-ticking exercise instead of building real understanding—leading to bias, poor decisions, or compliance risks.
Best for: Leaders, compliance professionals, and L&D teams designing AI upskilling programs—or anyone trying to translate “AI literacy” into something actionable.

From Principles to Practice: A Real-World AI Ethics Playbook
What it is: A Capgemini AI Futures Lab guide outlining how organizations can design and operationalize AI ethics governance frameworks.
Why it’s worth reading: The document moves beyond abstract principles and focuses on implementation—showing how ethics should be embedded into operating models, risk management, and decision-making processes. It emphasizes that ethical AI is “not a luxury” but a necessity due to rising risks, including bias, lack of explainability, and unintended societal harm . A key insight is that organizations should not adopt generic principles but develop their own, grounded in values and informed by tools like SWOT analysis across technological, psychological, and geopolitical dimensions . The guide also highlights the evolving role of AI ethicists—not as decision-makers, but as facilitators of risk awareness and accountability.
Best for: AI governance leads, risk and compliance teams, and organizations moving from AI principles to practical implementation.

208 Ways to Think About AI Risk (Peregrine Report 2025)
What it is: A 2025 report by Maximilian Schons, Samuel Härgestam, Gavin Leech, and Raymund Bermejo compiling 208 expert-proposed interventions to reduce AI risk, based on interviews with leading AI organizations and policymakers.
Why it’s worth reading:
Instead of offering a single framework, the report builds a structured “menu” of 208 initiatives across eight domains—from technical alignment and auditing to governance, international coordination, and crisis preparedness. What stands out is the urgency: contributors assume transformative AI could arrive within a few years, pushing for fast, practical interventions over long research cycles.
The synthesis of 48 expert interviews highlights four recurring constraints—readiness, coordination, standardization, and capacity—arguing that execution speed and ecosystem alignment matter as much as technical solutions.
Best for:
AI governance professionals, policy advisors, and funders looking for a concrete, idea-rich map of where to act—rather than another high-level discussion of AI risk.