
Living Repository of AI Literacy Practices v. 31.01.2025
This “living repository” shows how companies are starting to take AI literacy seriously, with real, varied, and often creative approaches to staff training.
Frontier AI is powerful—but how powerful is too powerful? This Berkeley-led paper proposes a framework for defining and managing intolerable risks, pushing governments and industry to stop waiting for disaster and start drawing lines. It’s a toolkit for acting before things go wrong.
This 2025 paper by Iren, Noldus, and Brouwer offers a much-needed guide to how the EU’s AI Act and the Commission’s new guidelines apply to the emotion recognition field—one of the most contentious areas of affective computing.
This white paper from the UAE’s AI Office captures a rare, high-level dialogue on responsible AI, convened at the World Governments Summit.
This landmark report brings together 96 global experts to create the first shared scientific baseline on general-purpose AI risks and safety. It doesn’t recommend policies—it equips governments, researchers, and regulators with what’s known (and what’s not) so far.
This U.S. Copyright Office report maps out the toughest economic questions about AI and copyright, without pretending to have the answers.
Accountability starts with visibility—especially when AI is doing the work.
The Artificial Intelligence Playbook for the UK Government (Feb 2025), created by the Government Digital Service, is the UK’s most comprehensive public guidance for safely deploying AI across government bodies.
The OWASP Top 10 LLM AI Cybersecurity & Governance Checklist (v1.1, April 2024) is a practical guide for organizations planning to use large language models.
Innovation’s racing ahead. Responsibility’s limping behind. And leadership? It’s stuck in the middle.
This isn’t just another AI hype deck. It’s a grounded framework to help real businesses figure out where to start, what matters, and what to watch out for.
Agentic AI, powered by LLMs, brings new risks that outpace traditional app security models. This guide is a much-needed attempt to slow things down and make sense of what we’re dealing with.
A detailed 5-step framework for evaluating technical safeguards against misuse of advanced AI systems. It calls for clear safeguard requirements, a documented plan, evidence gathering, ongoing assessment, and explicit justification of sufficiency.
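To make the five steps easier to picture, here is a minimal Python sketch of how an organization might track them as a single structured record; the class and field names are illustrative assumptions, not taken from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class SafeguardEvaluation:
    """Illustrative record covering the five steps named in the framework."""
    system_name: str
    # Step 1: state what the safeguards are required to prevent.
    safeguard_requirements: list[str] = field(default_factory=list)
    # Step 2: document the plan for meeting those requirements.
    documented_plan: str = ""
    # Step 3: gather evidence that the safeguards work as claimed.
    evidence: list[str] = field(default_factory=list)
    # Step 4: keep records of ongoing assessment after deployment.
    ongoing_assessments: list[str] = field(default_factory=list)
    # Step 5: give an explicit justification that the safeguards are sufficient.
    sufficiency_justification: str = ""

    def is_complete(self) -> bool:
        """True only when every one of the five steps has some content."""
        return all([
            self.safeguard_requirements,
            self.documented_plan,
            self.evidence,
            self.ongoing_assessments,
            self.sufficiency_justification,
        ])
```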
This report, published by the Paris Peace Forum’s Strategic Foresight Hub, proposes cyber policy as a blueprint for global AI risk governance. It focuses on adversarial use of AI in cyberspace, offering a scalable model for global coordination and institutional response.
Public authorities can use generative AI responsibly if they follow GDPR. That’s the message behind this clear and practical guide from Sweden’s data protection authority.
This practical guide by Rhymetec walks SaaS and tech companies through ISO 42001—the first international standard focused on AI management systems.
This OECD report proposes a unified framework for AI incident reporting—offering policymakers a timely and globally adaptable tool to track, assess, and learn from harms linked to AI.
This technical report by the Cooperative AI Foundation offers a comprehensive early attempt to map the risks that emerge when multiple advanced AI agents interact, adapt, and evolve together.
Lean meets Data & Generative AI (PDF, 797 KB).
This resource takes a close look at one of the most cited — and least consistently defined — goals in responsible AI: explainability.
This resource breaks new ground by tackling a blind spot in model lifecycle management: the phenomenon of “AI aging.” The authors propose that temporal degradation is distinct from known issues like concept drift.
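As a rough illustration of what watching for temporal degradation might look like in practice, the sketch below compares a model's accuracy across successive time windows and flags a sustained decline; the tolerance value and windowing logic are illustrative assumptions, not the authors' method.

```python
def detect_temporal_degradation(window_accuracies, tolerance=0.05):
    """Flag sustained performance decay across consecutive time windows.

    window_accuracies: accuracy measured on each successive time window,
    oldest first. tolerance: how far below the earliest window the model
    may fall before we flag degradation (illustrative default).
    """
    if len(window_accuracies) < 2:
        return False
    baseline = window_accuracies[0]
    latest = window_accuracies[-1]
    return (baseline - latest) > tolerance

# Example: accuracy slowly decaying over time even if the input
# distribution (and hence classic concept-drift checks) looks stable.
history = [0.91, 0.90, 0.89, 0.87, 0.84]
print(detect_temporal_degradation(history))  # True
```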
This is the book you’d hand to someone serious about understanding AI risk but unsure where to start. With clarity and precision, it lays out how AI could cause major harm—through misalignment, misuse, or sheer scale—and what we can do about it.
Aimed at helping technical and policy audiences evaluate privacy guarantees in practice, NIST SP 800-226 offers tools to reason about parameters, algorithms, and trust assumptions in differentially private systems.
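For readers new to the parameters such guarantees turn on, here is a minimal sketch of the classic Laplace mechanism, where the privacy budget epsilon sets the noise scale; the function and example values are illustrative and not drawn from NIST SP 800-226.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Return a differentially private estimate of true_value.

    sensitivity: the most the statistic can change when one person's
    record is added or removed. epsilon: the privacy budget; a smaller
    epsilon means more noise and a stronger privacy guarantee.
    """
    scale = sensitivity / epsilon  # noise scale grows as epsilon shrinks
    return true_value + np.random.laplace(loc=0.0, scale=scale)

# Example: privately release a count of 1,234 records (sensitivity 1).
print(laplace_mechanism(1234, sensitivity=1.0, epsilon=0.5))
```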
This report lays out a practical framework for evaluating US open-source AI policy through both ideological and geopolitical lenses. It avoids hype and polarization.
Curated Library of AI Governance Resources