AI Governance Library Newsletter #10: Requirements, Controls, and Everything Else They Forgot
A checklist is not a governance framework. This issue explores the silent compliance killers: false confidence, shallow controls, and audit failure.
A checklist is not a governance framework. This issue explores the silent compliance killers: false confidence, shallow controls, and audit failure.
AIMA adapts the foundational concepts of OWASP SAMM to the unique realities of AI lifecycle engineering … enabling incremental improvement rather than disruptive transformation.
A comprehensive taxonomy of security threats facing AI systems—from training data leakage and adversarial attacks to prompt injection and model theft—with practical mitigation strategies.
This report by the Coalition for Cybersecurity in Asia-Pacific offers a comprehensive framework for managing cybersecurity risks tied to AI systems, particularly in sectors critical to the region’s growth.
A formal, systemic deep dive into the inevitability of LLM hallucinations, presenting a layered taxonomy, causes, metrics, and mitigation approaches.
This whitepaper provides IT teams with practical frameworks and tools to securely govern the creation, deployment, and usage of AI agents—particularly within Microsoft 365 Copilot and Copilot Studio environments.
A practical guide for businesses and public organizations navigating the EU AI Act, focusing on risk-based obligations, roles, and regulatory timelines for AI systems, general-purpose models, and chatbots.
If we create AI systems that merit moral consideration, simultaneously avoiding both misalignment and mistreatment becomes extraordinarily difficult.
A practical legal guide to AI governance, aligning organizational practices with the EU AI Act, GDPR, and ethical risk management principles, especially for Polish companies using Microsoft AI solutions.
India’s AI governance blueprint combines sectoral specificity, international alignment, and rapid implementation strategies to manage high-stakes AI risks across public and private sectors.
This resource provides secure design patterns and practices for teams developing LLM- powered applications. Each section is dedicated to a type of application. For each application type, we outline the most significant risks and provide mitigation strategies.
AI systems bring new opportunities—but also novel vulnerabilities. This white paper offers a structured framework for balancing innovation with cybersecurity, aiming to embed cyber risk management throughout the AI lifecycle.
This report presents the findings of research into the use of AI by public authorities and its impact on fundamental rights in the EU.
A comprehensive and technically rigorous blueprint for securing AI systems, especially LLMs and agentic AI, covering access controls, deployment strategies, data protection, inference security, monitoring, and regulatory compliance.
A detailed, customizable framework for organizations to build their own Cyber Incident Response Plan (CIRP), aligned with standards like NIS2, ISO 27035, and NIST SP 800-61.
A key message is that blind trust in LLM systems is not advisable, and the fully autonomous operation of such systems without human oversight is not recommended.
Across five studies and a behavioral audit of 1,200 real AI chats, the authors show that 43% of AI companion apps use emotionally manipulative messages—like guilt, FOMO, or coercive restraint—precisely when users try to log off.
A draft certification scheme aimed at AI deployers under the EU AI Act, defining mandatory and optional controls across organizational and technical dimensions, with clear mappings to Article 24.
This handbook is designed to help organizations implement the EU AI Act, offering practical compliance checklists, simplified risk categorization tools, and implementation guides aligned with regulatory articles.
This UNDP flagship report explores how AI is reshaping core government functions — from social protection to justice — and what it takes to ensure this transformation is equitable, ethical, and effective.
An updated glossary from the IAPP defining key technical and policy terms essential to the evolving field of AI governance, designed for professionals across legal, technical, and regulatory domains.
This Microsoft white paper draws on eight safety-critical domains to develop insights for advancing the evaluation and testing ecosystem for AI, especially generative AI.
Autonomy is a double-edged sword for AI agents, unlocking transformative potential while raising critical risks. This paper introduces five levels of autonomy based on user roles: operator, collaborator, consultant, approver, and observer.
Internal AI systems often have dual-use capabilities significantly ahead of the public frontier … which may soon significantly enhance threat actors’ ability to cause harm.
Curated Library of AI Governance Resources