Structural Tests for AI Systems: 15 Checks for Regulators, Auditors and Compliance Officers
If a system fails even one, it’s structurally out of compliance and regulators can prove it – in minutes.
An analysis of nearly 460,000 AI model cards shows developers overwhelmingly report technical risks—while real-world harms arise more from misuse and misinformation.
Despite $30–40B invested in GenAI, 95% of businesses see no ROI. The divide is not in adoption, but in impact. This report maps the real reasons behind failure and what separates top performers from stalled efforts.
“88% of agentic AI early adopters are now seeing a positive ROI on gen AI.”
This roadmap sets out our ambitions for the third-party assurance market in the UK and the immediate actions that government will take to support this emerging sector.
This RAND working paper proposes a framework for identifying tasks that even Artificial General Intelligence (AGI) would be fundamentally unable to perform, grounded in constraints from physics, information theory, and computational complexity.
✒️ Foreword: While the U.S. frames AI as a tool for innovation, economic growth, and military advantage, China is building…
An interdisciplinary legal-philosophical textbook covering AI’s ethical foundations, regulatory implications, and governance challenges through technical, philosophical, and legal lenses.
A voluntary guide tailored to New Zealand businesses, showing how existing governance, legal, and operational practices can anchor AI risk management—without requiring a ground-up rebuild.
Organizations with a Chief AI Officer (CAIO) see 10% higher ROI on AI—and up to 36% more when using centralized operating models. But only 1 in 4 organizations have a CAIO today.
A detailed, scored assessment of how seven frontier AI developers approach safety—tracking real practices, not just promises, across 33 indicators in six domains.
Security-by-default for AI: practical, lifecycle-wide guidance to help providers build and operate AI systems that resist misuse, protect data, and remain reliable—even under attack.
The AGILE Index ranks countries on their AI governance readiness—not by intent, but by measurable capability, institutional maturity, and actual regulatory power.
A foundational governance model for AI systems that act autonomously, delegate tasks, or interface with external tools—built to handle autonomy, unpredictability, and systemic risk.
A protocol to connect AI models with external tools and data sources through a shared interface, solving the M×N integration problem for developers.
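The M×N problem the protocol addresses: M models and N tools would otherwise each need a bespoke adapter (M×N integrations), whereas a shared interface lets each side implement the protocol once (M+N). A minimal sketch of the idea, with hypothetical names not taken from any real SDK:

```python
# Illustrative only: a shared tool-calling interface. Any model client that
# emits ToolCall objects can talk to any ToolServer, so integrations scale
# as M + N instead of M * N. Names here are invented for illustration.
from dataclasses import dataclass


@dataclass
class ToolCall:
    """The shared wire format every model client emits."""
    tool: str
    arguments: dict


class ToolServer:
    """A tool provider implements the interface once, for all models."""

    def __init__(self):
        self._tools = {}

    def register(self, name, fn):
        self._tools[name] = fn

    def call(self, request: ToolCall):
        # Dispatch by tool name; no per-model adapter code needed.
        return self._tools[request.tool](**request.arguments)


server = ToolServer()
server.register("add", lambda a, b: a + b)
print(server.call(ToolCall(tool="add", arguments={"a": 2, "b": 3})))  # 5
```

The design point is that the protocol, not each integration pair, carries the compatibility burden.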
A modular toolkit for lawmakers, researchers, and advocates to shape effective, rights-respecting AI policy—built on global principles, practical levers, and tested language from 40+ jurisdictions.
A policy-first toolkit for identifying and mitigating overlooked or under-prioritized AI risks—especially those affecting marginalized groups, low-visibility use cases, and long-term governance gaps.
Culture is governance’s invisible backbone. This framework helps leaders identify, assess, and embed cultural levers—from values to incentives—that shape responsible AI behavior across organizations.
A lifecycle-driven framework offering 70+ mapped AI-specific risks, actionable safeguards, and crosswalks to ISO 42001, NIST AI RMF, OWASP Top 10 for LLMs, and more, aimed at embedding security into every phase of AI development and deployment.
A practical, product-integrated framework for managing AI risks across the ML lifecycle—rooted in Databricks’ tooling and aligned with enterprise data governance priorities.
A structured guide to identifying and mitigating privacy risks in LLMs—covering data leakage, user inference, training data exposure, and strategies for auditability and control.
A detailed legislative framework proposing oversight, obligations, and incentives for AI systems—covering everything from foundation models and open-source exemptions to risk-based licensing and whistleblower protection.
Scaling governance means more than more rules—it means better tools. This issue explores AI-assisted GRC, ethics-as-ROI, and governance for agents. Five standout reviews show where oversight is finally catching up with the systems we build.
A strategic proposal to professionalize AI risk governance.
Curated Library of AI Governance Resources