AI Governance Library

Interoperability in AI Safety Governance: Ethics, Regulations, and Standards

This policy report examines how ethical, legal, and technical interoperability can reduce fragmentation in AI safety governance, drawing on comparative country studies from China, South Korea, Singapore, and the United Kingdom.

⚡ Quick Summary

This UNU policy report tackles one of the hardest problems in global AI governance: how to make different ethical frameworks, regulatory regimes, and technical standards actually work together. Instead of arguing for full harmonisation, it introduces interoperability as a pragmatic “third way” between fragmentation and uniformity. Based on comparative research across four jurisdictions and three high-risk sectors (autonomous vehicles, education, cross-border data flows), the report maps where alignment already exists, where it breaks down, and what can realistically be done to close those gaps. Its strongest contribution lies in translating abstract governance debates into concrete instruments—self-certification, benchmarks, coordinated standards, and institutional mechanisms—that policymakers can operationalise today while staying aligned with UN-level initiatives like the Global Digital Compact.

🧩 What’s Covered

The report starts by defining AI safety governance and interoperability in a precise, multi-layered way. Interoperability is not treated as a purely technical issue but as a system spanning ethical alignment, legal coordination, and standards compatibility. A clear conceptual framework distinguishes ethical, legal, and technical interoperability and explains how each operates across institutional, human, data, and technological layers.

Methodologically, the study applies a regulatory learning approach, combining document analysis with stakeholder engagement across China, South Korea, Singapore, and the UK. It focuses on three sectors where AI risks are both high and cross-border by design: autonomous vehicles, AI in education, and international data flows. For each jurisdiction and sector, the report compares objectives, regulators, ethical principles, binding measures, targeted frameworks, technical standards, and key risks.

A substantial section is devoted to interoperability barriers. These include the voluntary nature of AI ethics, uneven adoption of international frameworks, fragmented liability models, lack of globally inclusive benchmarks, and asymmetries between Global North and Global South participation. The analysis shows how these barriers translate into real governance failures, such as unclear accountability in autonomous driving or incompatible ethical assumptions in educational AI systems.

The final sections outline detailed recommendations structured around ethical, regulatory, and technical interoperability. These include ethical self-certification reports coordinated through the UN, global AI safety benchmarks, interoperable liability models, coordinated data-flow safeguards, consensus-driven standards via ISO/IEC/IEEE, and scenario planning for catastrophic AI risks. A phased implementation timeline clarifies priorities and sequencing.

💡 Why it matters?

For policymakers and governance leads, this report offers a realistic blueprint for global coordination without requiring a single global AI law. It shows how interoperability can make AI Act-style regulation, international standards, and ethics frameworks mutually reinforcing rather than competing. In practice, it helps reduce compliance friction, improves cross-border trust, and strengthens AI safety assurance in high-risk domains. It also directly supports ongoing work around the Global Digital Compact, international AI safety summits, and emerging assurance regimes.

❓ What’s Missing

The report is primarily policy-oriented and stops short of providing hands-on implementation playbooks for organisations. More concrete examples of how companies could operationalise self-certification or map their internal controls to interoperability instruments would strengthen its practical uptake. The EU AI Act is referenced implicitly but not deeply analysed as a live interoperability stress test, which would be valuable given its global impact.

👥 Best For

Policy makers, AI regulators, standards bodies, international organisations, and senior governance professionals designing cross-border AI safety frameworks. It is also highly relevant for researchers and advisors working at the intersection of ethics, regulation, and technical standards.

📄 Source Details

Policy Report, United Nations University (UNU) Institute in Macau. Published October 2025. Focus on AI safety governance across China, South Korea, Singapore, and the United Kingdom.

📝 Thanks to

The authors and country report contributors coordinated by UNU, whose comparative and structured approach makes this one of the most practically useful interoperability resources currently available.

About the author
Jakub Szarmach
