
Supervision of Artificial Intelligence in Finance: Challenges, Policies and Practices

“Financial supervision therefore serves as the practical enforcement mechanism of financial regulation, ensuring that policies translate into effective oversight and resilient financial markets.”
Supervision of Artificial Intelligence in Finance: Challenges, Policies and Practices

⚡ Quick Summary

This OECD Artificial Intelligence Paper (No. 54, January 2026) examines how financial supervisors are adapting existing, technology-neutral regulatory frameworks to the rapid adoption of advanced AI, including GenAI and emerging agentic systems. Rather than calling for new AI-specific financial regulation, the report focuses on supervisory practice: interpretation, implementation, and enforcement of existing rules in a context of increasing model complexity, opacity, and third-party dependency. The core message is clear: most jurisdictions already have “enough law,” but not yet enough supervisory clarity, tools, data, and skills. The paper maps reported supervisory challenges, compares national approaches, and highlights practical mechanisms—guidance, sandboxes, SupTech, and cross-border coordination—that can help regulators balance innovation with market stability, integrity, and consumer protection. 

🧩 What’s Covered

The report is structured around three pillars: supervisory approaches, challenges, and emerging practices. First, it explains how AI oversight in finance is grounded in risk-based and technology-neutral supervision, with jurisdictions relying on legacy principles-based frameworks (e.g. the UK), AI-specific guidance layered onto financial rules (e.g. Singapore), or cross-sectoral AI regulation integrated into financial supervision (e.g. the EU AI Act). The OECD stresses that regulatory fragmentation and overlapping mandates risk creating uncertainty for both supervisors and firms.

Second, the paper provides a detailed taxonomy of supervisory challenges. These include limited visibility into AI adoption, gaps in monitoring data, heavy reliance on non-supervised third-party providers, and the increasing opacity of advanced models. Particular attention is paid to model risk management, explainability, robustness, bias and fairness, data governance, and the practical meaning of “human in the loop,” especially as systems become more autonomous. The analysis shows that supervisory difficulties largely mirror compliance difficulties faced by firms.

Third, the report outlines supervisory practices that can help close these gaps without undermining innovation. These include carefully calibrated interpretative guidance, regulatory sandboxes and live AI testing, enhanced public-private dialogue, investment in supervisory upskilling, and the deployment of AI-enabled SupTech tools. Case examples—from the UK FCA’s AI Live Testing to the ECB’s SupTech Hub—illustrate how supervisors are expanding their toolkits while staying aligned with principles-based regulation. 

💡 Why it Matters

This paper reframes AI governance in finance as a supervisory problem rather than a legislative one. It shows that trust, stability, and consumer protection increasingly depend on how supervisors interpret and operationalise existing rules in practice. For policymakers, it highlights the risk of regulatory over-layering and the importance of coordination. For supervisors, it underscores the urgency of technical capacity, shared taxonomies, and AI-aware supervisory tools. For financial institutions, it signals that AI innovation will be judged less by novelty and more by governance, robustness, accountability, and alignment with long-standing financial risk principles. 

❓ What’s Missing

The report deliberately avoids prescribing concrete technical thresholds for explainability, robustness, or fairness, which may leave practitioners wanting more operational benchmarks. It also touches only lightly on enforcement and sanctions, focusing instead on guidance and dialogue. Finally, while agentic AI is acknowledged as a future challenge, the paper stops short of offering supervisory models for highly autonomous systems acting across institutional or market boundaries. 

👥 Best For

Financial supervisors, central banks, market regulators, and policy teams responsible for AI oversight in finance. Also highly relevant for compliance leaders, AI governance professionals, and legal teams in regulated financial institutions seeking to understand supervisory expectations beyond formal regulation. 

📄 Source Details

OECD Artificial Intelligence Papers, No. 54
Title: Supervision of Artificial Intelligence in Finance: Challenges, Policies and Practices
Published: January 2026
OECD Directorate for Financial and Enterprise Affairs 

📝 Thanks to

OECD Committee on Financial Markets and the national supervisory authorities and experts who contributed empirical insights and case examples to this report. 

About the author
Jakub Szarmach
