⚡ Quick Summary
This consultation paper sets out the Monetary Authority of Singapore’s proposed Guidelines on Artificial Intelligence Risk Management for financial institutions. It formalises supervisory expectations for how AI risks should be governed, identified, assessed, and controlled across the full AI lifecycle. The document builds on MAS’s earlier FEAT principles and recent work on Generative AI, shifting from ethical principles to enforceable risk management expectations. It introduces a proportionate, risk-based framework applicable to all financial institutions, regardless of size, while explicitly addressing newer technologies such as Generative AI and autonomous AI agents. The Guidelines emphasise board-level accountability, enterprise-wide AI inventories, risk materiality assessments, lifecycle controls, and operational readiness. Overall, this is a comprehensive attempt to translate “responsible AI” into concrete governance, control, and supervisory practices for the financial sector.
🧩 What’s Covered
The paper is structured around a complete AI risk management framework tailored for financial institutions. It starts by defining the scope of AI broadly, covering models, systems, and use cases, while excluding rule-based tools. Applicability is universal, but implementation is explicitly proportionate, depending on how materially AI is embedded in business processes.
A central pillar is AI oversight. Boards and senior management are assigned explicit responsibilities for AI risk governance, including setting risk appetite, approving frameworks, ensuring organisational capabilities, and maintaining sufficient AI literacy. Where overall AI risk exposure is material, institutions are expected to establish dedicated cross-functional AI risk committees.
The Guidelines then move into core AI risk management systems. Financial institutions are expected to consistently identify AI usage across the organisation, maintain an accurate and up-to-date AI inventory, and perform structured risk materiality assessments. Materiality is assessed, at a minimum, along three dimensions: impact, complexity, and reliance, with both inherent and residual risk taken into account.
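To make the assessment concrete, here is a minimal illustrative sketch of how an institution might record a materiality assessment. The three dimensions (impact, complexity, reliance) come from the consultation paper; the high/medium/low rubric, the scoring logic, the threshold, and all names below are hypothetical assumptions, not values prescribed by MAS.

```python
# Illustrative sketch of a risk materiality assessment record.
# Dimension names follow the consultation paper; the rubric, scores,
# and threshold are hypothetical assumptions for illustration only.
from dataclasses import dataclass

LEVELS = {"low": 1, "medium": 2, "high": 3}

@dataclass
class MaterialityAssessment:
    use_case: str
    impact: str      # consequence of AI error or misuse for customers or the FI
    complexity: str  # e.g. opaque models, GenAI, agentic behaviour
    reliance: str    # degree of automation vs. human oversight

    def inherent_score(self) -> int:
        # A conservative rule: materiality is driven by the highest-rated dimension.
        return max(LEVELS[self.impact], LEVELS[self.complexity], LEVELS[self.reliance])

    def is_material(self, threshold: int = 3) -> bool:
        return self.inherent_score() >= threshold

# Example: a GenAI chatbot embedded in a customer-facing process.
chatbot = MaterialityAssessment(
    use_case="customer-facing GenAI chatbot",
    impact="high", complexity="high", reliance="medium",
)
print(chatbot.is_material())  # True under this illustrative rubric
```

In practice, residual risk would be assessed after controls are applied, and the rubric would be calibrated to the institution's own risk appetite rather than a fixed threshold.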
The most detailed section concerns AI lifecycle controls. Expectations span data management, transparency and explainability, fairness, human oversight, third-party AI risk management, model and feature selection, evaluation and testing, technology and cybersecurity, reproducibility and auditability, pre-deployment reviews, post-deployment monitoring, incident management, change management, and decommissioning. Special attention is paid to Generative AI and AI agents, including hallucinations, autonomy risks, adversarial attacks, and concentration risk.
Finally, the paper addresses AI capability and capacity. Institutions must ensure adequate skills, training, resources, and technology infrastructure to support safe and effective AI deployment. An annex provides practical guidance on proportionate application, with concrete examples distinguishing assistive AI use from AI that is integrated into core business processes.
💡 Why It Matters
This consultation represents a major step in operationalising AI governance for the financial sector. It moves beyond high-level ethics and into concrete supervisory expectations that boards, risk functions, and business teams can be held accountable for. By explicitly covering Generative AI and AI agents, MAS signals that emerging AI risks are no longer hypothetical. The emphasis on inventories, materiality, lifecycle controls, and cross-functional oversight closely mirrors how financial risk is already managed, making AI risk management more actionable and auditable. For institutions operating globally, the Guidelines also position Singapore as a jurisdiction with mature, implementation-focused AI supervision.
❓ What’s Missing
While comprehensive, the Guidelines remain high-level and principles-based in several areas. There is limited guidance on how to operationalise materiality scoring in practice, especially for complex GenAI use cases. The interaction between these Guidelines and existing model risk management frameworks could be clearer, particularly for institutions already operating under multiple supervisory regimes. The paper also leaves open how supervisory enforcement or examinations will assess compliance maturity over time. Finally, while AI agents are acknowledged as high-risk, expectations for agent-specific controls remain relatively abstract.
👥 Best For
This document is best suited for board members, senior management, risk and compliance leaders, model risk teams, and AI governance professionals in financial institutions. It is also highly relevant for regulators, supervisors, and policy professionals looking for a practical reference model for AI risk management. Vendors and third-party AI providers serving regulated financial institutions will also find it useful for understanding forthcoming client expectations.
📄 Source Details
Monetary Authority of Singapore
Consultation Paper P017-2025
Published: November 2025
Public consultation deadline: 31 January 2026
📝 Thanks to
Monetary Authority of Singapore and the industry participants involved in FEAT, Veritas, and Project MindForge for advancing practical, risk-based AI governance in the financial sector.