⚡ Quick Summary
This report presents a 2025 snapshot of how organizations are securing AI while simultaneously using AI to strengthen security itself. Based on a global survey of 300 IT and security leaders, it shows a widening gap between organizations with mature AI governance and those operating without it. Governance maturity emerges as the single strongest predictor of secure, scalable AI adoption. Security teams are no longer laggards; they are becoming early adopters of AI, including agentic systems, even as leadership confidence in securing AI remains uneven. The report positions governance not as a compliance layer, but as the practical foundation enabling trustworthy AI at scale, especially as generative and agentic AI move from pilots into core business operations.
🧩 What’s Covered
The report is structured around six key findings that together map the current AI security and governance landscape. It begins by demonstrating that formal AI governance correlates strongly with organizational maturity: entities with comprehensive governance are significantly more likely to adopt agentic AI, train staff on AI security tools, and express confidence in protecting AI systems. Governance also reduces “shadow AI” by enabling structured, sanctioned adoption rather than informal workarounds.
A major shift highlighted is the role of security teams. Over 90% of organizations are already testing or planning to use AI for security use cases such as threat detection, red teaming, and access control. Nearly two-thirds plan to deploy agentic AI in security workflows within a year, marking a clear inflection point where security is shaping AI adoption instead of reacting to it.
The report also details enterprise LLM adoption patterns. Most organizations now operate multi-model environments, using an average of 2.6 models, but adoption is consolidating around a small group of dominant providers (GPT, Gemini, Claude, LLaMA). This introduces new governance challenges related to vendor concentration, resilience, and interoperability.
Leadership dynamics are examined in depth. While executive enthusiasm for AI remains high, 72% of respondents are neutral or not confident in their organization’s ability to secure AI. This disconnect underscores gaps in risk understanding, skills, and cross-functional coordination.
Ownership models are still evolving. AI deployment responsibility is distributed across AI/ML teams, IT, and cross-functional groups, while security ownership is consolidating, with security teams leading AI protection efforts in over half of organizations. Funding responsibility mirrors this hybrid model, spanning CISOs, CTOs, CIOs, and business units.
Finally, the report analyzes persistent challenges: understanding AI risks, closing skills gaps, and managing data exposure. Sensitive data leakage and regulatory compliance dominate risk perceptions, while model-level threats such as prompt injection, model drift, and data poisoning remain underprioritized despite their growing relevance.
💡 Why it matters?
The report makes a compelling case that AI governance is no longer optional infrastructure. It directly links governance maturity to operational confidence, workforce readiness, and responsible innovation. For organizations scaling generative or agentic AI, it highlights a critical risk: treating AI security as an extension of cloud or privacy controls is insufficient. The findings reinforce the need to integrate governance frameworks, such as CSA AICM and Secure AI Frameworks, into enterprise risk management before AI becomes deeply embedded and harder to control.
❓ What’s Missing
While the report is strong on diagnosis, it is lighter on prescriptive implementation detail. It does not provide concrete governance operating models, role definitions, or maturity benchmarks that organizations could directly adopt. Model-level safety risks are acknowledged but not explored with the same depth as data and compliance risks, leaving a gap for practitioners seeking guidance on operationalizing TEVV, behavioral monitoring, or agent oversight in production systems.
👥 Best For
CISOs, AI governance leads, security architects, and executives responsible for scaling AI safely across the enterprise. It is especially valuable for organizations transitioning from AI experimentation to production and looking to justify investment in governance, training, and security capabilities.
📄 Source Details
Cloud Security Alliance & Google Cloud, The State of AI Security and Governance, Survey Report, 2025. Based on responses from 300 IT and security professionals across regions, industries, and organization sizes.
📝 Thanks to
Hillary Baron (Lead Author), with contributions from Stephen Lawton, Daniele Catteddu, Rich Mogull, John Yeoh, Anton Chuvakin, and Douglas Ko.