AI Governance Library

2024 CCAPAC Report: AI and Cybersecurity

This report by the Coalition for Cybersecurity in Asia-Pacific offers a comprehensive framework for managing cybersecurity risks tied to AI systems, particularly in sectors critical to the region’s growth.

⚡ Quick Summary

The 2024 CCAPAC Report: AI and Cybersecurity addresses the dual role of AI as both a potential cybersecurity asset and a source of new vulnerabilities. Produced by the Coalition for Cybersecurity in Asia-Pacific (CCAPAC), the report explores how AI is reshaping critical sectors like healthcare, finance, and transportation, and provides a robust framework for mitigating emerging AI-related cybersecurity risks. Drawing from regional regulations and international standards, it presents both technical and policy-level recommendations. It also proposes six foundational pillars for a region-wide AI cybersecurity framework, tailored to the diverse regulatory landscape of the Asia-Pacific. This report is especially timely given the rapid proliferation of generative AI and the increasing use of AI in national infrastructure.

🧩 What’s Covered

The report is structured into six chapters:

  1. Introduction – Frames AI as a transformative technology with unique cybersecurity risks due to its autonomous, adaptive, and complex nature.
  2. Current AI Landscape – Differentiates between predictive and generative AI systems and provides a clear taxonomy of AI types and use cases across seven sectors, including:
    • Government and public services (e.g., Singapore’s Smart Nation)
    • Healthcare (e.g., AI in diagnosis and robotic surgery)
    • Finance (e.g., fraud detection and robo-advisors)
    • Transportation (e.g., autonomous vehicles)
    • Energy, Manufacturing, and Retail
  3. AI and Cybersecurity – Outlines four categories of cybersecurity risks:
    • Data Risks (e.g., data poisoning, inference attacks)
    • Model Risks (e.g., evasion, backdoors, model theft)
    • Infrastructure Risks (e.g., DoS, supply chain threats)
    • Application Risks (e.g., prompt injection, ethical misuse)
  4. Framework for Addressing AI Cybersecurity Risks – Proposes a six-component governance framework and calls for alignment with standards such as ISO/IEC 42001 and the NIST AI RMF, as well as regional laws (e.g., Singapore’s PDPA, Australia’s SOCI Act). The six components are:
    • Guidance & oversight
    • Lifecycle management
    • Data governance
    • Model security
    • Transparency & accountability
    • Incident response
  5. Policy Recommendations – Calls for:
    • AI-specific cybersecurity strategies and guidelines
    • R&D investment
    • Workforce and AI literacy development
    • Cross-border regulatory harmonization
  6. Conclusion – Emphasizes collaboration, shared standards, and anticipatory regulation as key to a secure AI future in the Asia-Pacific.

💡 Why It Matters

AI deployment is surging across critical infrastructure, yet regional cybersecurity frameworks often lag in recognizing AI-specific threats. This report fills that gap with a regionally contextualized, risk-based model for integrating AI into national cybersecurity planning. The proposed framework is especially relevant for policymakers updating national strategies, regulators seeking international alignment, and organizations building AI-enabled services. Its emphasis on aligning with existing laws and international standards such as ISO/IEC 42001 and the NIST AI RMF bridges the regulatory and technical domains, a necessity for practical AI governance. It also supports proactive security-by-design practices during AI development, not just post-deployment defenses.

❓ What’s Missing

  • Enforcement mechanisms: While policy alignment and standards are discussed, the report doesn’t cover how to enforce compliance, particularly in cross-border contexts.
  • Sector-specific implementation guides: There is a general overview of sector applications, but detailed roadmaps or case studies for implementation are lacking.
  • Risk quantification: The framework would benefit from including threat modeling templates or metrics for assessing AI-specific risk.
  • Stakeholder roles: While multi-stakeholder involvement is encouraged, responsibilities for governments, industry, and academia are not clearly delineated.

👥 Best For

  • Policy-makers and national cybersecurity agencies in Asia-Pacific updating or drafting AI-related strategies
  • Regulators and AI auditors seeking alignment with international best practices
  • Corporate cybersecurity teams managing AI deployment in critical infrastructure
  • Academic researchers and AI developers exploring governance, risk, and compliance
  • Think tanks and intergovernmental bodies shaping cross-border AI safety cooperation

📄 Source Details

Title: 2024 CCAPAC Report: AI and Cybersecurity

Authors: Coalition for Cybersecurity in Asia-Pacific (CCAPAC)

Published: 2024

Publisher: Coalition for Cybersecurity in Asia-Pacific (https://ccapac.asia)

Length: 36 pages

Cited Standards: ISO/IEC 42001, ISO/IEC 23894, NIST AI RMF, OWASP AI Security Guide, MITRE ATLAS

Referenced Regulations: Singapore PDPA, Australia’s SOCI Act, Japan’s AI Guidelines, South Korea’s PIPA

📝 Thanks to

The Coalition for Cybersecurity in Asia-Pacific for producing this timely and actionable policy guide, and for integrating international AI governance best practices into an Asia-Pacific context.

About the author
Jakub Szarmach
