AI Governance Library

Guidelines for secure AI system development

Security-by-default for AI: practical, lifecycle-wide guidance to help providers build and operate AI systems that resist misuse, protect data, and remain reliable—even under attack.

⚡ Quick Summary

This international guideline sets a clear baseline for securing AI systems across their full lifecycle—from design through deployment and maintenance. Jointly published by the UK’s NCSC, US CISA, and over 20 global cybersecurity agencies (including BSI, NSA, CSA Singapore, and NASK), it promotes security-by-design principles specifically tailored to ML-based systems. It’s not a checklist—it’s a shared reference that helps providers harden AI workflows against adversarial manipulation, data leakage, and downstream misuse. Structured around four pillars (secure design, development, deployment, and operation), it’s practical, focused, and aligned with NIST’s SSDF and CISA’s “Secure by Design” initiative.

🧩 What’s Covered

The guideline is built around four development phases, each with actionable sub-recommendations:

  1. Secure Design
    • Raise awareness of AI-specific threats (e.g. adversarial ML, prompt injection, model inversion)
    • Apply threat modelling early, across the full system and supply chain
    • Balance performance with controls: default to least privilege, sandbox third-party models, vet APIs
    • Integrate AI-specific guardrails, risk flags, and fallback options from the outset 
  2. Secure Development
    • Secure AI supply chains: use trusted components, apply SLSA, maintain SBOMs
    • Document models, datasets, prompts, and failure modes using tools like model cards
    • Identify and manage technical debt—especially relevant for fast-shipping ML teams 
  3. Secure Deployment
    • Apply cryptographic protections to models and data (e.g. hashes for integrity; see the first sketch after this list)
    • Limit access to model internals and outputs
    • Plan for and practice incident response tailored to AI systems
    • Release only after security evaluation (e.g. red teaming, audit logging, usage guidance) 
  4. Secure Operation and Maintenance
    • Monitor for model drift, misuse, and malicious input patterns (see the drift sketch after this list)
    • Design update mechanisms that balance automation with auditability
    • Participate in responsible disclosure and AI threat information-sharing forums 
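
To ground two of these recommendations, here are minimal sketches in Python. Neither comes from the guideline itself, and the helper names, file names, and thresholds are illustrative assumptions. The first shows the deployment-phase idea of pinning cryptographic hashes for model artifacts at release time and re-checking them before the model is loaded:

```python
import hashlib
from pathlib import Path

def sha256_digest(path: Path) -> str:
    """Stream a file through SHA-256 so large weight files need not fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_artifacts(manifest: dict[str, str], root: Path) -> list[str]:
    """Return the names of artifacts whose current digest differs from the pinned one."""
    return [name for name, expected in manifest.items()
            if sha256_digest(root / name) != expected]

# At release time:   manifest = {name: sha256_digest(root / name) for name in artifact_names}
# Before each load:  tampered = verify_artifacts(manifest, root); refuse to serve if non-empty.
```

Note that the manifest itself needs protection too (for example, a detached signature): hashes only help if an attacker who can swap the weights cannot also swap the expected digests.

The second sketch illustrates operation-phase drift monitoring with a two-sample Kolmogorov-Smirnov test, comparing live values of one input feature against a snapshot captured at release time; a small p-value signals that the live distribution has shifted:

```python
import numpy as np
from scipy.stats import ks_2samp

def drift_alert(reference: np.ndarray, live: np.ndarray, p_threshold: float = 0.01) -> bool:
    """Flag drift when the KS test rejects 'same distribution' at p_threshold."""
    _statistic, p_value = ks_2samp(reference, live)
    return p_value < p_threshold

# Synthetic demo: live traffic whose mean has shifted relative to the release snapshot.
rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, size=5_000)  # release-time snapshot
live = rng.normal(0.4, 1.0, size=5_000)       # recent production inputs
if drift_alert(reference, live):
    print("Input drift detected - trigger review or retraining.")
```

In a real pipeline you would run a test like this per feature on a schedule and act on sustained alerts rather than single ones, since repeated testing inflates false positives.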

Appendices and Links

The final section compiles references to tools (like IBM’s Adversarial Robustness Toolbox), ISO/IEC 5338, MITRE ATLAS, and the Hiroshima Process principles—building a bridge between best practices and implementation-ready assets.

💡 Why It Matters

AI security is not just a technical add-on—it’s a governance imperative. This guide sets an internationally agreed floor for how to develop and maintain AI systems responsibly. It helps unify fragmented security practices under a shared vocabulary and lifecycle structure. For developers, it clarifies what “secure by design” really means in an ML context. For policymakers, it’s a credible tool to anchor procurement, auditing, and risk governance.

❓ What’s Missing

  • Implementation Examples: No real-world case studies or sample deployments.
  • Governance Integration: Minimal discussion of accountability layers or organizational policy hooks.
  • LLM-Specific Depth: Mentions prompt injection and user feedback corruption, but doesn’t go deep on agentic or autonomous AI safeguards.

👥 Best For

  • ML engineers building or scaling secure AI pipelines
  • Security teams securing deployed AI models or APIs
  • Procurement and oversight bodies assessing AI risk posture
  • Product leads setting internal standards or evaluating third-party tools
  • Regulators seeking alignment with global security-by-design norms

📄 Source Details

Title: Guidelines for Secure AI System Development

Authors: NCSC (UK), CISA (US), NSA, BSI, CSA Singapore, NASK, ANSSI, and 20+ national cyber agencies

Date: November 2023

Length: 20 pages

License: Open Government Licence v3.0 (UK)

Structure: 4 lifecycle phases with detailed sub-guidelines, plus global best-practice references

📝 Thanks to

The UK’s National Cyber Security Centre (NCSC), CISA, and the broad alliance of national cybersecurity authorities for shaping one of the most harmonized, technical-yet-practical documents on AI system security to date. Special thanks to industry contributors from OpenAI, DeepMind, IBM, Microsoft, Anthropic, and RAND for grounding it in real-world challenges.

About the author
Jakub Szarmach
