⚡ Quick Summary
This draft ETSI European Standard lays down baseline cyber-security requirements for AI systems across five lifecycle phases: secure design, development, deployment, maintenance, and end-of-life. It defines stakeholder roles (Developers, System Operators, Data Custodians, End-users, Affected entities) and attaches concrete “shall/should” provisions to 13 principles. Highlights include AI-specific threat modeling (poisoning, inversion), secure APIs and least-privilege environments, audit trails for models/datasets/prompts, supply-chain controls, independent security testing, transparent end-user communications, continuous monitoring, and secure decommissioning. It signposts related frameworks (EU AI Act, NIST AI RMF, ENISA, OWASP, MITRE ATLAS) and points to ETSI conformance assessment guidance. For CISOs, MLOps leads, and AI platform owners, it’s a pragmatic baseline to operationalize AI security in enterprise settings.
🧩 What’s Covered
Scope & intent. The document targets deployed AI systems (including generative AI) and positions “AI security” as a subset of cybersecurity, offering baseline, testable provisions rather than research guidance. It maps principles to ISO/IEC 22989 lifecycle stages and references ETSI TS 104 216 for evaluation and validation.
Stakeholders. A concise model clarifies responsibilities for Developers, System Operators, Data Custodians, End-users, and Affected entities; a tabular definition appears on p. 9 to help align security ownership across the AI supply chain.
Five phases, 13 principles.
- Secure Design (P1–P4): role-based AI security training; security-by-design weighed alongside functionality; documented audit trails for models, datasets, and prompts; due diligence on external components; permission minimization when integrating with other systems; explicit, AI-specific threat modeling covering poisoning, model inversion, and membership inference (see the threat-register sketch after this list); human oversight capabilities and verifiable controls.
- Secure Development (P5–P9): asset inventories (including interdependencies); versioning and authentication for AI assets; disaster recovery for AI-specific attacks; data/input checks and sanitization; protection for confidential training data and weights; access-control frameworks for APIs, models, and pipelines; dedicated, least-privilege training/tuning environments; a vulnerability disclosure policy; incident and recovery plans; secure software supply-chain practices (including justification and compensating controls for poorly documented components); re-evaluation of released models; advance notice to end-users before updates; thorough documentation of design and maintenance, covering security-relevant details such as sources of training/fine-tuning data, guardrails, and failure modes; cryptographic hashes for shared model components; provenance logging for public data (URL and retrieval timestamp); and audit logs for system-prompt changes (the audit-trail sketch after this list illustrates the last three).
- Secure Deployment (P10): clear end-user communications on how data is used, accessed, and stored (e.g., whether it feeds retraining or human review); accessible guidance on limitations and failure modes; proactive notice of security updates; incident-support commitments documented contractually.
- Secure Maintenance (P11–P12): timely security updates and patches (with contingency plans when updates aren’t possible), re-testing after major updates, and versioned/beta channels; logging of system and user actions; anomaly and drift detection (see the rolling-statistics sketch after this list); internal-state monitoring to enable analytics and threat response; longitudinal performance monitoring.
- Secure End-of-Life (P13): controlled transfer or disposal of training data and models with Data Custodian involvement; secure deletion of data and configurations on decommissioning (a crypto-shredding sketch follows the list).
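To make the P4 threat-modeling provision concrete, here is a minimal sketch of an AI-specific threat register in Python. The threat names (poisoning, model inversion, membership inference) come from the draft; the lifecycle mapping, asset names, and mitigations are illustrative assumptions, not requirements from the standard.

```python
from dataclasses import dataclass

@dataclass
class AIThreat:
    """One entry in an AI-specific threat register (schema is illustrative)."""
    name: str
    lifecycle_phase: str        # e.g. "design", "development", "deployment"
    affected_assets: list[str]  # models, datasets, prompts, pipelines
    mitigations: list[str]

# Threat names are drawn from the draft; the mappings below are example assumptions.
THREAT_REGISTER = [
    AIThreat("data poisoning", "development",
             ["training data", "model weights"],
             ["input sanitization", "provenance logging", "holdout evaluation"]),
    AIThreat("model inversion", "deployment",
             ["model API", "training-data confidentiality"],
             ["rate limiting", "output filtering", "least-privilege API access"]),
    AIThreat("membership inference", "deployment",
             ["training-data confidentiality"],
             ["regularization", "differentially private training", "query auditing"]),
]

def threats_for_phase(phase: str) -> list[AIThreat]:
    """Return register entries relevant to one lifecycle phase."""
    return [t for t in THREAT_REGISTER if t.lifecycle_phase == phase]

if __name__ == "__main__":
    for t in threats_for_phase("deployment"):
        print(f"{t.name}: mitigate via {', '.join(t.mitigations)}")
```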
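Several P5–P9 provisions (hashes for shared model components, URL-and-timestamp provenance for public data, audit logs for system-prompt changes) reduce to an append-only audit trail. A minimal sketch follows; the draft prescribes neither a record schema nor a log format, so SHA-256, the JSONL file, and fields like `actor` and `retrieved_at` are assumptions.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

AUDIT_LOG = Path("ai_audit_trail.jsonl")  # assumed append-only JSONL log

def sha256_of(path: Path) -> str:
    """SHA-256 of a file, streamed in chunks to handle large model artifacts."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def log_event(event: dict) -> None:
    """Append one timestamped record to the audit trail."""
    event["logged_at"] = datetime.now(timezone.utc).isoformat()
    with AUDIT_LOG.open("a") as f:
        f.write(json.dumps(event) + "\n")

def record_model_artifact(path: Path, version: str) -> None:
    """Hash a shared model component so recipients can verify integrity."""
    log_event({"type": "model_artifact", "file": str(path),
               "version": version, "sha256": sha256_of(path)})

def record_public_data(url: str) -> None:
    """Provenance entry for public training data: URL plus retrieval time."""
    log_event({"type": "data_provenance", "url": url,
               "retrieved_at": datetime.now(timezone.utc).isoformat()})

def record_prompt_change(old_prompt: str, new_prompt: str, actor: str) -> None:
    """Audit a system-prompt change; store hashes rather than raw prompt text."""
    log_event({"type": "system_prompt_change", "actor": actor,
               "old_sha256": hashlib.sha256(old_prompt.encode()).hexdigest(),
               "new_sha256": hashlib.sha256(new_prompt.encode()).hexdigest()})
```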
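For the P11–P12 monitoring provisions, one simple way to surface anomalies and drift is a rolling statistical check over a live metric. The window size and z-score threshold below are arbitrary example values; a production deployment would likely layer this under dedicated observability tooling.

```python
from collections import deque
from statistics import mean, stdev

class DriftMonitor:
    """Flag when a live metric drifts from its recent baseline (illustrative)."""

    def __init__(self, window: int = 200, z_threshold: float = 3.0):
        self.baseline = deque(maxlen=window)  # recent in-spec observations
        self.z_threshold = z_threshold

    def observe(self, value: float) -> bool:
        """Return True if `value` is anomalous relative to the rolling window."""
        if len(self.baseline) >= 30:  # need enough samples for stable stats
            mu, sigma = mean(self.baseline), stdev(self.baseline)
            if sigma > 0 and abs(value - mu) / sigma > self.z_threshold:
                return True  # anomalous: do not fold into the baseline
        self.baseline.append(value)
        return False

# Example: monitor per-request model confidence scores.
monitor = DriftMonitor()
for score in [0.91, 0.89, 0.92] * 20 + [0.35]:
    if monitor.observe(score):
        print(f"drift alert: score={score}")
```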
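Secure deletion under P13 is harder than it sounds: SSDs and copy-on-write filesystems make in-place overwrites unreliable. One widely used pattern, not mandated by the draft, is crypto-shredding: keep artifacts encrypted at rest and destroy the key at decommissioning. The sketch below uses the `cryptography` library's Fernet API; file names and the overwrite step are illustrative.

```python
# Requires: pip install cryptography
from pathlib import Path
from cryptography.fernet import Fernet

def encrypt_at_rest(data: bytes, ciphertext_path: Path, key_path: Path) -> None:
    """Store data encrypted; the key file becomes the single point of erasure."""
    key = Fernet.generate_key()
    key_path.write_bytes(key)
    ciphertext_path.write_bytes(Fernet(key).encrypt(data))

def crypto_shred(key_path: Path) -> None:
    """Decommission by destroying the key; ciphertext is then unrecoverable.

    Note: overwriting before unlink is best-effort on journaling/SSD storage;
    an HSM- or KMS-held key with a destroy operation is the stronger option.
    """
    size = key_path.stat().st_size
    with key_path.open("r+b") as f:   # best-effort overwrite of key bytes
        f.write(b"\x00" * size)
        f.flush()
    key_path.unlink()

if __name__ == "__main__":
    encrypt_at_rest(b"model weights or training data",
                    Path("artifact.enc"), Path("artifact.key"))
    crypto_shred(Path("artifact.key"))  # data in artifact.enc is now inert
```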
Reference ecosystem. Informative references span EU AI Act, ISO/IEC 27001, NIST AI RMF/AML taxonomy, ENISA, NCSC, OWASP AI Exchange, MITRE ATLAS, and vendor frameworks—helpful for mapping requirements into existing controls.
💡 Why It Matters
Enterprises are rapidly embedding AI into production stacks where traditional AppSec alone falls short. This draft standard translates scattered AI-security ideas into lifecycle-anchored, role-assigned, “shall/should” provisions that can plug into ISMS and SDLC processes. It operationalizes essentials—like provenance for public data, cryptographic verification of model artifacts, secure API exposure, and incident/recovery planning—so organizations can reduce poisoning/abuse risks, prevent leakage or reverse-engineering, and keep users informed. With explicit pointers to conformance assessment and adjacent frameworks, it offers a credible path to demonstrable due diligence under the EU AI Act and sectoral expectations.
❓ What’s Missing
- Depth on measurement: Few concrete metrics (e.g., robustness thresholds, logging retention minima, alerting SLAs) to prove control effectiveness.
- Open-source nuance: Limited differentiated guidance for open-weight/OSS models beyond general cautions.
- Agentic/retrieval specifics: Little on tool-use agents, RAG chains, or MCP-style capabilities (prompt-injection surfaces, data exfiltration via tools).
- Privacy engineering detail: References data protection roles but doesn’t specify privacy-by-design patterns (e.g., DP/noising, federated learning, subject-rights handling).
- Assurance artifacts: Mentions hashes and documentation, but lacks standardized templates (model/system cards, SBOM-for-models, evaluation reports) and watermarks/traceability expectations.
👥 Best For
CISOs and security architects integrating AI; MLOps and platform teams building/tuning models; procurement and vendor-risk managers assessing AI suppliers; compliance/legal teams mapping to EU AI Act obligations; product leaders needing a baseline for AI-security policy and playbooks.
📄 Source Details
Draft ETSI EN 304 223 V2.0.0 (2025-09), produced by ETSI Technical Committee SAI; 16 pages; in the EN Approval Procedure (ENAP). It includes 13 principles across five phases, with pointers to ETSI TR 104 128 (guide) and ETSI TS 104 216 (conformance). The stakeholder table appears on p. 9 and the principle-by-principle provisions on pp. 10–15.
📝 Thanks to
ETSI Technical Committee SAI and contributors to the referenced frameworks and guidance ecosystems acknowledged within the draft.