AI Governance Library

Incident Report for Serious Incidents under the AI Act (High-risk AI systems)

A seven-page, form-style reporting template (v1.0.0, SB-10407) for notifying market surveillance authorities of serious incidents involving high-risk AI systems. It structures admin data, system details, incident description, provider analysis, and general statements.  

⚡ Quick Summary

This document is a standardized EU reporting template for “serious incidents” linked to high-risk AI. It guides providers, deployers, or authorized representatives through mandatory fields: who is reporting, when the incident occurred and was detected, how it is classified (e.g., death, health harm, disruption of critical infrastructure, fundamental-rights infringements), what system is involved (including EU database ID, versions), what happened, immediate mitigations, and the provider’s preliminary and final analyses. It ends with a non-admission clause and affirmation statement. In short, it is the operational bridge between incident detection and regulatory oversight, aligning evidence capture, root-cause analysis, and risk-assessment review in one place. (See pages 1–7 for section layout.)

🧩 What’s Covered

  • Section 1: Administrative information (p.1–2). Captures the competent market surveillance authority, key dates (submission, incident window, detection, provider awareness), report type (initial, follow-up, combined, final), and classification of the serious incident (death; harm to health; management/operation disruption of critical infrastructure; infringement of Union fundamental-rights obligations; harm to property/environment; other). Includes detailed submitter identity blocks for provider, authorized representative, or other reporter.  
  • Section 2: AI System information (p.4). Records the EU database registration ID, system name and intended purpose, model/catalogue/serial and lot numbers, and software/firmware versions—key for traceability and later conformity checks.  
  • Section 3: Incident information (p.5). Requires a comprehensive narrative: what failed, observed effects, likely causality, estimated users affected, operator type at the time (e.g., professional user), and remedial actions already taken. Also requests contact details for the initial reporter.
  • Section 4: Provider analysis (p.6). For initial/follow-up: preliminary conclusions and immediate corrective/preventive actions, plus planned investigations. For final reports: root-cause evaluation and conclusion, or a rationale if deemed non-reportable. Critically, it asks whether the risk assessment was reviewed and if it remains adequate, with space to summarize results—creating a documented link to post-incident risk management.  
  • Section 5: General comments & declaration (p.7). Allows free-text context, then provides a non-admission statement clarifying the report does not concede fault or causation, followed by an affirmation that the information is correct to the reporter’s knowledge, with date.  
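
Taken together, the five sections map naturally onto a structured record for internal incident tooling. The sketch below is a minimal Python model of those fields; every class, field, and enum name is illustrative, assumed from the section descriptions above rather than taken from the template itself, and the official form remains authoritative.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum
from typing import Optional


class ReportType(Enum):
    INITIAL = "initial"
    FOLLOW_UP = "follow-up"
    COMBINED = "combined initial and final"
    FINAL = "final"


class IncidentClassification(Enum):
    DEATH = "death of a person"
    HEALTH_HARM = "serious harm to a person's health"
    INFRASTRUCTURE_DISRUPTION = "management/operation disruption of critical infrastructure"
    FUNDAMENTAL_RIGHTS = "infringement of Union fundamental-rights obligations"
    PROPERTY_ENVIRONMENT = "harm to property or the environment"
    OTHER = "other"


@dataclass
class SeriousIncidentReport:
    # Section 1: administrative information
    surveillance_authority: str
    submission_date: date
    incident_start: date
    incident_end: Optional[date]
    detection_date: date
    provider_awareness_date: date
    report_type: ReportType
    classification: list[IncidentClassification]
    submitter: str                        # provider, authorized representative, or other

    # Section 2: AI system information
    eu_database_id: str
    system_name: str
    intended_purpose: str
    model_or_serial_numbers: str
    software_firmware_version: str

    # Section 3: incident information
    narrative: str                        # what failed, observed effects, likely causality
    estimated_users_affected: Optional[int]
    operator_type: str                    # e.g., "professional user"
    remedial_actions_taken: str
    initial_reporter_contact: str

    # Section 4: provider analysis
    preliminary_conclusions: Optional[str]
    corrective_preventive_actions: Optional[str]
    root_cause_conclusion: Optional[str]  # final reports only
    risk_assessment_reviewed: bool
    risk_assessment_still_adequate: Optional[bool]

    # Section 5: general comments and declaration
    general_comments: str = ""
    affirmation_date: Optional[date] = None
```

A record like this makes it easier to pre-populate the form from incident-management systems and to keep follow-up and final reports consistent with the initial submission.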

💡 Why It Matters

For organizations operating high-risk AI, this template operationalizes AI Act incident duties: it structures timely notification, ensures consistent data for supervisory scrutiny, and ties incident facts to risk-assessment updates and CAPA (corrective and preventive actions). It is also a practical checklist for crisis playbooks—what to capture, who to inform, and how to evidence root-cause and proportional remediation for regulators and stakeholders. Using it proactively (e.g., in tabletop exercises) can shorten response time, reduce legal exposure, and improve cross-team coordination during real events.  

❓ What’s Missing

  • Guidance notes/examples. The form is concise but offers no definitions, severity thresholds, or sample entries (e.g., what constitutes “management” vs. “operation” disruption of critical infrastructure), which practitioners will need in order to standardize triage.
  • Timing expectations. The template references initial/follow-up/final reports but doesn’t embed statutory deadlines or escalation triggers—teams must source these from the AI Act and national implementing guidance (see the deadline sketch after this list).
  • Data handling instructions. There’s no built-in advice on confidentiality, personal data minimization, or secure evidence storage while compiling the report.  
  • Interplay with other regimes. A small prompt exists to note other obligations, but no mapping to, e.g., GDPR breach notification, product-safety alerts, sectoral regulators, or cybersecurity authorities.  
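
To make the missing timing layer concrete, a triage tool could derive a notification deadline from the incident classification. The sketch below assumes the reporting windows of AI Act Article 73 (no later than 2 days for widespread infringement or critical-infrastructure disruption, 10 days for death, 15 days otherwise); these values are stated here as an assumption and should be verified against the Act and national implementing guidance before relying on them.

```python
from datetime import date, timedelta

# Assumed Article 73 windows (verify against the AI Act and national guidance):
# widespread infringement or critical-infrastructure disruption -> 2 days,
# death -> 10 days, all other serious incidents -> 15 days.
DEADLINE_DAYS = {
    "infrastructure_disruption": 2,
    "widespread_infringement": 2,
    "death": 10,
}
DEFAULT_DEADLINE_DAYS = 15


def notification_deadline(classifications: set[str], awareness_date: date) -> date:
    """Return the latest permissible notification date for the strictest
    applicable classification, counted from provider awareness."""
    days = min(
        (DEADLINE_DAYS.get(c, DEFAULT_DEADLINE_DAYS) for c in classifications),
        default=DEFAULT_DEADLINE_DAYS,
    )
    return awareness_date + timedelta(days=days)


# Example: a critical-infrastructure disruption the provider became aware of
# on 1 March would need notification by 3 March under the assumed 2-day window.
print(notification_deadline({"infrastructure_disruption"}, date(2025, 3, 1)))
```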

👥 Best For

  • Providers of high-risk AI who must coordinate regulatory notifications.
  • Deployers running high-risk systems in production who may be initial reporters.
  • Legal, risk & compliance teams building incident-response SOPs and evidence templates.
  • Product & engineering leaders responsible for traceability (versions, lots, firmware) and corrective actions. 

📄 Source Details

Title: Incident Report for Serious Incidents under the AI Act (High-risk AI systems) — Reporting Template Version 1.0.0 (SB-10407); 7 pages; structured sections 1–5 with form fields and declarations. Refer to page 1 for title/version, pages 1–2 (admin), page 4 (system), page 5 (incident), page 6 (analysis), page 7 (general comments/affirmation).  

📝 Thanks to

Acknowledgment to the template’s contributors within EU market-surveillance and conformity-assessment communities who standardized the reporting structure for high-risk AI incidents.  

About the author
Jakub Szarmach

AI Governance Library
