Regulation

EU AI Act Article 73: Serious Incident Reporting from August 2026

Article 73 of the AI Act requires providers of high-risk AI systems to report serious incidents within 15 days, with shorter clocks of 2 days for critical infrastructure disruption or widespread infringement and 10 days for incidents involving a death.

Nayan Dey
Senior Security Engineer
7 min read

Article 73 of Regulation (EU) 2024/1689 — the EU AI Act — obliges providers of high-risk AI systems placed on the Union market to report serious incidents to market surveillance authorities. The obligation begins to apply on 2 August 2026, the date by which the high-risk AI rules under Chapter III of the Act enter into application for systems classified under Annex III. In April 2025 the Commission opened consultation on draft guidance and a reporting template; a final version is expected to apply from 2 August 2026 alongside Article 73 itself. The provision is short — a single article with eleven paragraphs — but it intersects with several adjacent regimes including medical device vigilance (under MDR/IVDR), product safety notification (under the General Product Safety Regulation), and Article 14 of the Cyber Resilience Act for AI-enabled products with digital elements.

What is a "serious incident" under Article 73?

Article 3(49) defines a serious incident as an incident or malfunctioning of an AI system that directly or indirectly leads to any of:

# Serious incident triggers under Article 3(49)

(a) Death of a person, or serious harm to a person's health

(b) Serious and irreversible disruption of the management or
    operation of critical infrastructure

(c) Infringement of obligations under Union law intended to
    protect fundamental rights

(d) Serious harm to property or to the environment

The four categories are deliberately broad. Category (a) — death or serious harm to health — captures clinical AI misclassification, autonomous vehicle injury events, and AI-driven content harms with documented psychological injury. Category (b) — critical infrastructure disruption — captures AI failures in energy grid management, water supply, transport systems, and similar contexts. Category (c) — fundamental rights infringement — captures biased algorithmic decisions affecting employment, credit, social benefits, education, or law enforcement. Category (d) — property or environmental harm — captures industrial AI failures with material physical consequences. The definition explicitly includes both direct causation and indirect contribution.
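
As a sketch of how the four categories might be represented in an internal incident tool, the enum and the example mapping below are illustrative; the internal event labels are invented, not taken from the Act or the draft template.

```python
from enum import Enum

class SeriousIncidentCategory(Enum):
    """The four Article 3(49) triggers, points (a) through (d)."""
    DEATH_OR_SERIOUS_HARM_TO_HEALTH = "a"
    CRITICAL_INFRASTRUCTURE_DISRUPTION = "b"
    FUNDAMENTAL_RIGHTS_INFRINGEMENT = "c"
    HARM_TO_PROPERTY_OR_ENVIRONMENT = "d"

# Hypothetical mapping from an internal incident taxonomy to the Article 3(49)
# categories; the internal labels are invented for illustration.
INTERNAL_TO_ARTICLE_3_49 = {
    "clinical-misclassification": SeriousIncidentCategory.DEATH_OR_SERIOUS_HARM_TO_HEALTH,
    "grid-control-failure": SeriousIncidentCategory.CRITICAL_INFRASTRUCTURE_DISRUPTION,
    "biased-hiring-decision": SeriousIncidentCategory.FUNDAMENTAL_RIGHTS_INFRINGEMENT,
    "industrial-equipment-damage": SeriousIncidentCategory.HARM_TO_PROPERTY_OR_ENVIRONMENT,
}
```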

Who is in scope?

Article 73(1) imposes the obligation on providers of high-risk AI systems. "High-risk" is defined under Article 6 as systems intended to be used as safety components of products covered by Union harmonisation legislation listed in Annex I, or systems falling within the use-case categories in Annex III. Annex III covers eight use-case clusters: biometric identification and categorisation, critical infrastructure management, education and vocational training, employment and worker management, access to essential services, law enforcement, migration and border control, and administration of justice and democratic processes. The obligation rests on providers — entities that develop AI systems and place them on the EU market — rather than deployers, though deployers have parallel obligations under Article 26 to inform providers of detected serious incidents.
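
A minimal way to capture the Article 6 scope test in code, assuming the two underlying facts about the system have already been established by legal review; the field and function names are illustrative, and the Article 6(3) derogation is deliberately left out of the sketch.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemProfile:
    # Facts established by legal/compliance review, not inferred automatically.
    is_annex_i_safety_component: bool                              # safety component of an Annex I product
    annex_iii_categories: list[str] = field(default_factory=list)  # e.g. ["employment"], empty if none

def is_high_risk(profile: AISystemProfile) -> bool:
    """Simplified Article 6 scope test: an Annex I safety component, or at least
    one Annex III use-case category (ignoring the Article 6(3) derogation for
    systems that do not pose a significant risk of harm)."""
    return profile.is_annex_i_safety_component or bool(profile.annex_iii_categories)
```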

What are the reporting deadlines?

Article 73(2)-(4) impose tiered deadlines based on incident severity:

| Incident type | Deadline | Trigger |
|---------------|----------|---------|
| General serious incident | 15 days | After causal link is established between AI system and incident |
| Widespread infringement or significant disruption | 2 days | After becoming aware of the incident |
| Death of a person | 10 days | After establishing or suspecting causal relationship |

The text accommodates uncertainty. Article 73(5) permits an initial incomplete report followed by a complete report when more information becomes available. The fifteen-day clock starts when the provider establishes the causal link, or the reasonable likelihood of one, not from the moment the AI system was first implicated — a critical distinction that gives providers time to investigate. The two-day clock for widespread infringement or significant disruption to critical infrastructure runs from awareness, not from causal link establishment, because the regulator's need for early situational awareness in those scenarios outweighs the value of additional investigation time.
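
The tiered clocks can be expressed directly, with the important subtlety that the two-day clock keys off a different trigger event than the other two. A sketch, following the simplified reading in the table above and assuming the trigger timestamps are captured by the incident workflow (parameter names are illustrative):

```python
from datetime import datetime, timedelta
from typing import Optional

def report_deadline(
    awareness_at: datetime,
    causal_link_at: Optional[datetime],
    involves_death: bool = False,
    critical_infrastructure_or_widespread: bool = False,
) -> Optional[datetime]:
    """Latest permissible submission time for the initial report, per the
    simplified reading of Article 73(2)-(4) summarised in the table above."""
    if critical_infrastructure_or_widespread:
        # The two-day clock runs from awareness, regardless of causal investigation.
        return awareness_at + timedelta(days=2)
    if causal_link_at is None:
        return None  # the clock has not started; investigation is ongoing
    if involves_death:
        return causal_link_at + timedelta(days=10)
    return causal_link_at + timedelta(days=15)
```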

Where do reports go?

Reports go to the market surveillance authority of the Member State where the incident occurred. Where the incident has cross-border effect, multiple authorities may receive notifications and a coordination mechanism under Article 70 manages the multilateral handling. The Commission's draft guidance recommends a structured template covering AI system identification (including unique identifier under Article 49), provider details, incident description and timeline, affected parties and harm assessment, root cause analysis (where available at time of reporting), and mitigation actions taken or planned. The AI Office at the Commission level receives copies of all serious incident reports involving general-purpose AI models with systemic risk under Article 55.
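
Based on the fields listed in the draft guidance, an internal submission record might look like the following. This is an illustrative structure only; the real template is defined by the Commission, not by this sketch.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class SeriousIncidentReport:
    """Illustrative submission record mirroring the fields named in the draft guidance."""
    ai_system_identifier: str           # e.g. the EU database registration identifier
    provider_name: str
    provider_contact: str
    incident_description: str
    incident_timeline: list[str]
    affected_parties: str
    harm_assessment: str
    root_cause_analysis: Optional[str]  # may not be available at initial-report time
    mitigation_actions: list[str] = field(default_factory=list)
    is_initial_incomplete_report: bool = True   # Article 73(5) allows a follow-up report
    submitted_at: Optional[datetime] = None
```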

How does Article 73 interact with other reporting regimes?

The AI Act explicitly accommodates overlap with other regulatory regimes. Article 73(10) provides that where the high-risk AI system is a safety component of a device, or is itself a device, covered by MDR/IVDR, AI Act notification is limited to fundamental-rights incidents under Article 3(49)(c) and is made to the national competent authority chosen by the Member State; other serious incidents are handled under the medical device vigilance regime rather than dual-reported. Article 73(9) applies the same limitation where providers are already subject to equivalent reporting obligations under other Union legislation, which covers several of the Annex I product safety regimes. The Cyber Resilience Act's Article 14 reporting regime for actively exploited vulnerabilities and severe security incidents is complementary rather than overlapping: a security breach of an AI system may trigger both Article 73 (AI Act) and Article 14 (CRA) reporting if the system both meets the high-risk AI definition and qualifies as a product with digital elements. The Commission has indicated that a single integrated submission through the ENISA single reporting platform may be possible once the platform is operational from 11 September 2026.
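
A sketch of the routing decision for a single event, assuming the underlying classifications have already been made by compliance review; the function and parameter names are illustrative, not drawn from any of the regulations.

```python
def reporting_channels(
    is_high_risk_ai: bool,
    is_mdr_ivdr_device: bool,
    is_product_with_digital_elements: bool,
    is_security_incident: bool,
    is_fundamental_rights_incident: bool,
) -> list[str]:
    """Rough routing of one underlying event across the regimes discussed above."""
    channels: list[str] = []
    if is_high_risk_ai:
        if is_mdr_ivdr_device:
            # Article 73(10): AI Act notification is limited to fundamental-rights
            # incidents; everything else goes through MDR/IVDR vigilance instead.
            channels.append("MDR/IVDR vigilance")
            if is_fundamental_rights_incident:
                channels.append("AI Act Article 73")
        else:
            channels.append("AI Act Article 73")
    if is_product_with_digital_elements and is_security_incident:
        channels.append("CRA Article 14")
    return channels
```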

What about general-purpose AI models?

Article 55 imposes separate obligations on providers of GPAI models with systemic risk, including the reporting of serious incidents to the AI Office. Where a GPAI model is also integrated into a downstream high-risk AI system, Article 73 obligations attach to the provider of that high-risk system, not the GPAI provider. The architecture creates a layered reporting model: GPAI providers report systemic-risk incidents under Article 55; downstream high-risk system providers report serious incidents under Article 73; deployers inform providers under Article 26. A single underlying event may surface through multiple reporting channels — the GPAI model fault triggers Article 55 notification, the downstream high-risk system failure triggers Article 73 notification, the impacted deployer logs internally under Article 26.
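
The layered model can be condensed into a role-to-duty mapping; the role labels below are invented for illustration.

```python
# Which reporting duty attaches to which actor for a single underlying event:
# a condensed summary of the layered model described above.
REPORTING_DUTIES_BY_ROLE = {
    "gpai_provider_with_systemic_risk": "report to the AI Office under Article 55",
    "high_risk_system_provider": "report to market surveillance authorities under Article 73",
    "deployer": "inform the provider (and, where required, authorities) under Article 26",
}
```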

What are the penalties?

Article 99 sets administrative fines at up to 15 million EUR or 3% of total worldwide annual turnover for the preceding financial year — whichever is higher — for failures to comply with obligations applicable to providers under Articles 16 through 27, which includes the chain of obligations that surface Article 73 incidents internally and route them externally. The Commission has indicated that its formal enforcement posture during the first year (from 2 August 2026) will prioritise structural compliance with reporting machinery over individual incident response — i.e., a provider that has a documented reporting workflow and a good-faith attempt at meeting deadlines will face less severe enforcement than one whose internal processes prevent reports from being generated at all.
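
The ceiling is simply the greater of the two figures; a one-line illustration (the turnover value is hypothetical):

```python
def article_99_ceiling(worldwide_annual_turnover_eur: float) -> float:
    """Maximum administrative fine under Article 99: the higher of
    EUR 15 million or 3% of total worldwide annual turnover."""
    return max(15_000_000.0, 0.03 * worldwide_annual_turnover_eur)

# A provider with EUR 2 billion in turnover faces a ceiling of EUR 60 million.
assert article_99_ceiling(2_000_000_000) == 60_000_000
```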

How should providers prepare?

Three operational changes need to be in place before August 2026. First, an incident classification taxonomy that maps internal events to Article 73's four serious incident categories — many AI providers have incident terminology that does not align with the AI Act's vocabulary, and a translation layer is needed. Second, a 15-day clock that starts at causal link establishment requires an internal investigation cadence that delivers a causal link determination quickly enough to leave time for report drafting and external submission. Third, market surveillance authority contact points need to be identified per Member State and rehearsed — the EU does not currently have a single endpoint for AI Act incident reporting, although the Commission's draft guidance indicates this will evolve.
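
The third point, pre-identifying authority contact points, can be as simple as a reviewed configuration file; a sketch with purely placeholder values, since the actual designated authorities and submission endpoints vary by Member State.

```python
# Hypothetical registry of market surveillance authority contact points.
# The names and submission endpoints are placeholders, not real values; each
# entry must be confirmed against the Member State's actual designation.
AUTHORITY_CONTACTS: dict[str, dict[str, str]] = {
    "DE": {"authority": "<designated German market surveillance authority>",
           "submission": "<email or portal URL>"},
    "FR": {"authority": "<designated French market surveillance authority>",
           "submission": "<email or portal URL>"},
    # ... one entry per Member State where the system is placed on the market
}

def route_report(member_state_code: str) -> dict[str, str]:
    """Look up the pre-identified contact point for the Member State where the
    incident occurred."""
    return AUTHORITY_CONTACTS[member_state_code]
```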

How Safeguard Helps

Safeguard's AI-BOM module catalogues every high-risk AI system in your inventory against the Annex III use-case categories and the Annex I harmonisation legislation triggers, so the Article 73 scope question is answered before an incident occurs. The platform's incident workflow maps internal event taxonomy onto the four serious incident categories of Article 3(49) and runs the 2-day, 10-day, and 15-day clocks in parallel with the causal investigation. Market surveillance authority routing is pre-configured per Member State, with parallel submission to the AI Office for systemic-risk GPAI events under Article 55. Where the same event triggers both AI Act and CRA reporting, Safeguard generates aligned submissions through a single triage event rather than forcing duplicate workflows.
