AI Regulation · European Union — extraterritorial

EU AI Act

The world's first comprehensive AI regulation — risk-based, with phased prohibitions, transparency duties, and obligations for high-risk and general-purpose AI.

Regulator
EU AI Office (European Commission) + national market surveillance authorities
Jurisdiction
European Union — extraterritorial
Status
Active — phased application Feb 2025 (prohibitions) through Aug 2027 (high-risk in regulated products).
In force since
1 August 2024 (entered into force)
Who it applies to

Providers, deployers, importers, and distributors of AI systems placed on the EU market or whose output is used in the EU.

Penalties

Up to €35M or 7% of total worldwide annual turnover, whichever is higher (prohibited practices); €15M or 3% (other infringements); €7.5M or 1% (supplying incorrect, incomplete, or misleading information).
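
The fine ceilings above can be sketched as a small calculation. This is an illustrative helper, not legal advice; it assumes the offender is an undertaking, in which case the cap is the higher of the fixed amount and the turnover percentage.

```python
# Illustrative Art. 99 fine ceilings. Tier names are our own labels;
# the fixed caps and percentages follow the summary above.
TIERS = {
    "prohibited_practice": (35_000_000, 0.07),
    "other_infringement": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.01),
}

def max_fine(tier: str, worldwide_annual_turnover: float) -> float:
    """Return the maximum fine: the higher of the fixed cap and the turnover share."""
    fixed_cap, pct = TIERS[tier]
    return max(fixed_cap, pct * worldwide_annual_turnover)
```

For example, a prohibited-practice infringement by an undertaking with €1B turnover is capped at €70M (7% of turnover), not €35M.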

What it requires

What the EU AI Act actually requires.

These are the obligations a regulated entity owes — the things an assessor or supervisor will ask about.

01

Prohibited practices (Art. 5) — social scoring, real-time remote biometric identification in publicly accessible spaces for law enforcement (with narrow exceptions), emotion inference in workplaces and schools, and similar practices.

02

High-risk AI systems (Annex III) — risk management, data governance, technical documentation, logging, transparency, human oversight, accuracy/robustness/cybersecurity.

03

General-Purpose AI models — technical documentation, copyright policy, training data summary; systemic risk tier adds adversarial testing and incident reporting.

04

Transparency for AI interactions, emotion recognition, biometric categorisation, and synthetic content (Art. 50).

05

Post-market monitoring and serious incident reporting (Art. 73).

06

Conformity assessment and CE marking for high-risk systems.
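
The tiered obligations above amount to a triage step: which regime does a given use case fall under? A simplified sketch, with illustrative example use cases of our own choosing; real classification requires legal review against Art. 5, Annex III, and Art. 50.

```python
# Simplified risk-tier triage. The use-case sets are illustrative
# assumptions, not an exhaustive reading of the Act.
PROHIBITED = {"social scoring", "workplace emotion inference"}      # Art. 5
HIGH_RISK = {"cv screening", "credit scoring", "exam proctoring"}   # Annex III
TRANSPARENCY = {"chatbot", "synthetic media generator"}             # Art. 50

def classify(use_case: str) -> str:
    """Return the strictest applicable obligation tier for a use case."""
    uc = use_case.lower()
    if uc in PROHIBITED:
        return "prohibited"
    if uc in HIGH_RISK:
        return "high-risk"
    if uc in TRANSPARENCY:
        return "transparency"
    return "minimal"
```

Tiers are checked strictest-first, since a use case matching Art. 5 is banned outright regardless of any other obligations.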

How Safeguard maps to it

Pre-mapped controls. Continuous evidence.

Each requirement above is bound to live telemetry — not screenshots. The mapping below is what your auditor or regulator sees.

Model registry with risk-tier classification and Annex III flagging.

Technical documentation template per Annex IV.

Data governance evidence — provenance, quality, representativeness.

Adversarial test harness with model evaluation reports.

Synthetic content marking pipeline (C2PA, watermarking).

Serious incident reporting workflow per Art. 73 timelines.
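
The Art. 73 timelines behind that workflow can be expressed as simple deadline arithmetic. The day counts below (15 days for a serious incident, 10 if a person has died, 2 for a widespread infringement) follow common summaries of Art. 73 and should be verified against the final text; the function itself is a hypothetical sketch.

```python
from datetime import date, timedelta

# Assumed Art. 73 reporting windows, in days from becoming aware of the
# incident (or establishing the causal link). Verify against the Act.
DEADLINE_DAYS = {"serious_incident": 15, "death": 10, "widespread": 2}

def report_due(awareness: date, kind: str) -> date:
    """Return the latest date by which the incident report is due."""
    return awareness + timedelta(days=DEADLINE_DAYS[kind])
```

A workflow built on this would also track the "immediately after establishing a causal link" trigger, since the day counts are outer bounds, not grace periods.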

Evidence we produce

Artifacts your auditor accepts.

Each evidence artifact is signed and timestamped. Auditors can verify integrity without trusting Safeguard.

AI system technical documentation (Annex IV) per registered model.

Conformity assessment record.

Post-market monitoring plan and findings.

Serious incident register.
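
Independent verification of an artifact can be sketched as a digest-and-timestamp check. The manifest format here is a hypothetical assumption (Safeguard's actual format is not described above); a production scheme would additionally carry an asymmetric signature, e.g. Ed25519, over the manifest so auditors need no shared secret.

```python
import hashlib
from datetime import datetime, timezone

def verify_artifact(artifact_bytes: bytes, manifest: dict) -> bool:
    """Check that the artifact matches its recorded SHA-256 digest and
    that its ISO 8601 timestamp is not in the future. Manifest keys
    ('sha256', 'timestamp') are illustrative assumptions."""
    digest = hashlib.sha256(artifact_bytes).hexdigest()
    if digest != manifest["sha256"]:
        return False
    ts = datetime.fromisoformat(manifest["timestamp"])
    return ts <= datetime.now(timezone.utc)
```

The point of the digest check is that any tampering with the artifact after signing, even a single byte, changes the hash and fails verification.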

Ready for EU AI Act?

Bring the framework. We'll walk the controls with you — section by section, evidence packet by evidence packet, with the regulators you actually have to answer to.

Safeguard | Software Supply Chain Security Platform | Zero CVE + Self-Healing