The world's first comprehensive AI regulation — risk-based, with phased prohibitions, transparency duties, and obligations for high-risk and general-purpose AI.
Providers, deployers, importers, and distributors of AI systems placed on the EU market or whose output is used in the EU.
Up to €35M or 7% of global annual turnover, whichever is higher (prohibited practices); €15M or 3% (most other infringements); €7.5M or 1% (supplying incorrect, incomplete, or misleading information).
These are the obligations a regulated entity owes — the things an assessor or supervisor will ask about.
Prohibited practices (Art. 5) — social scoring, real-time remote biometric identification in publicly accessible spaces (with narrow law-enforcement exceptions), emotion inference in workplaces and schools, etc.
High-risk AI systems (Annex III) — risk management, data governance, technical documentation, logging, transparency, human oversight, accuracy/robustness/cybersecurity.
General-Purpose AI models (Arts. 53 and 55) — technical documentation, copyright policy, training data summary; the systemic-risk tier adds adversarial testing and incident reporting.
Transparency for AI interactions, emotion recognition, biometric categorisation, and synthetic content (Art. 50).
Post-market monitoring and serious incident reporting (Art. 73).
Conformity assessment and CE marking for high-risk systems.
Each requirement above is bound to live telemetry — not screenshots. The mapping below is what your auditor or regulator sees.
Model registry with risk-tier classification and Annex III flagging.
Technical documentation template per Annex IV.
Data governance evidence — provenance, quality, representativeness.
Adversarial test harness with model evaluation reports.
Synthetic content marking pipeline (C2PA, watermarking).
Serious incident reporting workflow per Art. 73 timelines.
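The Art. 73 timelines behind that last workflow are concrete enough to automate. A minimal sketch of deadline tracking, assuming the reporting windows in Art. 73(2)–(4) (15 days as the general rule, 10 days in the event of a death, 2 days for a widespread infringement or critical-infrastructure disruption); the function and constant names are illustrative, not part of any Safeguard API, and the values should be verified against the Act's text before use:

```python
from datetime import date, timedelta

# Illustrative Art. 73 reporting windows, in days after the provider
# becomes aware of the incident. Verify against the Act before relying on them.
DEADLINES = {
    "general": 15,                 # Art. 73(2) default
    "death": 10,                   # Art. 73(4)
    "widespread_or_critical": 2,   # Art. 73(3)
}

def reporting_deadline(awareness: date, category: str) -> date:
    """Latest date to notify the market surveillance authority."""
    return awareness + timedelta(days=DEADLINES[category])

print(reporting_deadline(date(2025, 3, 1), "death"))  # 2025-03-11
```

A real workflow would also track the "immediately" half of each obligation — the day counts are outer bounds, not targets.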
Each evidence artifact is signed and timestamped. Auditors can verify integrity without trusting Safeguard.
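One way to make that verify-without-trust property concrete: publish a SHA-256 digest and timestamp alongside each artifact, so an auditor can recompute the digest independently. A minimal sketch using only the Python standard library; production systems would use detached public-key signatures and trusted timestamps (e.g. RFC 3161), and nothing here reflects Safeguard's actual implementation:

```python
import hashlib
from datetime import datetime, timezone

def seal(artifact: bytes) -> dict:
    """Produce a digest + timestamp record to publish next to the artifact."""
    return {
        "sha256": hashlib.sha256(artifact).hexdigest(),
        "sealed_at": datetime.now(timezone.utc).isoformat(),
    }

def verify(artifact: bytes, record: dict) -> bool:
    """Auditor-side check: recompute the digest; no trust in the producer."""
    return hashlib.sha256(artifact).hexdigest() == record["sha256"]

doc = b"Annex IV technical documentation, model v1.2"
record = seal(doc)
assert verify(doc, record)
assert not verify(doc + b" tampered", record)
```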
AI system technical documentation (Annex IV) per registered model.
Conformity assessment record.
Post-market monitoring plan and findings.
Serious incident register.
These frameworks share substantial control overlap with the EU AI Act. Customers running one assessment typically satisfy the others from the same evidence base.
North America
The 2023 US Executive Order on Safe, Secure, and Trustworthy AI (EO 14110) — reporting requirements for foundation model developers and governance of federal AI use.
APAC
Singapore's AI governance testing framework — voluntary toolkit and attestation for trustworthy AI.
APAC
China's Interim Measures for the Management of Generative AI Services — model registration, content labelling, training data review.
APAC
Korea's AI Framework Act — risk classification and obligations for AI providers, with phased entry into force.
European Union
The EU's General Data Protection Regulation — the global gravity well of privacy law since 2018.
Bring the framework. We'll walk the controls with you — section by section, evidence packet by evidence packet, with the regulators you actually have to answer to.