EU AI Act Enforcement Begins: 2026 Reality Check

A 2026 reality check on EU AI Act enforcement: which obligations are active, what regulators expect, and the technical evidence enterprises must produce.

Shadab Khan
Security Engineer
9 min read

The EU AI Act was published in the Official Journal in 2024, and its obligations have been phasing in on a staggered timeline ever since. By 2026, several of the most operationally significant obligations are active, and European authorities have begun the enforcement phase that separates "on the books" from "in practice." This post is a senior-engineer reality check on what is in force, what regulators are actually asking for, and how programs should be structured to respond.

What obligations are in force in 2026?

The AI Act's prohibited-practices provisions, covering systems like social scoring, certain biometric categorization, and manipulative AI that causes significant harm, took effect in February 2025. The general-purpose AI model obligations, which include transparency documentation, copyright compliance disclosures, and model-card style evidence, came into force in August 2025. The governance and enforcement architecture, including the European AI Office, AI Board, scientific panel, and national authorities, is operational.

The high-risk system obligations, which are the most extensive and include risk management, data governance, record keeping, transparency, human oversight, accuracy and robustness, and cybersecurity requirements, are progressively binding, with general application in August 2026. AI systems embedded in products covered by the Annex I harmonization legislation have a longer transition running into 2027, but many enterprises have already aligned to the 2026 baseline.

The penalties are significant. Fines can reach 7% of worldwide annual turnover or a fixed euro amount, whichever is higher, for prohibited-practice breaches, and 3% for most other obligations. National authorities have the power to impose these penalties directly, and early enforcement is focused on documentation, transparency, and governance failures.
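The fine structure is simple arithmetic: for most companies, Article 99 sets the ceiling as the greater of a turnover percentage and a fixed euro amount. A minimal sketch (figures and function name are illustrative, not legal advice):

```python
def max_fine(turnover_eur: float, pct: float, fixed_eur: float) -> float:
    """Illustrative AI Act fine ceiling: the percentage of worldwide
    annual turnover or the fixed euro amount, whichever is higher."""
    return max(turnover_eur * pct, fixed_eur)

# Prohibited-practice breach: up to 7% of turnover or EUR 35M, whichever is higher.
print(max_fine(2_000_000_000, 0.07, 35_000_000))  # 140000000.0
```

For a company with €2B turnover the 7% figure dominates; for smaller companies the fixed euro amount does, which is why the exposure is material at every company size.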

How Safeguard.sh Helps

Safeguard.sh's Lino compliance maps EU AI Act obligations to specific technical controls and produces the documentation, record keeping, and evidence that Articles 9-15 require. Griffin AI contributes to the cybersecurity requirements under Article 15 by evaluating AI model and code risk, SBOM lifecycle handles the technical documentation for model components, and TPRM covers supplier obligations. Enterprises produce AI Act evidence continuously rather than compiling it under deadline.

What are regulators actually asking for in early enforcement?

Early enforcement activity is concentrated on two areas. The first is general-purpose AI model documentation. Providers of foundation models operating in the EU market must publish technical documentation, respect copyright, and provide information to downstream deployers. Regulators have been testing the quality and completeness of this documentation as the most visible on-ramp to broader enforcement.

The second is high-risk system governance and transparency. Even before the full August 2026 application date, enterprises deploying AI in covered use cases, including hiring, credit, education, and critical infrastructure, have been preparing documented risk assessments, data governance plans, and human oversight frameworks. Regulators are reviewing these proactively in connection with broader inquiries.

Cybersecurity obligations under Article 15, which require providers of high-risk AI systems to design them with appropriate levels of accuracy, robustness, and cybersecurity, are starting to appear in practical guidance. Technical evidence including model provenance, training data integrity, and runtime security controls is expected as part of conformity documentation. This is where AI supply chain security and AI Act compliance meet most directly.
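Model provenance evidence of the kind described above usually starts with an integrity digest over the serialized weights, tied to a timestamped record. A minimal sketch (the function names and record fields are illustrative, not a prescribed conformity format):

```python
import hashlib
import datetime

def weight_digest(path: str) -> str:
    """SHA-256 over the serialized model weights, streamed in chunks
    so arbitrarily large weight files fit in constant memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def provenance_record(model_name: str, weights_path: str) -> dict:
    """One conformity-evidence artifact: what was deployed, when,
    and an integrity digest that detects weight tampering."""
    return {
        "model": model_name,
        "sha256": weight_digest(weights_path),
        "recorded_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
```

Re-computing the digest at load time and comparing it to the recorded value is the simplest runtime check that the weights in production are the weights that were assessed.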

How Safeguard.sh Helps

Safeguard.sh produces the technical evidence Article 15 implies. Griffin AI evaluates model supply chain risk including provenance and weight integrity, SBOM lifecycle documents model and framework components, Lino compliance maps each evidence artifact to its AI Act obligation, and container self-healing keeps AI workload base images current. The evidence is continuous, machine-readable, and available for inspection.

How is the AI Act interacting with the EU Cyber Resilience Act?

The AI Act and the EU Cyber Resilience Act overlap for many products. AI systems that are also products with digital elements under the CRA are subject to both frameworks, and the CRA's cybersecurity requirements fold into the AI Act's Article 15 in practice. Providers that address both in an integrated program have a simpler compliance story than those that try to run parallel processes.

The European Commission and the standardization bodies have been actively harmonizing the two frameworks. Expect continued alignment through 2026 and 2027, with standards documents that serve both regimes reducing duplication. In the meantime, practitioners should treat the two frameworks as complementary and design controls that satisfy both.

For enterprise buyers of AI systems embedded in products, the practical implication is that procurement questionnaires will combine AI Act and CRA questions. Suppliers that can produce unified evidence bundles covering model documentation, cybersecurity controls, SBOMs, and vulnerability handling have a competitive advantage. Suppliers that answer one and not the other will struggle.

How Safeguard.sh Helps

Safeguard.sh's Lino compliance handles both AI Act and CRA obligations in one view. SBOM lifecycle covers the component transparency both frameworks require, Griffin AI drives cybersecurity prioritization under Article 15 and CRA vulnerability handling, TPRM extends coverage to suppliers whose components end up in AI systems, and container self-healing addresses runtime patching discipline. Unified evidence, unified audit.

What does the high-risk system path look like in practice?

High-risk AI systems under Annex III include use cases in biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, and administration of justice. Providers and deployers of these systems face the most detailed obligations in the AI Act, and the operational practice of meeting them is settling into a recognizable shape.

A high-risk system typically requires a documented risk management system (Article 9), data governance practices (Article 10), technical documentation (Article 11) that looks in part like an SBOM for AI, record keeping through logging (Article 12), transparency information for deployers (Article 13), human oversight design (Article 14), and accuracy, robustness, and cybersecurity provisions (Article 15). Conformity assessments apply before market placement.
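Programs often operationalize this article-by-article list as an explicit mapping from obligation to evidence artifact, so gaps are machine-checkable. A minimal sketch (the artifact names are hypothetical examples, and the article summaries paraphrase the Act):

```python
# Hypothetical obligation-to-evidence mapping for a high-risk system.
OBLIGATION_EVIDENCE = {
    "Art. 9":  ("risk management system", ["risk register", "mitigation log"]),
    "Art. 10": ("data governance", ["dataset datasheets", "bias analysis"]),
    "Art. 11": ("technical documentation", ["AI-BOM", "architecture docs"]),
    "Art. 12": ("record keeping", ["lifecycle event logs"]),
    "Art. 13": ("transparency", ["deployer instructions for use"]),
    "Art. 14": ("human oversight", ["oversight design spec", "override procedures"]),
    "Art. 15": ("accuracy, robustness, cybersecurity", ["test reports", "provenance records"]),
}

def missing_evidence(produced: set[str]) -> list[str]:
    """Articles for which no listed artifact has been produced yet."""
    return [art for art, (_, artifacts) in OBLIGATION_EVIDENCE.items()
            if not any(a in produced for a in artifacts)]
```

Running `missing_evidence` against the artifacts a pipeline actually emits turns "are we compliant?" into a concrete gap list per article.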

What most providers have underestimated is record keeping. Article 12 requires logs that capture the system's lifecycle in sufficient detail to support the other obligations and any post-market investigation. Building the logging infrastructure to produce those records is non-trivial, and it is the area where early enforcement is most likely to find gaps.
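The logging the Article 12 discussion above calls for is usually append-only, structured, and machine-readable, so records can support later investigation without reconstruction. A minimal sketch using JSON Lines (the system ID, event names, and fields are hypothetical):

```python
import json
import datetime
import io

def log_event(stream, system_id: str, event: str, detail: dict) -> None:
    """Append one machine-readable lifecycle record (JSON Lines):
    a UTC timestamp, the system concerned, and structured detail."""
    stream.write(json.dumps({
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "system": system_id,
        "event": event,
        "detail": detail,
    }) + "\n")

# Two example lifecycle events for a hypothetical hiring-screening system.
buf = io.StringIO()
log_event(buf, "cv-screening-v3", "model_updated", {"from": "1.4.0", "to": "1.5.0"})
log_event(buf, "cv-screening-v3", "oversight_override", {"operator": "ops-17"})
```

The key design choice is that every record is self-describing: an investigator, or the provider's own post-market monitoring, can filter by system, event type, or time window without tribal knowledge of the pipeline.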

How Safeguard.sh Helps

Safeguard.sh supports high-risk system evidence end to end. Lino compliance maps the obligations of Articles 9-15 to specific controls and artifacts, SBOM lifecycle produces the technical documentation under Article 11, record keeping is supported by continuous logging of dependency, provenance, and runtime events, and Griffin AI handles the cybersecurity obligations under Article 15. Providers ship high-risk systems with documented, auditable compliance.

How are providers of general-purpose AI models responding?

General-purpose AI model providers face their own track of obligations, including the technical documentation under Annex XI, the summary of training content under Article 53, and copyright compliance policies. The European AI Office has been actively engaging with model providers on the shape of acceptable documentation, and public Code of Practice work has provided a voluntary baseline that is influencing expectations.

Providers of general-purpose models designated as posing systemic risk, a category that covers the largest models above compute and capability thresholds, face additional obligations including model evaluations, adversarial testing, systemic risk assessments, and incident reporting. The number of providers subject to these additional obligations is small, but they include the most visible model developers.

The downstream effect on enterprises that deploy general-purpose models is significant. Deployer obligations, including the human-oversight and transparency requirements when deploying these models in high-risk contexts, depend on receiving adequate documentation from the model provider. Enterprise buyers should expect to evaluate that documentation as part of procurement and to re-evaluate as models update.

How Safeguard.sh Helps

Safeguard.sh's TPRM module evaluates general-purpose model providers using continuous evidence, including documentation completeness, incident handling practices, and supply chain posture. Lino compliance helps deployers demonstrate that they handled the obligations flowing from model use, and Griffin AI contributes AI-specific risk assessments. Enterprises treat model procurement as a supply chain decision with auditable evidence.

What enforcement patterns are emerging?

Three enforcement patterns are visible in early activity. The first is documentation-focused investigation. Regulators ask for the conformity documentation, risk management records, and technical documentation, and the quality of that paper determines whether the inquiry escalates. Programs with weak documentation face longer and more intrusive investigations even when underlying practices are acceptable.

The second is incident-triggered review. When an AI-related incident occurs, national authorities open investigations that can draw on AI Act obligations for the system in question. The pattern is similar to how GDPR investigations often begin with data-breach notifications. Enterprises should prepare for AI-related incidents to trigger AI Act inquiries even when the incident itself is not primarily an AI Act violation.

The third is cross-authority coordination. The AI Office, national authorities, data protection authorities, and consumer-protection regulators are increasingly coordinating on AI matters. An issue that starts in one forum may produce inquiries in others, and enterprises should expect integrated oversight rather than siloed compliance conversations.

How Safeguard.sh Helps

Safeguard.sh produces the documentation and evidence that survive regulator scrutiny. Lino compliance maintains continuous AI Act records, SBOM lifecycle and TPRM cover the supply chain evidence, Griffin AI tracks AI-specific risk over time, and container self-healing supports the operational controls that investigations examine. Enterprises respond to inquiries with data rather than narratives.

What should a compliance lead prioritize for the rest of 2026?

Three priorities matter most. First, complete the inventory of AI systems in scope. The obligations differ between prohibited practices, general-purpose models, high-risk systems, and minimal-risk systems, and the response depends on knowing which category each system falls into. Programs without current inventories are not ready for enforcement regardless of their other preparations.
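The inventory exercise above amounts to a classification function: every system is sorted into the regulatory tier that determines its obligations. A minimal sketch (the Annex III subset and tier names are illustrative, and a real inventory needs legal review of each use case):

```python
from enum import Enum

class Tier(Enum):
    PROHIBITED = "prohibited practice"
    HIGH_RISK = "high-risk (Annex III)"
    GPAI = "general-purpose model"
    MINIMAL = "minimal risk"

# Illustrative subset of Annex III use cases; not an exhaustive legal list.
ANNEX_III_USES = {"hiring", "credit scoring", "education", "critical infrastructure"}

def classify(use_case: str, is_foundation_model: bool = False) -> Tier:
    """Assign the regulatory tier that drives the system's obligations."""
    if is_foundation_model:
        return Tier.GPAI
    if use_case in ANNEX_III_USES:
        return Tier.HIGH_RISK
    return Tier.MINIMAL
```

Even a crude classifier like this forces the inventory question per system, which is the readiness gap most programs actually have.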

Second, invest in continuous evidence generation. Documentation produced for a one-time conformity assessment will not survive contact with post-market surveillance. Build the pipelines that produce technical documentation, risk management records, and supplier evidence continuously, aligned with how the underlying AI systems change over time.
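One way to keep evidence aligned with system change, as described above, is to bind each evidence bundle to a digest of the system state it documents, and regenerate whenever the digest drifts. A minimal sketch (function names and the state fields are hypothetical):

```python
import hashlib
import json

def evidence_snapshot(system_state: dict) -> dict:
    """Tie an evidence bundle to the exact system state it documents,
    via a digest over a canonical (sorted-keys) JSON serialization."""
    canonical = json.dumps(system_state, sort_keys=True).encode()
    return {
        "state_digest": hashlib.sha256(canonical).hexdigest(),
        "artifacts": ["technical documentation", "risk records", "supplier evidence"],
    }

def needs_refresh(prev_snapshot: dict, current_state: dict) -> bool:
    """Regenerate evidence whenever the underlying system has changed."""
    return evidence_snapshot(current_state)["state_digest"] != prev_snapshot["state_digest"]
```

Wired into CI, this turns post-market surveillance readiness into a boolean check per build rather than a quarterly documentation scramble.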

Third, align with the CRA and sector-specific regulations. The AI Act is not a standalone program; it overlaps with CRA, GDPR, DORA for finance, Medical Device Regulation for health, and others. Compliance architectures that integrate these frameworks are measurably more effective than parallel compliance silos, and the regulators themselves are moving toward integrated oversight.

How Safeguard.sh Helps

Safeguard.sh integrates AI Act compliance with CRA, GDPR, DORA, and sector frameworks through Lino compliance. Griffin AI, SBOM lifecycle, TPRM, 100-level dependency depth, and container self-healing provide the continuous evidence that 2026 enforcement expects. Compliance leads move from deadline-driven documentation to always-on readiness, which is the standard that early enforcement is already setting.
