The EU AI Act entered its first full enforcement window in 2025, and by early 2026 there is enough activity on the record to write something other than speculation. The pattern of enforcement looks different from the one the most anxious vendor briefings predicted and different from the one the most aggressive advocacy groups hoped for. What has actually happened is a first year of cautious, documentary, process-heavy enforcement that has nonetheless reshaped how AI systems are developed and procured in Europe.
The Enforcement Architecture Is Working, Slowly
The AI Act's enforcement architecture is unusual. Obligations are shared between the AI Office at the EU level and national competent authorities in each member state, with market surveillance responsibilities distributed across sectoral regulators. Observers wondered, reasonably, whether this would produce coordinated action or paralysis. The first year has landed somewhere in between: less paralysis than feared, less coordination than hoped.
Where the architecture has worked, it has worked on clear obligations with documentable artifacts. The prohibited-practices list, technical documentation requirements for high-risk systems, and the general-purpose AI transparency obligations have all produced visible enforcement activity, because the regulator can ask for a document, inspect it, and make a finding on the spot. Where the architecture has struggled is on obligations that require contextual judgment — whether a specific deployment qualifies as high-risk, whether a given mitigation is adequate, whether a system's risk management is proportionate.
This is not a flaw unique to the AI Act; every horizontal regulation hits this problem. But it means the first year of enforcement has been heavily weighted toward the documentable obligations, and providers that invested in documentation have had an easier time than providers that invested in substantive controls without the corresponding paperwork.
What Regulators Actually Asked For
Across the enforcement activity visible through 2025 and early 2026, three patterns recur in what regulators have asked for.
Technical documentation. This is the single most common enforcement artifact requested. Regulators want a coherent description of the system, its intended purpose, the data used to train and evaluate it, the risk management process, and the post-market monitoring plan. The level of detail expected has risen over the course of the year: documentation that might have passed inspection in spring 2025 would not pass by winter. (A sketch of one way to structure such a record appears after the third pattern below.)
Conformity assessment records. For high-risk systems, the conformity assessment is a structured process that produces a record. Regulators have been scrutinizing the records closely, paying particular attention to whether the assessment was genuinely iterative — whether identified risks led to design changes — or whether it was a formality performed at the end of development.
Transparency disclosures for general-purpose AI. Providers above the compute threshold have been asked for training data summaries, copyright compliance documentation, and systemic risk assessments. The quality of responses has varied widely. Some providers published substantial, structured disclosures; others produced summaries so general that regulators sent follow-up requests almost immediately.
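Returning to the first pattern: below is a minimal sketch of how a provider might keep a technical-documentation record as a single versionable artifact. The field names are illustrative shorthand for the categories regulators asked about, not the Act's own Annex IV headings, and this is one possible shape rather than a compliance template.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class TechnicalDocumentation:
    """Illustrative record of the artifacts regulators have asked for.

    Field names are hypothetical shorthand for the categories named
    above, not the Act's Annex IV wording.
    """
    system_description: str               # what the system is and how it works
    intended_purpose: str                 # the deployment context it was built for
    training_data_summary: str            # provenance and characteristics of training data
    evaluation_data_summary: str          # what the system was tested on, and how
    risk_management_process: list[str] = field(default_factory=list)  # risks found, mitigations adopted
    post_market_monitoring_plan: str = ""  # how the deployed system will be observed

    def to_json(self) -> str:
        # Serialize to a diffable, reviewable artifact that can live in
        # source control and be produced on request during an inspection.
        return json.dumps(asdict(self), indent=2)
```

The point is less the schema than the property it has: a record like this can be versioned, reviewed, and handed over on request, which is exactly what first-year enforcement rewarded.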
Fines in the first year have been modest by the standards of the Act's maximum penalties, but they have been real, and they have been published. The deterrent effect on other providers has been noticeable.
The Quiet Compliance Industry
A less visible but more consequential development has been the growth of the AI Act compliance industry. A year ago, AI Act compliance consulting was a niche practice within a handful of law firms. Today it is a substantial specialty with dedicated teams at every major firm, specialized boutiques, and a growing cohort of tooling vendors that market themselves specifically against AI Act obligations.
For enterprise buyers, this has two effects. First, the cost of compliance has come down as the work has become standardized; second-time providers are much more efficient than first-time ones. Second, the quality of advice has gone up, which means providers can no longer credibly claim they did not know what was expected. The "emerging regulation" defense, which had some weight in 2024, has essentially no weight in 2026.
How the Act Reshaped Development
For high-risk AI systems, the Act has shifted development practices in ways that are now visible in code review and engineering documentation.
Data governance documentation has moved from scattered notebooks and READMEs into structured, versioned records. Risk management activities are logged alongside engineering work, rather than produced retroactively for audits. Evaluation is no longer optional; even teams that resisted formal evaluation practices have been forced into them by the Act's requirements and by buyers who read the Act's language into their procurement clauses.
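As an illustration of what "logged alongside engineering work" can mean in practice, here is a minimal sketch, assuming a JSONL log kept under version control next to the code. The file path, field names, and example entry are hypothetical; the habit it demonstrates is recording each risk-mitigation pair when the work happens, with a pointer to the evidence.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from pathlib import Path
import json

# Hypothetical location; in practice the log is versioned with the code
# it describes, so each entry is tied to the state of the repository.
RISK_LOG = Path("governance/risk_log.jsonl")

@dataclass
class RiskEntry:
    risk: str         # the identified risk, in plain language
    mitigation: str   # the design change or control adopted in response
    evidence: str     # pointer to the commit, eval run, or document showing the change
    recorded_at: str = ""

def log_risk(entry: RiskEntry) -> None:
    """Append the entry when the work happens, so the record is
    contemporaneous rather than reconstructed at audit time."""
    entry.recorded_at = datetime.now(timezone.utc).isoformat()
    RISK_LOG.parent.mkdir(parents=True, exist_ok=True)
    with RISK_LOG.open("a") as f:
        f.write(json.dumps(asdict(entry)) + "\n")

# Hypothetical example entry.
log_risk(RiskEntry(
    risk="Scoring model underperforms on under-represented applicant groups",
    mitigation="Added a targeted evaluation slice and retrained on rebalanced data",
    evidence="commit 3f2a91c; eval run eval-2026-01-14",
))
```

A log built this way also answers the conformity-assessment question regulators have been asking: whether identified risks actually led to design changes, rather than being catalogued at the end.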
The downstream effect on the AI tooling market has been significant. Evaluation platforms, model governance tools, and model-card tooling are all areas where vendor activity has clearly accelerated, and much of that activity is responsive to AI Act obligations or to buyer behavior downstream of those obligations.
What the Act Has Not Done
Two things the Act has not done are worth naming, because both were widely predicted.
It has not driven frontier AI development out of Europe. Frontier labs continue to operate in Europe, work with European customers, and publish European-compliant models. There are complaints about compliance costs, and those complaints have merit, but the predicted exodus did not happen.
It has not reliably distinguished high-risk from low-risk deployments at the boundary. The Annex III categories are clear in the center and ambiguous at the edge, and the first year of enforcement has been almost entirely in the clear center. How national authorities will handle ambiguous cases remains an open question, and it will probably be the second-year story.
What to Expect in Year Two
Three developments are likely through the rest of 2026.
First, enforcement will move from documentary to substantive. Regulators have built their muscle on documentation; the next step is inspecting whether the documented controls actually work. This will be harder, more contested, and more interesting.
Second, the first cross-border coordinated enforcement actions will appear. National authorities have been building their capacity and sharing information; coordinated actions are the natural next step, and they will set precedent for how the distributed enforcement architecture actually behaves.
Third, the Act's interaction with sectoral regulation — financial services, health, critical infrastructure — will start producing friction. The first year saw relatively little overlap because enforcement was concentrated on horizontal obligations. The next year will include more deployments where AI Act obligations sit alongside sectoral ones, and the handoffs are not fully worked out.
The Practitioner Takeaway
For security teams advising AI programs, the practical implications of year one are straightforward. Technical documentation is not optional, and its quality is now a competitive differentiator in European markets. Evaluation practices need to produce documented artifacts, not just engineering intuition. Post-market monitoring is a real obligation, and systems that lack telemetry for post-market monitoring are systems that are not ready to ship in Europe.
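To ground the last point, here is a minimal sketch of per-inference telemetry that would give a post-market monitoring process something to aggregate. The event fields and model name are illustrative, and a real deployment would route these events into its existing observability pipeline rather than a bare logger.

```python
import hashlib
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
monitor = logging.getLogger("post_market_monitoring")

def record_inference(model_version: str, raw_input: str,
                     prediction: str, confidence: float) -> None:
    """Emit one structured event per inference so drift, error clusters,
    and incidents can be traced back to a model version after deployment."""
    monitor.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        # Hash the input rather than storing it, keeping monitoring logs
        # lean and free of personal data.
        "input_sha256": hashlib.sha256(raw_input.encode("utf-8")).hexdigest(),
        "prediction": prediction,
        "confidence": confidence,
    }))

# Hypothetical call site inside a serving path.
record_inference("credit-scoring-v2.3", "applicant feature vector", "approve", 0.87)
```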
The Act's first year did not produce the breakthrough rulings or the dramatic penalties that would have made for easy headlines. It produced something more durable — a baseline of documented, auditable practice that every serious AI provider in the European market now operates against. That baseline will shape the next decade of AI development globally, whether or not other jurisdictions adopt similar regimes, because the providers building to it will carry the practice with them.