Regulation

EU AI Act Article 5: Prohibited Practices Now Enforceable

Article 5 of the EU AI Act became enforceable on 2 August 2025, with administrative fines up to €35 million or 7% of worldwide turnover for prohibited AI practices.

Shadab Khan
Security Engineer
8 min read

Regulation (EU) 2024/1689 — the EU AI Act — entered into force on 1 August 2024. Its first hard obligations, the Article 5 prohibitions on certain AI practices, became applicable on 2 February 2025 and enforceable, through Member State competent authorities and the European Data Protection Supervisor, from 2 August 2025. The Commission published interpretative Guidelines on Prohibited AI Practices on 4 February 2025, and an AI literacy duty under Article 4 applied in parallel from 2 February. The Article 5 regime is the AI Act's most aggressive penalty tier: up to €35 million or 7% of worldwide annual turnover, whichever is higher.

What does Article 5 actually prohibit?

Article 5(1) sets out eight prohibited practices, points (a) through (h). The Commission's 4 February 2025 guidelines run to nearly 140 pages and clarify the scope of each. The categories, in summary (a minimal triage sketch follows the list):

  • AI systems that use subliminal techniques beyond a person's consciousness, or purposefully manipulative or deceptive techniques, with the objective or effect of materially distorting behaviour and causing significant harm.
  • AI systems that exploit vulnerabilities of a natural person or specific group due to age, disability, or socio-economic situation, with the same objective and harm threshold.
  • AI systems for social scoring, meaning the evaluation or classification of natural persons based on social behaviour or personal characteristics, where the score leads to detrimental or unfavourable treatment unrelated to the context in which the data was generated or collected, or disproportionate to the behaviour. The final text covers private as well as public actors; the public-authority limitation in the original Commission proposal was dropped.
  • AI systems for predictive policing of individuals based solely on profiling or personality traits (carve-outs apply for AI supporting human assessment based on objective facts directly linked to a criminal activity).
  • AI systems for untargeted scraping of facial images from the internet or CCTV footage to create or expand facial recognition databases.
  • AI systems for inferring the emotions of a natural person in workplace and education contexts (medical and safety exceptions apply).
  • AI systems for biometric categorisation of natural persons to deduce or infer race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation (lawful labelling of biometric datasets and law enforcement carve-outs apply).
  • AI systems for real-time remote biometric identification in publicly accessible spaces for law enforcement, with narrow and strictly conditioned carve-outs for targeted search of victims of specific crimes, prevention of substantial and imminent threats, and identification of suspects of serious crimes.
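
Teams translating these categories into an internal review process often start from a machine-readable checklist. The sketch below is a hypothetical triage helper, not legal analysis: the category keys mirror Article 5(1)(a) to (h), but the record fields, capability tags, and the two checks shown are illustrative assumptions.

```python
# Illustrative triage helper: screen an internal AI-system inventory record
# against the eight Article 5(1) prohibition categories. Record fields and
# capability tags are hypothetical; this is a screening aid, not legal advice.
from dataclasses import dataclass, field

ARTICLE_5_CATEGORIES = {
    "5(1)(a)": "subliminal, manipulative or deceptive techniques",
    "5(1)(b)": "exploitation of vulnerabilities (age, disability, socio-economic)",
    "5(1)(c)": "social scoring with detrimental or disproportionate treatment",
    "5(1)(d)": "profile-based prediction of individual criminal offending",
    "5(1)(e)": "untargeted scraping of facial images for recognition databases",
    "5(1)(f)": "emotion inference in workplace or education settings",
    "5(1)(g)": "biometric categorisation inferring protected characteristics",
    "5(1)(h)": "real-time remote biometric identification for law enforcement",
}

@dataclass
class AISystemRecord:
    name: str
    intended_purpose: str
    deployment_contexts: list[str] = field(default_factory=list)  # e.g. ["workplace"]
    capabilities: list[str] = field(default_factory=list)         # e.g. ["emotion_inference"]

def screen_article_5(record: AISystemRecord) -> list[str]:
    """Return the Article 5(1) categories this record should be reviewed against."""
    flags = []
    # Only two of the eight checks are sketched; a real screen needs all eight,
    # plus documented reasoning for every near-miss (see the readiness steps below).
    if "emotion_inference" in record.capabilities and (
        {"workplace", "education"} & set(record.deployment_contexts)
    ):
        flags.append("5(1)(f)")
    if "facial_image_scraping" in record.capabilities:
        flags.append("5(1)(e)")
    return flags
```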

Who is in scope?

The prohibitions apply to providers and deployers of AI systems placed on the market, put into service, or used in the Union, regardless of where the provider is established. The Act applies extraterritorially: a US-headquartered provider whose model is integrated into a product sold in the Union is bound by the prohibitions, as is a Union deployer using a third-country provider's API. Two categories of actor are typically caught by Article 5:

  • AI system providers, who develop and place the system on the market.
  • Deployers, who use an AI system under their own authority in the course of a professional activity (purely personal, non-professional use is out of scope). Article 5 binds both, although the prohibitions are written most naturally about deployment.

The Commission guidelines clarify that "placing on the market" and "putting into service" include the offering of an AI system through an API, a software-as-a-service interface, or an integrated product. A foundation model offered as a downstream service from a third country is in scope.

How is enforcement structured?

The AI Act creates a layered enforcement architecture. For Article 5:

  • Member State competent authorities, designated under Article 70 of the Act, are the primary enforcers for AI systems placed on the market or used in their territory. National designations include Germany's Federal Network Agency (Bundesnetzagentur) and, in Italy, AgID and the National Cybersecurity Agency (ACN), with counterpart bodies in other Member States.
  • The European Data Protection Supervisor enforces Article 5 against Union institutions and bodies.
  • The AI Office in the Commission has direct enforcement powers over providers of general-purpose AI models (Articles 88 to 94) but plays a coordination and assistance role for Article 5.

The maximum administrative fine for Article 5 breaches under Article 99(3) is the highest in the Act: €35 million or 7% of total worldwide annual turnover for the preceding financial year, whichever is higher.
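
The "whichever is higher" mechanics are worth making concrete, since the €35 million floor only dominates below €500 million in turnover. A quick sketch (turnover figures hypothetical):

```python
# The "whichever is higher" mechanics of Article 99(3), as a one-liner.
def article_5_fine_cap(worldwide_annual_turnover_eur: float) -> float:
    """Maximum administrative fine for an Article 5 breach."""
    return max(35_000_000, 0.07 * worldwide_annual_turnover_eur)

# A provider with €2bn turnover faces a cap of €140m, not €35m:
assert article_5_fine_cap(2_000_000_000) == 140_000_000
# Below €500m turnover, the €35m floor dominates:
assert article_5_fine_cap(400_000_000) == 35_000_000
```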

How are providers and deployers responding?

Three patterns are visible in late 2025:

  • Workplace and education emotion-inference systems are being withdrawn or substantially limited. Vendors of attention-monitoring and similar tools have either restricted deployment in EU markets or pivoted to engagement-metric framings that avoid inferring "emotion" within the Article 3 definition of an emotion recognition system.
  • Untargeted facial-recognition database building has been formally discontinued by several scraping-based providers serving European customers, with public attestations and contractual carve-outs from existing deployments.
  • Predictive policing tools deployed by EU law enforcement are being reviewed against the Article 5(1)(d) prohibition. The Commission guidelines clarify that systems supporting human investigators on objective evidence are not prohibited; profile-based individual prediction is.

The Commission's enforcement messaging has been measured. Authorities have signalled that the first year (August 2025 to August 2026) will focus on guidance, supervision, and clear-cut cases — particularly emotion inference in education and workplace settings, which is unambiguously prohibited and has been visibly deployed.

What about high-risk and GPAI obligations?

Article 5 prohibitions are the first hard obligations under the Act. The remaining timeline:

| Date | Obligation |
|---|---|
| 2 August 2025 | GPAI obligations under Articles 53 to 55 apply to models placed on the market from this date (transparency, copyright, systemic-risk safety). |
| 2 February 2026 | Commission guidelines on the practical implementation of the Article 6 high-risk classification are due. |
| 2 August 2026 | Commission enforcement against GPAI providers begins: the Article 101 fining power becomes applicable and the grace period closes. |
| 2 August 2026 | High-risk system obligations for Annex III use cases apply, including conformity assessment and post-market monitoring. |
| 2 August 2027 | High-risk obligations apply to AI systems that are safety components of products covered by Union harmonisation legislation (Article 6(1)). |
| 2 August 2027 | GPAI models placed on the market before 2 August 2025 must reach compliance. |

The General-Purpose AI Code of Practice, published 10 July 2025, structures the first 12 months of GPAI compliance for new models. Adherence is voluntary but gives providers a recognised way to demonstrate compliance with the Article 53 and 55 obligations.
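
For GPAI providers, the timeline above reduces to a simple lookup. The sketch below encodes the dates from the table; the function name and record shape are illustrative:

```python
# Dates are from the Act's transition schedule; the helper itself is a sketch.
from datetime import date

GPAI_OBLIGATIONS_APPLY = date(2025, 8, 2)      # Articles 53-55 apply to new models
COMMISSION_ENFORCEMENT = date(2026, 8, 2)      # Article 101 fines become applicable
LEGACY_COMPLIANCE_DEADLINE = date(2027, 8, 2)  # pre-existing models must comply

def gpai_compliance_deadline(placed_on_market: date) -> date:
    """Date by which a GPAI model must meet Articles 53-55."""
    if placed_on_market < GPAI_OBLIGATIONS_APPLY:
        return LEGACY_COMPLIANCE_DEADLINE  # grandfathered until 2 August 2027
    return placed_on_market                # new models must comply from day one

assert gpai_compliance_deadline(date(2024, 6, 1)) == date(2027, 8, 2)
assert gpai_compliance_deadline(date(2025, 9, 15)) == date(2025, 9, 15)
```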

What does this mean for software supply chains?

AI is increasingly a software supply chain question. Three direct consequences for security teams:

  • AI-BOM expansion. Software components that integrate third-party models — foundation models, fine-tuned variants, classifiers — must be inventoried alongside conventional dependencies. Article 5 categorisations flow through the supply chain: a deployer is responsible for using the system within prohibited limits even if the provider sold it as "general purpose."
  • Provider attestations. Deployers will increasingly require contractual attestations from upstream model providers about Article 5 conformity, lawful basis for training data, and the integrity of model weights.
  • Reachability. Classification under the Act turns on an AI system's intended purpose, but a model with general capabilities can end up wired into a high-risk or prohibited use case; reachability analysis of which model backs which user-facing feature is the practical control (a minimal sketch follows this list).
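
As an illustration of the AI-BOM and reachability points above, here is a minimal inventory entry, loosely modelled on CycloneDX 1.5's machine-learning-model component type. The regulatory properties and the reachability edge are illustrative extensions, not part of any published standard:

```python
# A hypothetical AI-BOM entry plus a reachability edge. Field names under
# "properties" are illustrative, not a CycloneDX-defined vocabulary.
ai_bom_component = {
    "type": "machine-learning-model",
    "name": "sentiment-classifier-v3",  # hypothetical model
    "supplier": {"name": "Example Model Vendor"},
    "properties": [
        # Intended purpose drives classification under the Act, so record it
        # explicitly rather than inheriting the provider's marketing copy.
        {"name": "ai:intended-purpose", "value": "customer-support ticket routing"},
        {"name": "ai:article-5-screen", "value": "no prohibited category flagged"},
        {"name": "ai:provider-attestation", "value": "pending"},
    ],
}

# Reachability: which user-facing feature is actually wired to this model.
# A general-purpose model reached from an HR dashboard is a different
# regulatory animal than the same model reached from a support queue.
reachability_edges = [
    {"feature": "support/ticket-triage", "model": "sentiment-classifier-v3"},
]
```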

What should defenders do now?

Five steps cover most of the Article 5 readiness gap:

  • Inventory every AI system in production, distinguishing provider-developed from third-party-integrated, with the intended purpose explicitly stated.
  • Cross-check the inventory against Article 5 categories using the Commission's 4 February 2025 guidelines, with documented reasoning for any system that is close to a prohibition but argued out of it.
  • Withdraw or remediate clear-cut prohibitions: workplace and education emotion inference is the most visible category.
  • Update procurement contracts to require provider attestations about Article 5 conformity and the lawful basis for training data (a minimal gate sketch follows this list).
  • Build the Article 4 AI literacy programme — staff operating AI systems are now required to have appropriate AI literacy under the Act.
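
For the procurement step, a fail-closed gate is the simplest control. The sketch below assumes a hypothetical provider record carrying attestation flags; the field names are illustrative:

```python
# Minimal procurement gate: missing Article 5 attestations fail closed,
# before integration rather than after. Attestation keys are hypothetical.
REQUIRED_ATTESTATIONS = {
    "article_5_conformity",       # no prohibited practice in the offered system
    "training_data_lawful_basis",
}

def gate_provider(provider: dict) -> tuple[bool, list[str]]:
    """Return (allowed, missing attestations) for a candidate model provider."""
    missing = sorted(REQUIRED_ATTESTATIONS - set(provider.get("attestations", [])))
    return (not missing, missing)

allowed, missing = gate_provider(
    {"name": "Example Model Vendor", "attestations": ["article_5_conformity"]}
)
assert not allowed and missing == ["training_data_lawful_basis"]
```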

How Safeguard Helps

Safeguard maintains an AI-BOM alongside conventional SBOMs, inventorying every model dependency including foundation models, embeddings, fine-tunes, and downstream classifiers, with the intended purpose flagged against Article 5 categories. Griffin AI reachability identifies which user-facing features wire to which AI model, surfacing whether an integrated system is in fact being used in an Article 5 prohibited context regardless of the provider's claimed intended purpose. TPRM workflows score AI model providers against their Article 5, Article 53, and Code of Practice positions, surfacing missing attestations before they cause procurement risk. Policy gates block model integrations whose providers cannot demonstrate Article 5 conformity or whose use case falls in the Annex III high-risk list without conformity assessment. The compliance automation module produces the audit trail that competent authorities are expected to demand once Article 6 high-risk obligations apply in August 2026.
