The EU AI Act (Regulation (EU) 2024/1689), published in the Official Journal on 12 July 2024 and in force since 1 August 2024, is in the middle of its phased application in 2026. Prohibited practices and AI literacy obligations applied from 2 February 2025. General-purpose AI (GPAI) model rules applied from 2 August 2025. Most high-risk AI system obligations apply from 2 August 2026. For software engineering teams, this is not an abstract governance regime -- it has concrete implications for how you track components, document training data, secure model weights, and produce evidence for conformity assessments. This post walks through the EU AI Act software supply chain implications that teams building, integrating, or deploying AI systems need to address in 2026.
The Act treats AI systems as products with digital elements. That framing brings the Cyber Resilience Act (Regulation (EU) 2024/2847) into the picture, because high-risk AI systems that are also products with digital elements are subject to both regimes. Cybersecurity essential requirements under CRA Annex I become relevant for AI system providers, and the vulnerability handling obligations create specific supply chain demands.
What obligations in the EU AI Act apply in 2026?
The phased application is set in Article 113:
- 2 February 2025: Chapter I (general provisions) and Chapter II (prohibited AI practices).
- 2 August 2025: Chapter III Section 4 (notifying authorities), Chapter V (GPAI), Chapter VII (governance), Chapter XII (penalties, except Article 101 on fines for GPAI model providers), and Article 78 (confidentiality).
- 2 August 2026: Most provisions, including high-risk AI system obligations for Annex III systems.
- 2 August 2027: Article 6(1) obligations for high-risk AI systems embedded as safety components of products subject to Union harmonization legislation listed in Annex I.
In 2026 you are operating under the GPAI regime for model-level obligations and -- starting August -- the full high-risk AI obligations for Annex III systems (biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, administration of justice).
Who is a provider, a deployer, or an importer under the Act?
Provider (Article 3(3)) is the entity that develops or has developed an AI system or GPAI model and places it on the market or puts it into service under its own name or trademark. Deployer (Article 3(4)) is a natural or legal person using an AI system under its authority, except in the course of a personal, non-professional activity. Importer and distributor roles are defined in Article 3(6) and 3(7).
The significance for supply chain: providers carry the primary obligations, but deployers carry Article 26 obligations including human oversight, monitoring, log retention, and incident reporting. If you integrate a third-party GPAI model into your product and fine-tune it, you may become a provider under Article 25, which shifts provider obligations onto any downstream actor that rebrands a high-risk system under its own name, substantially modifies one, or repurposes a system in a way that makes it high-risk. Your supply chain posture must support either role.
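To make the role logic concrete, here is a minimal sketch of how the Article 25(1) triggers could be encoded in an internal compliance check. The trigger names and the function are our own shorthand, not terms from the Act, and real determinations need legal review.

```python
from dataclasses import dataclass

@dataclass
class Integration:
    """How your organisation handles a third-party AI system or GPAI model.
    Field names are our own shorthand for the Article 25(1) triggers."""
    rebranded_under_own_name: bool     # Article 25(1)(a): own name or trademark
    substantial_modification: bool     # Article 25(1)(b): substantial modification
    repurposed_into_high_risk: bool    # Article 25(1)(c): new high-risk intended purpose

def becomes_provider(i: Integration) -> bool:
    """Simplified reading of Article 25(1): any single trigger shifts
    provider obligations onto the downstream actor."""
    return (i.rebranded_under_own_name
            or i.substantial_modification
            or i.repurposed_into_high_risk)

# Fine-tuning a GPAI model and shipping it for CV screening (an Annex III
# employment use) plausibly trips the repurposing trigger.
print(becomes_provider(Integration(False, False, True)))  # True
```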
What does the Act require for GPAI model supply chain transparency?
Article 53 lays out obligations for all GPAI model providers: technical documentation (Annex XI), information and documentation for downstream AI providers (Annex XII), copyright policy, and a public summary of training data (using the Commission template).
Annex XII requires downstream provider documentation to include the modalities, intended tasks, acceptable use policies, technical means for integration, data used, and relevant technical measures. For downstream AI system providers, this is the supply chain artifact that substitutes for what SBOMs do in conventional software. Receiving and maintaining this documentation for every GPAI model you integrate is a 2026 operational requirement.
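As one way to operationalize this, the sketch below tracks an Annex XII pack per integrated model in a simple in-house registry. The field names loosely mirror the Annex XII headings and are an assumption, not an official schema.

```python
from dataclasses import dataclass

@dataclass
class AnnexXIIRecord:
    """One record per integrated GPAI model. Field names loosely mirror
    the Annex XII headings; this is an in-house structure, not an
    official schema."""
    model_name: str
    model_version: str
    upstream_provider: str
    modalities: list[str]            # e.g. ["text", "image"]
    intended_tasks: list[str]
    acceptable_use_policy_url: str
    integration_means: str           # API, weights download, etc.
    training_data_summary_url: str
    received: str                    # when the pack was obtained
    last_reviewed: str               # periodic re-check, like an SBOM refresh

registry: dict[str, AnnexXIIRecord] = {}

def missing_packs(models_in_production: list[str]) -> list[str]:
    """Models shipped without Annex XII documentation on file."""
    return [m for m in models_in_production if m not in registry]
```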
Article 55 adds obligations for GPAI models with systemic risk (the Article 51 compute threshold is currently cumulative training compute greater than 10^25 FLOPs). These include model evaluations, adversarial testing, serious incident reporting, and cybersecurity protections.
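For a rough feel for the Article 51 threshold, a common back-of-the-envelope heuristic puts dense-transformer training compute at about 6 FLOPs per parameter per token. The sketch below applies it; both the heuristic and the example numbers are illustrative, not a method the Act prescribes.

```python
SYSTEMIC_RISK_THRESHOLD = 1e25  # FLOPs, Article 51 presumption

def estimate_training_flops(params: float, tokens: float) -> float:
    """Common estimate for dense transformer training compute:
    roughly 6 FLOPs per parameter per training token."""
    return 6.0 * params * tokens

# Illustrative example: 400B parameters trained on 15T tokens.
flops = estimate_training_flops(400e9, 15e12)
print(f"{flops:.2e} FLOPs")              # 3.60e+25
print(flops >= SYSTEMIC_RISK_THRESHOLD)  # True: presumed systemic risk
```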
How does Article 15 (accuracy, robustness, cybersecurity) shape supply chain controls?
Article 15 requires high-risk AI systems to be designed and developed to achieve appropriate accuracy, robustness, and cybersecurity. Subsection (5) specifically requires resilience against unauthorized third parties attempting to alter use, outputs, or performance by exploiting vulnerabilities. The technical solutions must address, where appropriate, data poisoning, model poisoning, adversarial examples (model evasion), and confidentiality attacks.
This is a supply chain problem. Training data poisoning attacks operate through the training data supply chain. Model poisoning attacks operate through model fine-tuning pipelines or pretrained model downloads. Addressing Article 15 cybersecurity requires inventory of data sources, model weights, training scripts, and fine-tuning dependencies -- an AI bill of materials (AIBOM) in practice, even if the Act itself does not use the term.
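A minimal sketch of the inventory step, assuming artifacts sit on local disk under hypothetical datasets/, weights/, and training/ directories: hash every dataset file, weight file, and training script so later tampering is detectable.

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream the file so large weight files don't exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def inventory(root: Path, patterns: tuple[str, ...]) -> list[dict]:
    """Walk the tree and hash everything matching the given globs."""
    records = []
    for pattern in patterns:
        for path in sorted(root.rglob(pattern)):
            records.append({
                "path": str(path.relative_to(root)),
                "sha256": sha256_of(path),
                "bytes": path.stat().st_size,
            })
    return records

# Illustrative layout; adapt the roots and globs to your repo.
aibom = {
    "datasets": inventory(Path("datasets"), ("*.parquet", "*.jsonl")),
    "model_weights": inventory(Path("weights"), ("*.safetensors", "*.bin")),
    "training_code": inventory(Path("training"), ("*.py",)),
}
print(json.dumps(aibom, indent=2))
```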
What documentation must a high-risk AI system carry?
Annex IV defines the technical documentation for high-risk AI systems. It includes the general description of the AI system, detailed description including methods and steps for development, description of data and data governance, validation and testing procedures, cybersecurity measures, and -- critically for supply chain -- the computational resources used for training, testing, and validation, and the methods used to develop the AI system "and, where relevant, the use of pre-trained systems or tools provided by third parties."
In practice the technical documentation must inventory pretrained models, datasets, libraries, and tooling. This aligns closely with SBOM and AIBOM practices. Commission guidance in late 2025 emphasized machine-readable technical documentation to support post-market monitoring and authority inspections.
How do CRA obligations interact with the AI Act?
The Cyber Resilience Act applies to products with digital elements placed on the EU market. Most obligations apply from 11 December 2027, with the Article 14 vulnerability and incident notification obligations applying from 11 September 2026. For AI systems that are also products with digital elements, CRA Article 12 provides the bridge: a high-risk AI system that meets the CRA essential requirements is deemed to comply with the cybersecurity requirement in Article 15 of the AI Act.
The CRA essential requirements include secure-by-design, secure-by-default, vulnerability handling, SBOM production (Annex I Part II), and coordinated vulnerability disclosure. The SBOM requirement is explicit in CRA: a product with digital elements must have an SBOM covering at minimum the top-level dependencies. For AI products, that SBOM extends to model weights, datasets, and training infrastructure components.
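To make the extension concrete, here is a hand-rolled sketch of a CycloneDX-style document that lists a top-level library dependency alongside model weights and a training dataset as first-class components. A real pipeline would emit this through SBOM tooling rather than a literal dict, and the component names, versions, and hash are placeholders.

```python
import json

# Hand-rolled CycloneDX-style structure (illustrative only).
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.6",
    "components": [
        {   # a conventional top-level software dependency
            "type": "library",
            "name": "torch",
            "version": "2.4.0",
        },
        {   # model weights as a first-class component
            "type": "machine-learning-model",
            "name": "acme-classifier",      # placeholder name
            "version": "3.1.0",
            "hashes": [
                {"alg": "SHA-256", "content": "<sha256-of-weights>"},
            ],
        },
        {   # training dataset as a component with its own identity
            "type": "data",
            "name": "acme-train-set",       # placeholder name
            "version": "2026-01",
        },
    ],
}
print(json.dumps(sbom, indent=2))
```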
What logging and post-market obligations apply?
Article 12 requires high-risk AI systems to automatically record events (logs) over their lifetime. Article 26(6) requires deployers to retain logs for at least six months. Article 72 requires providers to establish a post-market monitoring system proportionate to the risks. Article 73 requires reporting of serious incidents to market surveillance authorities within specific windows (no later than 15 days for serious incidents generally, 2 days for a widespread infringement or serious disruption of critical infrastructure, and 10 days in the event of a death).
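A small sketch of those windows as code, usable as a guardrail in an incident-response runbook. The category labels are our own shorthand; the clock starts when the provider becomes aware of the incident, and the Act also requires reporting "immediately", so these are outer bounds only.

```python
from datetime import datetime, timedelta

# Article 73 outer reporting deadlines, keyed by our own shorthand labels.
REPORTING_WINDOWS = {
    "serious_incident": timedelta(days=15),
    "widespread_infringement": timedelta(days=2),  # incl. critical-infrastructure disruption
    "death": timedelta(days=10),
}

def reporting_deadline(aware_at: datetime, category: str) -> datetime:
    """Latest permissible report time for a given incident category."""
    return aware_at + REPORTING_WINDOWS[category]

print(reporting_deadline(datetime(2026, 9, 1, 9, 0), "widespread_infringement"))
# 2026-09-03 09:00:00
```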
Operational implication: your logging pipeline must survive supply chain compromise scenarios. Log tampering attacks via compromised logging libraries are a supply chain issue. Independent log integrity controls and SBOM-verified logging components are the baseline.
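A minimal hash-chain sketch of the idea: each log entry commits to the previous entry's digest, so truncation or in-place edits break verification. A production system would add signing and external anchoring; this shows only the chaining.

```python
import hashlib
import json

def append_entry(chain: list[dict], event: dict) -> None:
    """Each entry commits to the previous digest, so edits or
    truncation anywhere in the log break verification downstream."""
    prev = chain[-1]["digest"] if chain else "0" * 64
    payload = json.dumps({"prev": prev, "event": event}, sort_keys=True)
    chain.append({
        "prev": prev,
        "event": event,
        "digest": hashlib.sha256(payload.encode()).hexdigest(),
    })

def verify(chain: list[dict]) -> bool:
    """Recompute the chain from the genesis value and compare digests."""
    prev = "0" * 64
    for entry in chain:
        payload = json.dumps({"prev": prev, "event": entry["event"]}, sort_keys=True)
        if entry["prev"] != prev or entry["digest"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev = entry["digest"]
    return True

log: list[dict] = []
append_entry(log, {"ts": "2026-08-02T10:00:00Z", "msg": "inference served"})
append_entry(log, {"ts": "2026-08-02T10:00:01Z", "msg": "threshold exceeded"})
print(verify(log))                     # True
log[0]["event"]["msg"] = "tampered"
print(verify(log))                     # False
```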
Who enforces the Act, and what are the penalties?
The AI Office sits within the European Commission and oversees GPAI model providers. National market surveillance authorities (designated by each Member State) enforce high-risk AI system obligations. The European Artificial Intelligence Board coordinates across Member States. The Scientific Panel advises the AI Office on GPAI systemic risks.
Penalties under Article 99 reach EUR 35 million or 7% of worldwide annual turnover for prohibited practice violations; EUR 15 million or 3% for most other operator obligations (fines for GPAI model providers sit at the same level under Article 101); and EUR 7.5 million or 1% for supplying incorrect information. These ceilings exceed the GDPR's in absolute terms.
What should AI teams have in place by August 2026?
Practical priorities for a high-risk AI system provider:
- AIBOM inventory covering datasets (sources, licensing, integrity hashes), pretrained models (provenance, version, license), fine-tuning data, training scripts, and deployment runtime components.
- Supply chain integrity for model weights with signed attestations (Sigstore-compatible signatures) on model artifacts; see the verification sketch after this list.
- Vulnerability monitoring for the classical software supply chain under the AI system (inference servers, orchestration, monitoring stacks) with CVE correlation.
- Data governance records (Article 10) demonstrating data source due diligence, bias analysis, and relevant dataset provenance.
- Conformity assessment readiness -- most Annex III high-risk AI systems can use the internal control procedure under Article 43, but biometrics (Annex III point 1) can require notified body involvement where harmonised standards are not fully applied, and Annex I product-embedded systems follow their sectoral procedures.
- Serious incident reporting playbook with technical triggers that connect AIBOM awareness to the Article 73 reporting windows (2, 10, or 15 days depending on incident type).
- Downstream documentation packages if you are a GPAI provider (Annex XII) or an upstream component provider under Article 25(4).
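For the signed-attestation item above, the sketch below shows a detached-signature check over a weight file's digest, using plain Ed25519 from the cryptography package as a simplified stand-in for a full Sigstore flow. The file path is a placeholder, and real key management and transparency logging are elided.

```python
import hashlib
from pathlib import Path

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def weight_digest(path: Path) -> bytes:
    """SHA-256 of the weight file, streamed in 1 MiB chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.digest()

# Signing side (release pipeline). Key handling is elided; in a real
# Sigstore flow, keyless signing and a transparency log replace this.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

weights = Path("weights/model.safetensors")  # placeholder path
signature = private_key.sign(weight_digest(weights))

# Verifying side (deployment pipeline) rejects altered weights.
try:
    public_key.verify(signature, weight_digest(weights))
    print("weights verified")
except InvalidSignature:
    print("REJECT: weights do not match signed digest")
```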
How Safeguard.sh Helps
Safeguard.sh extends classical SBOM capabilities into the AI supply chain so teams building or deploying AI systems under the EU AI Act can produce the evidence the regulation demands. The platform generates AIBOMs covering datasets, pretrained models, fine-tuning pipelines, and inference stacks in CycloneDX 1.6 ML-BOM and SPDX 3.0 AI Profile formats. Model weight integrity is captured with signed attestations, and dataset provenance is traced to source URLs, licensing terms, and integrity hashes.
For high-risk AI system providers, Safeguard.sh produces Annex IV technical documentation excerpts, Annex XII downstream provider packs, and post-market monitoring inputs for Article 72. The platform's CVE and ML-specific vulnerability correlation covers both classical software dependencies and AI-specific issues (compromised model repositories, known-vulnerable pretrained models, license incompatibilities).
For GPAI providers, Safeguard.sh aligns with the Commission's public training data summary template and supports the Annex XI technical documentation requirements. For deployers satisfying Article 26 obligations, the platform generates logging integrity attestations and supports the six-month log retention demands with tamper-evident storage. Teams using Safeguard.sh meet the EU AI Act software supply chain expectations alongside CRA cybersecurity obligations from a single coherent control plane.