Policy

GPAI Code of Practice: The 2025 Signatory Landscape

The General-Purpose AI Code of Practice was published on 10 July 2025 in three chapters. Most major providers signed in full; xAI signed only the Safety and Security chapter, and Meta declined to sign.

Nayan Dey
Senior Security Engineer
6 min read

On 10 July 2025 the European AI Office published the final General-Purpose AI Code of Practice, the voluntary instrument intended to evidence compliance with the AI Act's Chapter V obligations for providers of general-purpose AI (GPAI) models. The Commission and the AI Board subsequently confirmed the Code as an "adequate" voluntary tool, and the AI Office invited providers to sign before the GPAI obligations entered into application on 2 August 2025. The first list of signatories was published on 1 August 2025. The Code is now the most concrete operational framework for Articles 53 and 55 of Regulation (EU) 2024/1689, and the signing patterns from major providers reveal where compliance burdens land hardest.

What does the Code cover?

The Code is structured in three chapters, each addressing a distinct strand of obligation under the AI Act. Chapter I — Transparency — applies to every GPAI model provider. It operationalises Article 53 by detailing the documentation a provider must maintain about the training process, intended use, energy use, and known limitations, and includes the Model Documentation Form that signatories must complete and keep up to date. Chapter II — Copyright — also applies to every GPAI model provider. It addresses Article 53(1)(c) by requiring providers to put in place a copyright policy, respect rights-holder opt-outs under Article 4(3) of the Copyright Directive, and document mitigations against reproducing copyrighted training material at inference time. Chapter III — Safety and Security — applies only to providers of GPAI models with systemic risk, presumed under Article 51 where cumulative training compute exceeds 10^25 FLOP. This chapter is where the substantive cybersecurity, model evaluation, and incident reporting commitments sit.
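
To make Chapter I's documentation duty concrete, here is a minimal sketch of the kind of record the Model Documentation Form calls for. The field names are illustrative assumptions drawn from the paragraph above, not the form's actual schema.

```python
from dataclasses import dataclass

# Illustrative Model Documentation Form record. All field names are
# invented for this sketch; the Code's actual form defines its own schema.
@dataclass
class ModelDocumentation:
    model_name: str
    provider: str
    intended_uses: list[str]
    training_data_summary: str      # high-level description of the training process
    energy_use_kwh_training: float  # estimated training energy consumption
    known_limitations: list[str]
    last_updated: str               # ISO date of the most recent revision

doc = ModelDocumentation(
    model_name="example-model-1",
    provider="Example AI",
    intended_uses=["text generation", "summarisation"],
    training_data_summary="Web crawl plus licensed corpora; filtered for PII.",
    energy_use_kwh_training=2.4e6,
    known_limitations=["hallucination under long-context retrieval"],
    last_updated="2025-08-01",
)
```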

Who has signed?

The signatory list published on 1 August 2025 includes most major foundation model providers. Google, OpenAI, Anthropic, Microsoft, Mistral AI, Amazon, IBM, and Cohere all signed across the chapters relevant to their models. xAI signed only Chapter III on Safety and Security, declining the transparency and copyright chapters. The consequence is real: xAI must still demonstrate compliance with the AI Act's transparency and copyright obligations, but through alternative adequate means rather than the streamlined Code route. Meta Platforms publicly declined to sign at launch, citing concerns about the scope of obligations relative to the Act's underlying text. Smaller European providers, including Aleph Alpha and several open-weight model maintainers, signed the transparency and copyright chapters, where the bar is most operationally relevant to their distribution model.

What is the FLOP threshold and why does it matter?

Article 51(2) of the AI Act presumes systemic risk where the cumulative compute used for training a GPAI model exceeds 10^25 floating-point operations. The presumption works in both directions: a provider can present arguments to rebut it for a model above the threshold, and the Commission can designate a model below it. As of the August 2025 entry into application, the publicly identified candidates above the threshold include OpenAI's GPT-4 and GPT-5 series, Google DeepMind's Gemini family, Anthropic's Claude 3 Opus and later, Meta's Llama 3.1 405B and successors, and xAI's Grok-2 and -3. Inclusion above the threshold triggers the additional obligations of Article 55: state-of-the-art model evaluations including adversarial testing, systemic risk assessment, serious incident reporting to the AI Office, and adequate cybersecurity protection for the model and its physical infrastructure.
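
For intuition on where that line falls, here is a back-of-envelope sketch using the widely cited ~6 × parameters × tokens approximation for dense transformer training compute. The example figures are illustrative, and this heuristic is no substitute for a provider's own compute accounting.

```python
# Rough check against the Article 51(2) presumption threshold, using the
# common ~6 FLOP per parameter per training token approximation for
# dense transformers. Illustrative only; not a legal determination.

SYSTEMIC_RISK_THRESHOLD_FLOP = 1e25  # Article 51(2) presumption

def estimated_training_flop(params: float, tokens: float) -> float:
    """Approximate dense-transformer training compute: ~6 * N * D."""
    return 6.0 * params * tokens

def presumed_systemic_risk(params: float, tokens: float) -> bool:
    return estimated_training_flop(params, tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOP

# Example: a hypothetical 405e9-parameter model trained on 15e12 tokens.
flop = estimated_training_flop(405e9, 15e12)
print(f"{flop:.2e} FLOP -> presumption triggered: {presumed_systemic_risk(405e9, 15e12)}")
# ~3.65e25 FLOP -> True (comfortably above the 1e25 threshold)
```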

What are the timelines?

The GPAI obligations under Chapter V applied to all new models placed on the market from 2 August 2025. For models that were already on the market before that date, the compliance deadline is 2 August 2027. The AI Office has been clear that informal collaboration takes priority over enforcement during the first year; formal enforcement powers — requests for information, model access, recalls, and fines — apply from 2 August 2026. Fines for breaches of Chapter V obligations can reach up to 15 million EUR or 3% of total worldwide annual turnover, whichever is higher, per Article 101. The AI Office can also order corrective measures and, ultimately, withdrawal from the Union market.
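To make the "whichever is higher" rule in Article 101 concrete, a one-line sketch with an illustrative turnover figure:

```python
def max_chapter_v_fine(worldwide_annual_turnover_eur: float) -> float:
    """Article 101 ceiling: the higher of EUR 15 million or 3% of
    total worldwide annual turnover."""
    return max(15_000_000.0, 0.03 * worldwide_annual_turnover_eur)

# A provider with EUR 2bn turnover faces a ceiling of EUR 60m, not 15m.
print(f"{max_chapter_v_fine(2_000_000_000):,.0f}")  # -> 60,000,000
```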

What does signing actually commit a provider to?

Three operational commitments deserve highlighting. First, the Model Documentation Form must be maintained throughout the model lifecycle and updated within 60 days of material changes. Providers commit to making this documentation available to the AI Office and to downstream deployers on request. Second, for Chapter III signatories, a Safety and Security Framework must be in place that includes pre-deployment evaluations against catastrophic risk categories (CBRN uplift, offensive cyber capability, loss of human oversight, and others), red-team testing, and a defined update cycle. Third, providers must report serious incidents to the AI Office. The Code defines serious incidents in alignment with the broader Article 73 regime of the AI Act, with thresholds keyed to large-scale harm, critical infrastructure disruption, and fundamental rights infringements.
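
A minimal sketch of how the 60-day update window could be monitored internally. The Code does not prescribe this mechanism; the function and parameter names are assumptions for illustration.

```python
from datetime import date, timedelta

UPDATE_WINDOW = timedelta(days=60)  # window after a material change, per the Code

def documentation_overdue(last_material_change: date,
                          last_form_update: date,
                          today: date | None = None) -> bool:
    """True if a material change to the model has gone undocumented
    past the 60-day window."""
    today = today or date.today()
    changed_since_update = last_form_update < last_material_change
    return changed_since_update and (today - last_material_change) > UPDATE_WINDOW

# Example: change on 1 June, form last touched in May, checked on 15 August.
print(documentation_overdue(date(2025, 6, 1), date(2025, 5, 10), date(2025, 8, 15)))
# -> True (75 days elapsed, form not updated since the change)
```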

| Chapter | Applies to | Key commitment | Operational artefact |
|---------|------------|----------------|-----------------------|
| I Transparency | All GPAI providers | Maintain model documentation | Model Documentation Form |
| II Copyright | All GPAI providers | Respect Article 4(3) opt-outs | Copyright policy and crawler controls |
| III Safety & Security | Systemic-risk GPAI only | Pre-deployment evaluations, incident reporting | Safety and Security Framework |

What happens to non-signatories?

Declining to sign the Code is not in itself a breach of the AI Act. Non-signatories must still comply with Chapter V obligations by alternative means and must demonstrate that those means are at least as effective as the Code. In practice, the burden of proof sits with the provider: the AI Office's request-for-information powers under Article 91 can demand documentation showing how Articles 53 and 55 are satisfied. Where a signatory benefits from a structured presumption of compliance, a non-signatory will need to construct equivalent evidence from scratch — meaning the operational cost of declining the Code is real even if the legal posture appears neutral.

How Safeguard Helps

Safeguard's AI-BOM module captures the supply-chain provenance of GPAI models and their downstream integrations, mapping each model to its provider, version, training cutoff, and Code of Practice signing status. That signing status becomes a procurement filter when downstream deployers vet upstream GPAI dependencies. The platform tracks Model Documentation Form fields against your AI inventory so internal governance reviewers can flag undocumented model uses ahead of an AI Office request for information. For systemic-risk obligations, Safeguard policy gates can block deployments that lack an attached evaluation report or that fall outside an approved release window after a material change. Incident reporting integrations align with both Article 73 of the AI Act and Article 14 of the Cyber Resilience Act, so a single report taxonomy feeds two regulatory channels without duplicate triage.
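
As a sketch of the kind of deployment gate described above — all class and field names here are hypothetical, not Safeguard's actual API:

```python
from dataclasses import dataclass

# Hypothetical deployment-gate predicate; Safeguard's real policy
# engine and schema will differ.
@dataclass
class ModelRelease:
    provider: str
    version: str
    systemic_risk: bool               # above the Article 51(2) presumption
    evaluation_report_attached: bool
    within_approved_release_window: bool

def deployment_allowed(release: ModelRelease) -> bool:
    """Block systemic-risk releases that lack an attached evaluation
    report or fall outside the approved release window."""
    if not release.systemic_risk:
        return True
    return release.evaluation_report_attached and release.within_approved_release_window
```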
