Trusted publishing on PyPI is one of the best things to happen to Python supply chain security in the last five years. It removed the long-lived API token that used to sit in a thousand GitHub repository secrets and replaced it with short-lived OIDC credentials tied to a specific workflow. The feature works. What does not always work is how teams configure it, and the failure modes are subtle enough that they survive code review.
What does PyPI trusted publishing actually replace?
Trusted publishing replaces the PYPI_API_TOKEN secret with an OIDC-based exchange. Your CI runner presents an identity token to PyPI, PyPI verifies the token against the configured publisher (a GitHub workflow, a GitLab pipeline, or a Google Cloud identity), and if everything matches, PyPI issues a short-lived upload token scoped to that specific project. The long-lived token goes away, and with it, the whole category of attacks where a leaked CI secret lets someone publish a malicious release of your package.
The important word in that description is "matches." A trusted publisher configuration is a tuple of repository owner, repository name, workflow filename, and, optionally, an environment name. If any of those values drifts from what PyPI has on file, the publish fails. If everything matches but the workflow is doing something you did not expect, the publish succeeds and you have a problem.
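On the GitHub Actions side, the whole exchange reduces to two things: granting the job permission to mint an OIDC token, and invoking the publish action with no token input at all. A minimal sketch, using the real pypa/gh-action-pypi-publish action (the workflow name and trigger are illustrative):

```yaml
# .github/workflows/publish.yml
name: publish
on:
  release:
    types: [published]

jobs:
  pypi-publish:
    runs-on: ubuntu-latest
    permissions:
      # Required so the runner can request an OIDC identity token.
      id-token: write
    steps:
      - uses: actions/checkout@v4
      - name: Build distributions
        run: python -m pip install build && python -m build
      # No password or token input: the action exchanges the OIDC
      # token for a short-lived, project-scoped upload token.
      - uses: pypa/gh-action-pypi-publish@release/v1
```

Note what is absent: there is no `password:` input and no `PYPI_API_TOKEN` secret anywhere in the file. That absence is the point.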
Why does the "environment" field matter more than people think?
The environment field is the most commonly skipped configuration and the most commonly exploited weakness. When you register a trusted publisher without an environment, any workflow matching the other three fields can publish. That includes a pull request from a first-time contributor if the workflow runs on pull_request triggers and references the publishing job by name. It also includes a fork that a maintainer accidentally merges.
Binding the publisher to a specific GitHub Actions environment, ideally one with required reviewers, narrows the attack surface dramatically. The environment acts as a second check: even if an attacker manages to trigger the publish workflow, they cannot actually reach the publish step without the environment's protection rules releasing the job. Every PyPI trusted publisher I configure now has an environment attached, and if a project cannot use environments for some reason, I push them to use tag-protected workflows instead.
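Attaching the environment is a one-line change in the workflow, paired with the matching environment name in the PyPI publisher form. A sketch, where `pypi-release` is a hypothetical environment name you would create under the repository's settings with required reviewers enabled:

```yaml
jobs:
  pypi-publish:
    runs-on: ubuntu-latest
    # The job does not start until the environment's protection rules
    # (e.g. required reviewers) release it, and PyPI independently
    # checks the environment name carried in the OIDC claims.
    environment: pypi-release
    permissions:
      id-token: write
    steps:
      - uses: pypa/gh-action-pypi-publish@release/v1
```

Both checks must pass: GitHub gates the job on the environment's rules, and PyPI rejects the token if the claimed environment does not match the registered one.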
How do reusable workflows break trusted publishing in surprising ways?
Reusable workflows inherit OIDC tokens from the caller, not from the workflow file that contains the publish step. PyPI's publisher matching, however, is done against the called workflow's filename. If you have a reusable publish.yml that lives in a shared repository and is invoked from dozens of projects, your PyPI trusted publisher configuration has to name the shared workflow's repository and path, not the calling project's.
This surprises everyone the first time they hit it, and the error message from PyPI when it fails is not especially helpful. The fix is to either register the shared workflow as the publisher (and accept that anyone who can call it can publish to your project, which is usually fine inside a single organization) or to inline the publish step into each project's own workflow file. I generally recommend the latter for high-value packages because it keeps the publish-critical code auditable from a single commit history.
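The shape of the trap is easier to see in the caller. A sketch, where `your-org/ci-templates` is a hypothetical shared repository holding the reusable workflow:

```yaml
# .github/workflows/release.yml in the calling project
jobs:
  publish:
    # The publish step lives in the shared repo's publish.yml. Per the
    # matching behavior described above, the PyPI trusted publisher must
    # be registered against that shared workflow, not this release.yml.
    uses: your-org/ci-templates/.github/workflows/publish.yml@main
    permissions:
      id-token: write
    secrets: inherit
```

The intuitive-but-wrong move is to register `release.yml` from the calling project as the publisher, which fails at publish time with an unhelpful error.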
What happens when you rename your repository or workflow file?
Trusted publishing breaks silently the next time you try to release. PyPI matches on exact repository and workflow names, and there is no "previous names" list. A repository rename, a workflow rename, or an org migration will invalidate every trusted publisher configured against the old identity. The broken configurations do not get removed; they just stop matching.
The operational consequence is that you find out about the problem when you try to ship, usually at a moment when you want to be shipping rather than debugging CI. The fix is mechanical: log into PyPI, update the publisher to the new repository and workflow names, and retry. The preventative measure is a checklist entry on any repository rename or org migration that explicitly mentions PyPI configuration. A depressing number of maintainers have a disaster story that starts with "we moved our repo to a new org."
How should you think about dual publishing to PyPI and TestPyPI?
Dual-targeting with trusted publishers is a common stumbling block. TestPyPI requires its own publisher configuration, and its OIDC audience differs from production PyPI's. Teams that copy their production workflow verbatim into a test-release workflow end up requesting a token for the wrong audience, and the failure looks like a generic auth error.
The cleaner pattern is one workflow per target with its own publisher registration. If that feels like too much configuration, a single workflow with a matrix job is fine, but make sure the OIDC audience is set explicitly for each target rather than relying on defaults. The default audience has changed once already in the feature's history, and code that assumed the default would stay the same quietly broke.
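If you do keep both targets in one workflow, the sketch below shows the shape, assuming a publisher is registered separately on each index and that `testpypi` and `pypi` are hypothetical environment names. The `repository-url` input is how pypa/gh-action-pypi-publish is pointed at TestPyPI, and the action derives the matching OIDC audience from it:

```yaml
jobs:
  publish-testpypi:
    runs-on: ubuntu-latest
    environment: testpypi
    permissions:
      id-token: write
    steps:
      - uses: pypa/gh-action-pypi-publish@release/v1
        with:
          # Target TestPyPI; requires its own trusted publisher
          # registration on test.pypi.org.
          repository-url: https://test.pypi.org/legacy/

  publish-pypi:
    # Only publish to production once the TestPyPI upload succeeds.
    needs: publish-testpypi
    runs-on: ubuntu-latest
    environment: pypi
    permissions:
      id-token: write
    steps:
      - uses: pypa/gh-action-pypi-publish@release/v1
```

Two jobs, two environments, two publisher registrations: the duplication is the documentation.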
What about attestations and provenance alongside trusted publishing?
PyPI started attaching Sigstore-based attestations to uploads published via trusted publishing in 2024, and adoption accelerated through 2025. In 2026 it is a standard feature; most packages published through the trusted path now carry a verifiable attestation. The nuance is that the attestation's subject is the distribution file, not the source tree, and the workflow identity it captures is only as trustworthy as your publisher configuration.
If your publisher is misconfigured to match an overly broad set of workflows, your attestations will still verify cryptographically, but they will be attesting to builds that should not have been allowed to publish. The attestation layer does not rescue a bad policy configuration; it records it. This is a recurring theme in modern supply chain security: the cryptography works, and the human-readable policies are where the bugs live.
How do you verify trusted publishing on the consumer side?
The pip client has had verification support for uploaded attestations since late 2024, but it is not on by default and probably will not be for another cycle. In the meantime, the practical pattern is to use pip install --require-hashes with a locked requirements file, and to build tooling that cross-checks the hashes in your lockfile against the attested artifact metadata PyPI exposes. This gives you a reasonable guarantee that the package you are installing is the same one that came out of the attested build, without betting your CI on pip's still-maturing verification path.
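The lockfile side of that cross-check is simple enough to sketch. A minimal illustration, assuming requirements lines carry `--hash=sha256:...` pins of the kind `pip-compile --generate-hashes` produces; the function names and the inline lockfile are illustrative, not a real tool:

```python
import hashlib
import re


def hashes_from_requirements(text: str) -> set[str]:
    """Collect every sha256 pin from a hash-locked requirements file."""
    return set(re.findall(r"--hash=sha256:([0-9a-f]{64})", text))


def artifact_matches(lockfile_text: str, artifact_bytes: bytes) -> bool:
    """Check a downloaded distribution against the lockfile's pins."""
    digest = hashlib.sha256(artifact_bytes).hexdigest()
    return digest in hashes_from_requirements(lockfile_text)


# Illustrative lockfile entry pinning a (fake) wheel's contents.
lock = (
    "example-pkg==1.0.0 \\\n"
    "    --hash=sha256:" + hashlib.sha256(b"wheel-bytes").hexdigest() + "\n"
)

print(artifact_matches(lock, b"wheel-bytes"))  # True
print(artifact_matches(lock, b"tampered"))     # False
```

The same digest comparison extends naturally to the attested artifact metadata PyPI exposes: the sha256 in your lockfile, the file you downloaded, and the attestation subject should all agree.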
Internal mirrors and caching proxies add another wrinkle. Tools like devpi and Artifactory can be configured to pass through attestation metadata, but the default configurations in older installations strip it. Audit your mirror configuration specifically for attestation handling; it is almost never obvious from the main configuration UI.
How Safeguard.sh Helps
Safeguard.sh validates PyPI trusted publisher configurations across your organization, flags publishers missing environment scoping, and alerts when a repository rename or org migration has invalidated an existing publisher before your next release fails. We correlate uploaded attestations with your known-good workflow identities and raise a flag when a package suddenly starts publishing from a different builder. The result is trusted publishing that actually behaves like a policy boundary rather than a checkbox that once was green.