GitHub Actions' built-in cache is one of those features that looks like an optimization and turns out to be a trust boundary nobody drew. Teams cache npm installs, Go module downloads, Docker build layers, Rust target directories, and Bazel output caches to shave minutes off pipelines. In 2025 a series of public disclosures reframed that cache as a durable, cross-workflow, cross-branch storage surface that an attacker with minimal privileges can poison. The result is a new attack class: convince any workflow to write a malicious cache entry, and watch it execute in a higher-privilege workflow some number of minutes later.
This post consolidates the public research on GitHub Actions cache poisoning as of early 2026, describes the primitives that make it work, and walks through the remediation patterns that have held up in practice.
What Is the Cache Poisoning Primitive?
GitHub Actions' actions/cache action uploads a keyed tarball to the repository's cache storage. The key is derived from a cache-key expression in the workflow, commonly an OS identifier combined with a hash of lockfiles. On the next workflow run, actions/cache looks up the key and restores the tarball before subsequent steps run. The cache is scoped per repository; within a repository, GitHub's documented access rule is that a workflow triggered on a branch can read caches from its own branch, from the default branch, and, for pull requests, from the base branch.
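As a concrete anchor, here is a minimal sketch of the pattern in a hypothetical Node.js job (names and versions are illustrative); the key expression is the part everything below turns on:

```yaml
# Minimal sketch of the standard actions/cache pattern (hypothetical job).
name: ci
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Restore happens here, before any build step. On a key hit the
      # tarball is unpacked into node_modules as-is, with no verification.
      - uses: actions/cache@v4
        id: deps-cache
        with:
          path: node_modules
          key: ${{ runner.os }}-node-modules-${{ hashFiles('package-lock.json') }}
      - if: steps.deps-cache.outputs.cache-hit != 'true'
        run: npm install
      - run: npm test
```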
The primitive is this: if a workflow runs in a less-trusted context and is permitted to write a cache key that a more-trusted workflow will later read, you have a poisoning channel. The less-trusted writer can place a tampered dependency, a modified binary, or a malicious action into the cache, and the more-trusted reader will pick it up. Because cache reads happen very early in a workflow, before most policy checks, the tampered content can execute with the full authority of the victim workflow.
Adnan Khan, Praetorian, and the Palo Alto Networks Unit 42 team all published 2025 research tracing specific variants of this pattern. The core insight across their work is that GitHub's cache scoping was designed for performance, not security, and several of the scoping rules allow cross-branch cache inheritance in ways that undermine isolation.
Where Does CVE-2024-50338 Fit?
CVE-2024-50338 itself was a GitHub Actions toolkit vulnerability affecting how cache keys were derived under certain conditions. The adjacent public discussion in early 2025 expanded into the broader class. One canonical variant: a repository's default-branch workflow builds a cache under the key node-modules-${{ hashFiles('package-lock.json') }}. A contributor opens a pull request that minimally modifies package-lock.json. The PR workflow, running in pull_request context, is permitted to write a cache with that key because the PR's base is the default branch. It installs dependencies and, along the way, can write any file it wants into node_modules before the cache is saved. The default branch's next run picks up the poisoned cache.
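Condensed into a single hypothetical workflow, the vulnerable shape looks like this; the essential property is that both triggers resolve to the same key:

```yaml
# Vulnerable shape: two trust contexts, one cache key.
name: build
on: [push, pull_request]   # both triggers share the key below
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/cache@v4
        with:
          path: node_modules
          # The key depends only on the lockfile. A PR that touches
          # package-lock.json mints a fresh key it fully controls and
          # can stage arbitrary files in node_modules before the save.
          key: node-modules-${{ hashFiles('package-lock.json') }}
      - run: npm install && npm test
```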
GitHub hardened cache scoping for pull requests in several updates across 2024 and 2025, culminating in the March 2025 changes to the default cache-read scope for pull_request-triggered workflows. Those changes materially narrow the naive variant but do not eliminate the attack class. Workflows that use the same cache key for push and pull_request events, or that use pull_request_target, remain exposed.
What Does a Realistic Exploit Look Like?
The end-to-end flow in the 2025 Praetorian disclosure: repository A has a release workflow triggered on tag creation. The release workflow builds a binary, signs it, and publishes to a package registry. The release workflow restores a cache of the Rust target directory to speed up compilation. The cache key is stable across release tags because the dependencies rarely change. A contributor's pull request workflow runs the same build steps to validate the PR and writes to the same cache key. The contributor opens a PR that, as a side effect of an innocuous-looking change, writes a modified object file into the target directory before the cache step runs. The next release tag restores the cache, recompiles only the top-level crate, links against the poisoned object file, and publishes a backdoored signed binary.
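The release side of that flow reduces to something like the following (names and steps are hypothetical). Nothing in the key binds it to the release trigger, so it is stable across tags and shared with whatever else writes it:

```yaml
# Hypothetical release workflow restoring a long-lived compilation cache.
name: release
on:
  push:
    tags: ['v*']
jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/cache@v4
        with:
          path: target
          # Stable across release tags because dependencies rarely
          # change; also the key the PR validation build writes to.
          key: rust-target-${{ hashFiles('Cargo.lock') }}
      # Incremental compilation trusts restored object files: only the
      # top-level crate is rebuilt, the rest is linked as restored.
      - run: cargo build --release
      - run: ./scripts/sign-and-publish.sh   # placeholder for sign + publish
```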
Detection is hard because the release workflow looks normal. It builds from a legitimate tag, the source commit is what it claims to be, and the artifact is signed by the right key. Only the cache intermediate is tainted, and caches are ephemeral storage nobody audits. SLSA attestations that cover only source-to-build provenance miss this entirely.
Why Don't Standard Signing Schemes Catch It?
Sigstore, cosign, in-toto, and SLSA provenance all address a specific question: did the binary I'm holding come from the source I think it came from, via the build platform I think it came from? Cache poisoning says: yes, the binary came from that source and that build platform, but one of the build platform's intermediate stages was fed tampered content. The provenance attestation is technically correct and practically useless, because the build process itself is deterministic only with respect to pristine inputs.
SLSA version 1.0 introduces the concept of hermetic builds and build-environment identifiers at Level 3, which partially addresses this by requiring the build to declare every input, including intermediate caches, in its provenance. A fully hermetic build cannot poison itself through an intermediate cache because it re-derives everything from declared inputs. In practice very few projects outside of Google-scale monorepos run fully hermetic builds, and partial hermeticity frequently leaves cache paths outside the attestation envelope.
Which Other Cache Layers Share the Shape?
GitHub Actions cache is the most accessible example, but the attack shape generalizes to any CI/CD cache that is writable from one trust context and readable from another. BuildKit's registry cache, when pushed to a shared registry with lax permissions, has the same property. GitLab CI's distributed cache, especially when configured with S3 backends that multiple projects share, is another. Turborepo remote caches, Nx Cloud, Bazel remote caches without per-user authorization, and sccache installations with S3 backends have all produced at least one 2025 disclosure of a similar shape.
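For comparison, here is a hedged sketch of the GitLab CI version of the shape; the key is global, and the isolation question lives in the runner's distributed-cache backend configuration, not in this file:

```yaml
# .gitlab-ci.yml fragment (hypothetical). With runners configured to use
# a shared S3 bucket as the distributed cache backend, this key is
# writable and readable across every ref that runs the job.
build:
  stage: build
  cache:
    key: deps-cache          # one literal key for all branches and MRs
    paths:
      - node_modules/
  script:
    - npm install
    - npm test
```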
The Pulumi team disclosed in mid-2025 a variant affecting their own infrastructure where a shared build cache between two internal pipelines allowed test-pipeline output to influence a release-pipeline binary. That disclosure, published as a post-mortem rather than a CVE, is a useful artifact because it shows how the attack class appears inside mature organizations, not just in open-source CI.
How Do You Detect Existing Poisoning?
Detection is difficult because caches are high-volume, ephemeral, and not commonly logged in a security context. The approaches that work: enforce cache-key schemas that bind the cache to a specific workflow and trigger, so that PR caches never collide with release caches. Hash the complete cache tarball post-restore and compare to a pinned expected hash for critical pipelines, treating mismatches as failures rather than refreshes. Run release pipelines with cache disabled or with a separate, trust-scoped cache namespace that only release workflows can write to. Use SLSA Level 3 build platforms for any pipeline producing a production artifact.
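The first two approaches compose naturally into one pair of steps. A sketch, with placeholder paths and a placeholder digest that you would pin and rotate out-of-band (on a cache miss you would skip the check or re-pin):

```yaml
      - uses: actions/cache@v4
        with:
          path: vendor
          # Key schema bound to workflow identity and trigger, so a
          # pull_request-context write can never collide with a
          # release-context read.
          key: ${{ github.workflow }}-${{ github.event_name }}-vendor-${{ hashFiles('go.sum') }}
      # Fail closed: a digest mismatch is a pipeline failure, not a
      # cache refresh.
      - name: Verify restored cache against pinned digest
        run: |
          EXPECTED="f3a9e1...placeholder"   # pinned digest, maintained out-of-band
          ACTUAL=$(find vendor -type f -print0 | sort -z \
                   | xargs -0 sha256sum | sha256sum | cut -d' ' -f1)
          [ "$ACTUAL" = "$EXPECTED" ] || { echo "cache digest mismatch: $ACTUAL" >&2; exit 1; }
```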
Retrospective detection is harder. If you suspect poisoning occurred, the data sources are limited: GitHub's cache usage API shows which workflows read and wrote which cache keys, but not what the cache content was. The practical recommendation is to rotate any signing key exposed to a potentially poisoned pipeline, rebuild affected artifacts from scratch with caching disabled, and compare the clean rebuilds against what was published.
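What usage data does exist is reachable through the REST API. A sketch of an audit step, assuming the gh CLI available on hosted runners and a job granted `actions: read`; entries whose ref is a pull request merge ref (refs/pull/...) sharing keys with release workflows are the signal to chase:

```yaml
      - name: Dump cache provenance (hypothetical audit step)
        run: |
          gh api "/repos/${{ github.repository }}/actions/caches?per_page=100" \
            --jq '.actions_caches[] | [.key, .ref, .created_at, .last_accessed_at] | @tsv'
        env:
          GH_TOKEN: ${{ github.token }}
```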
What Hardening Patterns Are Teams Actually Using?
The patterns I see holding up under production load in 2026: separate caches by trigger, with naming conventions like prod-cache-v1-* reserved for release pipelines and pr-cache-v1-* for PR validation, enforced via workflow templates. No cross-trigger cache reuse, even if it costs compilation time. Cache keys that include a workflow identifier, not just dependency hashes. For critical builds, no cache at all, accepting the slower pipeline as the cost of provenance. For projects that must use caches, a periodic cache purge via the GitHub API to bound the window any single poisoning can survive.
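The periodic purge fits naturally into a small scheduled workflow. A sketch, with the schedule, scope, and pagination left as assumptions to tune:

```yaml
# Hypothetical weekly purge to bound how long any poisoned entry survives.
name: cache-purge
on:
  schedule:
    - cron: '0 3 * * 0'
jobs:
  purge:
    runs-on: ubuntu-latest
    permissions:
      actions: write        # required to delete cache entries
    steps:
      - run: |
          # Lists the first page of entries and deletes each by id;
          # loop over pages if the repository holds more than 100.
          gh api "/repos/${{ github.repository }}/actions/caches?per_page=100" \
            --jq '.actions_caches[].id' |
          while read -r id; do
            gh api -X DELETE "/repos/${{ github.repository }}/actions/caches/$id"
          done
        env:
          GH_TOKEN: ${{ github.token }}
```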
Organizationally, treat the cache tier as part of the build platform, not as a convenience of the runner. That means its configuration goes into version-controlled policy, not into individual workflow files, and changes to cache strategy go through the same review as changes to deployment targets.
How Safeguard.sh Helps
Cache poisoning is a reachability problem in disguise. The tampered content reaches your production artifact through an intermediate that your build attestation may not cover. Safeguard.sh's reachability analysis follows dependencies 100 levels deep into the graph and flags the actions, caches, and artifacts that your release pipelines actually consume, surfacing the real attack surface rather than the declared one. Griffin AI prioritizes CI/CD advisories based on whether your specific workflows and cache keys fit the disclosed exploitation pattern, cutting through advisory noise.
Eagle watches for new disclosures in the cache-poisoning class and enforces guardrails before the next pipeline run: blocking cross-trigger cache reuse, pinning marketplace actions to commit SHAs, and flagging workflows that read caches writable from pull_request contexts. Our continuous scanning catches drift when a cache configuration moves away from the hardened template. Container self-healing extends the same logic to any containerized build steps, automatically swapping out base images built from known-bad caches. For teams whose build pipelines are the actual production path, Safeguard.sh treats the cache tier as a first-class trust boundary.