npm's unpublish policy has been a live issue since Azer Koçulu unpublished left-pad in March 2016 and broke, by some estimates, tens of thousands of downstream builds over a weekend. The registry's response, tightening unpublish rules to a 72-hour window with exceptions, has held up reasonably well against accidental breakage. What the 72-hour rule does not fully address is a smaller, more adversarial question: can an attacker reclaim an unpublished, deleted, or garbage-collected name and ship a new package under it that downstream systems will consume as though nothing changed?
2025 research from several independent groups says yes, under specific conditions, and shows that the attack class is broader than the classic repository-squatting story. This post reviews the research, the npm-specific mechanisms it exploits, the related attacks against other registries, and the defenses that actually work.
What Does npm's Lifecycle Actually Look Like?
An npm package passes through several states, each with different attacker-relevant rules. Published packages can be unpublished by their owner within 72 hours of publish, unless they have dependents above a threshold, in which case the unpublish requires npm support involvement. Unpublished packages cannot have their exact version number reused by the same owner, to prevent the original left-pad scenario of an unpublish followed by a malicious republish of the same version. Package names return to availability after 24 hours in the typical case, earlier in some edge cases, and, once the registry's internal garbage collection has run, the name is fully available for claim by any new maintainer.
Tarballs themselves are retained in the registry's CDN cache for a period that is not fully public, but empirical testing by researchers in 2024 and 2025 shows they persist at predictable CDN URLs for some time after unpublish. Security advisories can be filed against unpublished packages, and those advisories still surface in npm audit output for anyone whose lockfile or local cache still references the tarball.
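The predictability of those URLs is what makes cached tarballs discoverable after unpublish. A minimal sketch of the conventional registry tarball path (the `tarball_url` helper is mine, not an npm API; the path convention matches what lockfiles record in their `resolved` fields):

```python
def tarball_url(name: str, version: str,
                registry: str = "https://registry.npmjs.org") -> str:
    """Build the conventional registry tarball URL for a package version.

    Scoped packages keep the scope in the URL path but drop it from the
    tarball filename itself.
    """
    basename = name.split("/")[-1]      # "@types/node" -> "node"
    return f"{registry}/{name}/-/{basename}-{version}.tgz"

print(tarball_url("left-pad", "1.3.0"))
# https://registry.npmjs.org/left-pad/-/left-pad-1.3.0.tgz
```

Anyone who recorded this URL before an unpublish can keep probing it directly, independent of whether the registry metadata still lists the version.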
What Is the Garbage Collection Abuse Primitive?
The primitive is this: a malicious actor registers a package name that was formerly owned by a legitimate publisher, deleted via unpublish or account closure, and now available again. If downstream projects still reference that name — because they have the package in their lockfile, because they restore from a cached tarball, or because a transitive dependency still lists it as a peer or optional dependency — a new publish under that name can reach those projects on their next install.
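One way to check whether the primitive applies to a given project is to scan the lockfile for any remaining references to the reclaimed name, direct or transitive. A sketch against npm's v2/v3 package-lock.json "packages" layout (`references` is an illustrative helper, not a published tool):

```python
import json

def references(lockfile_text: str, name: str) -> list:
    """List lockfile paths that still reference `name`, either as an
    installed entry or as a declared (including peer) dependency."""
    lock = json.loads(lockfile_text)
    hits = []
    for path, entry in lock.get("packages", {}).items():
        if path.endswith(f"node_modules/{name}"):
            hits.append(path)                       # installed on disk
        declared = {**entry.get("dependencies", {}),
                    **entry.get("peerDependencies", {})}
        if name in declared:
            hits.append(path or "<root>")           # still wanted by someone
    return hits

lock = json.dumps({"packages": {
    "": {"dependencies": {"left-pad": "^1.3.0"}},
    "node_modules/left-pad": {"version": "1.3.0"},
    "node_modules/line-numbers": {"dependencies": {"left-pad": "^1.0.0"}},
}})
print(references(lock, "left-pad"))
# ['<root>', 'node_modules/left-pad', 'node_modules/line-numbers']
```

Any non-empty result means a republished name can re-enter the graph on the next install or upgrade.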
The 2025 research emphasizes three variants. First, the classic name-reclaim: a popular package is unpublished by its maintainer, and within hours an attacker claims the name. Second, the deprecated-rediscovery: a package is deprecated but not unpublished, the legitimate maintainer's npm token is compromised, and the attacker publishes a version that still satisfies downstream semver ranges. Third, the scoped-package takeover: an npm organization is abandoned, its scope becomes reclaimable under specific administrative paths, and the attacker reclaims the scope to ship any package previously published under it.
Which Public Incidents Fit the Pattern?
The most cited recent incident is the chalk and debug compromise in September 2025, where Aikido Security, Socket, and others tracked a phishing campaign that obtained the npm token of the maintainer of several extremely popular packages. The attacker published new versions of chalk, debug, and related packages containing a payload that targeted cryptocurrency wallets by intercepting and rewriting transactions in the browser. The incident was not a garbage-collection abuse in the strictest sense (the maintainer's account was still active), but the dependency graph impact was the same as a name-reclaim attack: downstream projects installing on their next update pulled the malicious version.
The broader attack surface the researchers highlight includes left-pad-style unpublish-and-reclaim beyond npm: Socket and JFrog documented equivalents against PyPI, RubyGems, and crates.io through 2024 and 2025. The attack class is cross-registry.
The 2022 ctx and phpass incidents, in which the original maintainers lost control of their projects (ctx on PyPI, phpass on Packagist) to an attacker who re-registered the expired domain behind the maintainer's contact email, are a reminder that registry operators generally federate trust to email providers in ways that create additional takeover paths.
What About Tarball Retention?
Tarball retention creates a secondary attack primitive. If a malicious tarball was briefly published and then unpublished, its contents may remain accessible at the registry's CDN URL for a period. 2025 research showed that in certain npm replica configurations, CDN-cached tarballs remained reachable for days after unpublish: the registry metadata no longer listed the version, but the tarball still served on a direct GET. Automated systems that cache tarballs by URL rather than by version lookup can pick up the unpublished-but-cached tarball and mirror it forward into offline build environments.
The defensive implication is that any organization running an internal mirror of npm should not trust the mirror to have received only currently-valid packages. The mirror's tarball store reflects whatever the public CDN served at the time of sync, including briefly-published malicious versions. An internal audit pipeline should re-validate tarballs against the authoritative registry's current metadata, not just against the mirror's local copy.
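That re-validation can start as a simple drift check: anything the mirror holds that the authoritative registry no longer lists is a candidate for a briefly-published payload. A sketch with mocked packument data (`mirror_drift` is a hypothetical helper, and real packuments carry far more fields than shown):

```python
def mirror_drift(mirror_inventory: list, upstream_packuments: dict) -> list:
    """Return (name, version) pairs the mirror holds but the authoritative
    registry no longer lists in its version metadata."""
    stale = []
    for name, version in mirror_inventory:
        listed = upstream_packuments.get(name, {}).get("versions", {})
        if version not in listed:
            stale.append((name, version))
    return stale

inventory = [("chalk", "5.3.0"), ("briefly-evil", "1.0.0")]
upstream = {"chalk": {"versions": {"5.3.0": {}}}}   # mocked current metadata
print(mirror_drift(inventory, upstream))
# [('briefly-evil', '1.0.0')]
```

In a real pipeline the upstream side would be fetched live from the registry; flagged pairs should be quarantined, not merely logged.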
How Do Lockfiles Interact With the Attack?
Lockfiles are the saving grace for many projects and the undoing for others. A package-lock.json pins a specific version and tarball integrity hash. If the tarball's contents change under the same version number, the integrity hash fails and npm refuses the install. This defends effectively against the simplest variant, where an attacker tries to republish a different payload under a version number already consumed by downstream.
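The check itself is easy to reproduce: the lockfile stores a Subresource Integrity string, the sha512 of the tarball bytes, base64-encoded. A minimal sketch of the comparison npm performs at install time (simplified; npm's real implementation lives in its ssri and cacache machinery):

```python
import base64
import hashlib

def sri_sha512(data: bytes) -> str:
    """Produce the SRI-format string stored in lockfile `integrity` fields."""
    digest = hashlib.sha512(data).digest()
    return "sha512-" + base64.b64encode(digest).decode()

def integrity_ok(tarball_bytes: bytes, lock_integrity: str) -> bool:
    """The essence of the install-time check: recompute and compare."""
    return sri_sha512(tarball_bytes) == lock_integrity

pinned = sri_sha512(b"original tarball bytes")    # recorded at first install
assert integrity_ok(b"original tarball bytes", pinned)
assert not integrity_ok(b"attacker-modified bytes", pinned)
```

The pin is only as strong as the moment it was recorded: it proves the bytes have not changed since first install, not that the first install was clean.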
Lockfiles do not help when the tarball is newly added to the graph: a dependency upgrade, a new feature that pulls in a new transitive dependency, a new team member generating a fresh lockfile. They also do not help when the integrity hash uses a weakened algorithm; npm has transitioned from sha1 to sha512, but historical lockfiles may still contain sha1 entries, and sha1 collisions are within reach of a well-resourced attacker. And they do not help against optional peer dependencies, which many toolchains resolve at install time without strict lock enforcement.
The practical recommendation in 2026 remains: regenerate lockfiles against current registry state on every dependency change; treat any lockfile check-in that changes an integrity hash as security-sensitive and requiring review; and scan the lockfile against an advisory feed on every CI run.
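The review gate for integrity-hash changes can be automated in CI. A sketch that diffs two lockfiles and flags both changed hashes and lingering sha1 entries (`review_flags` is illustrative, not a published tool):

```python
import json

def integrity_map(lock_text: str) -> dict:
    """Map lockfile package paths to their integrity strings."""
    lock = json.loads(lock_text)
    return {p: e["integrity"] for p, e in lock.get("packages", {}).items()
            if e.get("integrity")}

def review_flags(old_text: str, new_text: str) -> list:
    """Flag integrity changes between two lockfiles, plus any entries
    still pinned with a weak sha1 digest."""
    old, new = integrity_map(old_text), integrity_map(new_text)
    flags = []
    for path, digest in new.items():
        if path in old and old[path] != digest:
            flags.append(f"CHANGED integrity: {path}")
        if digest.startswith("sha1-"):
            flags.append(f"WEAK sha1 integrity: {path}")
    return flags

old_lock = json.dumps({"packages": {
    "node_modules/a": {"integrity": "sha512-AAA"},
    "node_modules/b": {"integrity": "sha1-OLD"},
}})
new_lock = json.dumps({"packages": {
    "node_modules/a": {"integrity": "sha512-BBB"},   # hash changed: review!
    "node_modules/b": {"integrity": "sha1-OLD"},     # weak algorithm
}})
print(review_flags(old_lock, new_lock))
# ['CHANGED integrity: node_modules/a', 'WEAK sha1 integrity: node_modules/b']
```

Wired into a merge check, a non-empty flag list blocks the pull request until a human confirms the hash change is expected.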
What Role Does npm's Provenance Statement Play?
npm's provenance statement feature, launched in 2023 and expanded through 2024, enables packages built on GitHub Actions to include a Sigstore-backed attestation of their source repository and build workflow. A consumer can verify that a package came from a specific repo's main branch, via a specific workflow, and reject tarballs that don't attest. For garbage-collection abuse scenarios, provenance is a strong defense when it is enforced: an attacker who reclaims a name cannot re-create provenance attesting to the original repo unless they also control that repo.
Adoption is the challenge. OpenSSF's 2025 provenance-adoption survey showed fewer than 8% of top-1000 npm packages consistently publishing provenance. Consumer-side enforcement is rarer still; the tooling exists but is not wired into default npm install flows. For security-critical dependencies, building a provenance-enforcement policy into CI is the high-leverage fix.
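A consumer-side policy can be as simple as refusing critical dependencies that lack an attestation. The sketch below assumes the registry's version metadata carries an `attestations` entry under `dist` when a package was published with provenance; the exact field shape may differ across registry versions, and `npm audit signatures` is the supported verification path:

```python
def provenance_gate(name: str, version_meta: dict, critical: set) -> tuple:
    """Deny critical dependencies that carry no provenance attestation.

    `version_meta` is one entry from the registry's version metadata;
    the dist.attestations field shape here is an assumption.
    """
    has_attestation = "attestations" in version_meta.get("dist", {})
    if name in critical and not has_attestation:
        return (False, f"{name}: critical dependency without provenance")
    return (True, "ok")

critical = {"chalk", "debug"}
attested = {"dist": {"tarball": "https://example.invalid/chalk.tgz",
                     "attestations": {"url": "https://example.invalid/att"}}}
bare = {"dist": {"tarball": "https://example.invalid/other.tgz"}}
print(provenance_gate("chalk", bare, critical))
# (False, 'chalk: critical dependency without provenance')
```

The point of the `critical` set is the bypass process the text mentions: packages in the pre-provenance tail stay outside the set until their maintainers publish attestations.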
Which Operational Controls Actually Hold Up?
The controls that I see working in production in 2026: internal mirrors with strict allow-lists and automatic advisory gates, so that a newly published version cannot reach developers until it passes policy; lockfile-commit review that flags integrity-hash changes as sensitive; provenance enforcement at install time for the most critical dependencies, with a registered bypass process for the pre-provenance tail; periodic re-validation of installed dependencies against current registry metadata, to detect cases where a package has been unpublished or changed ownership since install; and short-lived npm tokens with aggressive rotation, so that a chalk-style phishing scenario has a small window.
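The periodic re-validation control can be sketched as a comparison of installed versions and a recorded maintainer baseline against current packuments (mocked here; the `maintainers` field is part of the public packument, but `revalidate` and the baseline format are mine):

```python
def revalidate(installed: dict, baseline_maintainers: dict,
               packuments: dict) -> list:
    """Flag installed versions that have vanished upstream, and packages
    whose maintainer set differs from a previously recorded baseline."""
    alerts = []
    for name, version in installed.items():
        doc = packuments.get(name, {})
        if version not in doc.get("versions", {}):
            alerts.append(f"{name}@{version}: no longer published")
        current = {m["name"] for m in doc.get("maintainers", [])}
        if current != baseline_maintainers.get(name, current):
            alerts.append(f"{name}: maintainer set changed")
    return alerts

installed = {"left-pad": "1.3.0", "chalk": "5.3.0"}
baseline = {"left-pad": {"original-owner"}, "chalk": {"original-owner"}}
packuments = {  # what the registry reports today (mocked)
    "chalk": {"versions": {"5.3.0": {}},
              "maintainers": [{"name": "new-claimant"}]},
    # left-pad absent entirely: unpublished and garbage-collected
}
print(revalidate(installed, baseline, packuments))
```

Either alert type, a vanished version or a changed maintainer set, is exactly the precondition for a reclaim, so both should page rather than merely log.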
At the organization layer, treat every dependency you pull as a liability you accepted, not just a feature you added. The teams that came out of the 2024-2025 registry incidents cleanest were the ones that had an inventory of what they depended on, and on whom, and could answer the question "do we use this maintainer's packages" in minutes rather than days.
How Safeguard.sh Helps
npm garbage-collection abuse and the broader registry-trust story are reachability problems. Your dependency graph decides whether a newly reclaimed or re-poisoned package actually reaches your running code, and that graph is rarely what the lockfile alone says. Safeguard.sh's reachability analysis traces 100 levels deep into your transitive dependencies and answers "does my code path actually execute this package" with specific call-graph evidence, so a reclaimed-name advisory against a package you import but never execute is correctly deprioritized, while a reclaimed-name advisory against a function you call in your hot path is escalated with context.
Griffin AI correlates registry-level signals — unpublish events, maintainer-account changes, provenance gaps, advisory filings — against your exact inventory. Eagle provides continuous scanning that re-evaluates your dependency graph against current registry state on every scheduled cycle, catching the windows when an unpublish-and-reclaim has happened and your lockfile has not yet been regenerated. Our guardrails enforce provenance policies at install time and at CI time, blocking un-attested versions from reaching production for critical dependency tiers. Container self-healing rebuilds images against clean, current dependency state. For teams whose supply-chain posture depends on not being surprised by a reclaimed name, Safeguard.sh is the early-warning layer.