Supply Chain Attacks

OSS Maintainer Account Takeover Trends 2025

A senior engineer's breakdown of how maintainer account takeovers evolved in 2025, from phishing kits targeting PyPI to session token theft on GitHub and npm.

Shadab Khan
Security Engineer
7 min read

Maintainer account takeover is the cleanest attack path in the entire open source supply chain. You do not need a zero-day, you do not need to socially engineer a company's staff, and you do not need to compromise a build system. You compromise one person, push one version, and let dependency resolvers deliver your code to millions of machines. 2025 was the year this pattern stopped being opportunistic and became industrialized.

What made 2025 different from previous account takeover years?

Previous years saw isolated, high-profile account takeovers like ua-parser-js in 2021 or coa and rc shortly after. 2025 was different because the campaigns were multi-registry, automated, and persistent. Attackers ran phishing infrastructure that targeted hundreds of maintainers across npm, PyPI, and RubyGems within the same weeks. The Phylum and Socket threat intel teams documented overlapping indicators across registries, suggesting shared tooling or even shared operators.

The second shift was tempo. Historical takeovers produced a single malicious version that got reverted within hours. 2025 campaigns produced staggered publishes across multiple compromised accounts over days, with each new publish using slightly different obfuscation. That made signature-based detection nearly useless and forced defenders to rely on behavioral indicators.

The third shift, and the one most engineers underestimated, was that the phishing pages were credible. They rendered real-looking 2FA prompts, they mirrored current UI changes within days of the real registry pushing updates, and they captured TOTP codes in real time for immediate replay.

How did the PyPI phishing wave in mid-2025 actually work?

The PyPI campaign that peaked in June and July 2025 used typosquatted domains like pypj.org and pypi-login.com, with links delivered by direct email to maintainers listed in package metadata. The emails impersonated PyPI security team notifications warning about account review or policy updates. The landing page captured the username, password, and TOTP, then immediately proxied the credentials to the real PyPI to establish an authenticated session. From there, the attacker minted new API tokens on the account and used them to publish new versions.

The technique itself is not novel; it is a standard adversary-in-the-middle pattern. What made it effective was operational discipline. Attackers did not immediately publish malware. They waited days to weeks, monitored the account for activity, and then pushed a single malicious version timed to a release window when the maintainer was likely to have recently published legitimately. That timing made the malicious publish look normal in release history.

PyPI responded by accelerating mandatory 2FA enforcement and by rolling out trusted publishers (OIDC-backed publishing from CI). Both are improvements. Neither fully addresses the pattern, because a compromised maintainer session can still trigger a legitimate CI publish.
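To make the trusted publishers model concrete, here is a minimal GitHub Actions sketch of OIDC-backed publishing to PyPI. Workflow, job, and environment names are illustrative, and the sketch assumes the project has already registered this repository and workflow as a trusted publisher on PyPI:

```yaml
# .github/workflows/release.yml -- names are illustrative
name: release
on:
  release:
    types: [published]

jobs:
  pypi-publish:
    runs-on: ubuntu-latest
    environment: release        # gate publishes behind an approval-protected environment
    permissions:
      id-token: write           # required for the OIDC token exchange
      contents: read
    steps:
      - uses: actions/checkout@v4
      - run: python -m pip install build && python -m build
      - uses: pypa/gh-action-pypi-publish@release/v1  # exchanges OIDC for a short-lived publish token; no stored secret
```

The point of the pattern is that there is no long-lived API token to steal; the publish credential exists only for the duration of the CI run. The residual risk noted above remains: whoever can modify this workflow file can still publish.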

Why did session token theft surpass credential phishing on npm?

By the second half of 2025, the dominant npm takeover vector was session token theft rather than password phishing. Attackers compromised maintainer developer machines through malicious VS Code extensions, trojanized CLI tools, and poisoned browser extensions, then exfiltrated the .npmrc tokens and the browser cookies for npmjs.com. With those in hand, there was no login page to phish, no 2FA to bypass. The attacker just presented a valid session.

This shift matters because the industry had invested heavily in 2FA as the primary defense. 2FA addresses password-based compromise. It does not address a stolen bearer token sitting in ~/.npmrc or a session cookie lifted from a developer's laptop. The attack surface quietly migrated from the login flow to the post-login artifacts, and most organizations were still writing policy against the old flow.

The mitigation is short-lived tokens and device-bound credentials. npm supports granular tokens with expiration, and sigstore-based provenance ties publishes to specific workflows. Organizations that enforced token scoping and workflow-only publishing in 2025 were largely untouched by this wave.
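The post-login artifacts are easy to audit for. Below is a minimal sketch that flags registry auth tokens sitting in an .npmrc file; the line format matches npm's standard `_authToken` convention, and the classic-vs-granular distinction (bare token vs an `npm_` prefix) is an assumption worth verifying against your registry:

```python
import re

# .npmrc auth lines look like: //registry.npmjs.org/:_authToken=npm_xxxxxxxx
AUTH_LINE = re.compile(r"^\s*//(?P<registry>[^:]+):_authToken=(?P<token>\S+)")

def find_auth_tokens(npmrc_text: str) -> list[dict]:
    """Return every registry auth token found in an .npmrc body."""
    findings = []
    for line in npmrc_text.splitlines():
        m = AUTH_LINE.match(line)
        if m:
            findings.append({
                "registry": m.group("registry"),
                # assumption: modern npm tokens carry an npm_ prefix; anything
                # else is treated as a legacy token and prioritized for rotation
                "legacy_token": not m.group("token").startswith("npm_"),
            })
    return findings
```

Run something like this across developer laptops; every hit is a candidate for rotation into scoped, short-lived CI credentials.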

How did GitHub compromise feed into registry takeovers?

A pattern we observed repeatedly in 2025 was the GitHub-to-registry pivot. Attackers compromised a maintainer's GitHub account, not to inject code directly (which would be reviewed), but to modify GitHub Actions workflows that published to npm or PyPI. A single-character change to an Actions workflow, pushed to a quiet branch and then merged through a fake second account owned by the same attacker, was enough to trigger a poisoned release.

GitHub compromise in 2025 leaned heavily on personal access tokens leaked through dependency chains. Developer machines running infected packages leaked GitHub tokens, which were used to push workflow changes on repositories where the developer was a maintainer. The chain was malicious npm package -> developer machine -> GitHub token -> workflow change -> poisoned release. Five links, any of which a defender could have interrupted if they were watching, but most were not.
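Interrupting the chain at the workflow-change link is largely a matter of looking. Here is a rough sketch that scans a unified diff for added lines in GitHub Actions workflow files that reference publish credentials or publish commands; the path convention and marker strings are assumptions to tune for your environment:

```python
# Flag added diff lines that touch GitHub Actions workflows and reference
# publishing -- the link where 2025 GitHub-to-registry pivots lived.
SUSPICIOUS_MARKERS = ("secrets.", "npm publish", "twine upload", "pypi-publish")

def flag_workflow_changes(unified_diff: str) -> list[str]:
    findings = []
    current_file = None
    for line in unified_diff.splitlines():
        if line.startswith("+++ b/"):
            current_file = line[len("+++ b/"):]
        elif (line.startswith("+") and current_file
              and current_file.startswith(".github/workflows/")):
            if any(marker in line for marker in SUSPICIOUS_MARKERS):
                findings.append(f"{current_file}: {line[1:].strip()}")
    return findings
```

Wired into a pre-merge check or an audit of recent merges, any non-empty result is a prompt for a human to look at who changed the publish path and why.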

What role did AI-generated phishing play in 2025 campaigns?

Generative AI did not create these campaigns, but it industrialized them. Phishing emails in 2024 often had telltale phrasing inconsistencies. 2025 phishing was indistinguishable from legitimate registry communication because attackers used LLMs to generate copy that matched the target registry's house style, often by scraping years of legitimate security announcements as training context.

More consequentially, LLMs were used to personalize phishing at scale. Instead of generic "your account needs review" messages, maintainers received emails referencing their recent package versions, recent issues filed, and even recent commit messages, all pulled from public GitHub and registry data. This dramatically raised click-through rates on the initial phishing page. Research from several vendor threat teams suggested click rates in 2025 were 3 to 5 times higher than in equivalent 2023 campaigns.

What should organizations do differently in 2026?

Three changes matter. First, treat every dependency's maintainer as a potential compromise vector, not just your own developers. Track which of your dependencies are single-maintainer, and apply additional scrutiny, sandboxing, or pinning to those. Second, enforce trusted publishing wherever supported. PyPI, npm, and RubyGems all now offer workflow-based publishing that ties a release to a specific CI pipeline rather than a bearer token. Adopt it for your own packages and prefer dependencies that have adopted it. Third, monitor behavioral diffs across releases. A takeover release almost always introduces new network endpoints, new filesystem writes, or new child processes that did not exist in the previous version. That is a tractable signal.
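A full behavioral diff requires sandboxed execution, but even a crude static approximation catches a lot. This sketch compares two npm package.json manifests and reports newly added or changed lifecycle scripts, the hook that takeover releases frequently introduce (a new postinstall, for example); treat the field names as the standard npm manifest shape, not a complete detector:

```python
# Crude static approximation of release behavioral diffing: compare two
# npm manifests and report new or changed lifecycle scripts.
LIFECYCLE = ("preinstall", "install", "postinstall", "prepare")

def new_lifecycle_scripts(old_manifest: dict, new_manifest: dict) -> dict:
    """Return lifecycle scripts present in the new release but not the old."""
    old_scripts = old_manifest.get("scripts", {})
    new_scripts = new_manifest.get("scripts", {})
    return {
        name: body
        for name, body in new_scripts.items()
        if name in LIFECYCLE and old_scripts.get(name) != body
    }
```

A non-empty result on a routine version bump is exactly the behavioral indicator the article describes: something the previous version never did.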

For internal operations, rotate npm and PyPI tokens on a short cycle, scope them to specific packages, and never store long-lived publish tokens on developer laptops. Route publishes through CI exclusively, and require human approval on publish workflows for high-impact packages.

The single highest-leverage control for most organizations is simply knowing which of your dependencies are single-maintainer packages and applying extra scrutiny there. That one inventory step changes how you triage a takeover alert, because the question stops being "should I care" and becomes "how fast can I pin." It also reframes dependency selection: when choosing between a single-maintainer library and an equivalent library backed by a foundation or a multi-maintainer org, the multi-maintainer option deserves a presumption of stability.
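The inventory step itself is small once you have maintainer lists. This sketch partitions dependencies by maintainer count; it assumes you have already extracted maintainer usernames from each package's registry metadata (the input shape here is illustrative, not a registry API):

```python
def partition_by_maintainers(deps: dict[str, list[str]]) -> tuple[list[str], list[str]]:
    """Split dependencies into (single_maintainer, multi_maintainer) name lists.

    `deps` maps package name -> maintainer usernames, e.g. pulled from
    registry metadata during a periodic inventory job.
    """
    single = sorted(name for name, m in deps.items() if len(m) <= 1)
    multi = sorted(name for name, m in deps.items() if len(m) > 1)
    return single, multi
```

The single-maintainer list is the one that gets extra pinning, sandboxing, and a faster triage path when a takeover alert lands.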

Incident response planning is the other area that separates prepared organizations from unprepared ones. A takeover incident has a compressed timeline: minutes to hours between publication and widespread download, with revert times dependent on registry response. Teams that have rehearsed the steps (identify the affected package, pin to a known-good version across all repos, purge caches, notify downstream consumers) execute in minutes. Teams that are improvising take hours, and hours is enough for production exposure.
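The pin-to-known-good step is worth scripting before the incident. A simplified sketch for requirements.txt-style files follows; real requirement specifiers have more forms (extras, markers, hashes) than this regex handles, so treat it as a starting point:

```python
import re

def pin_requirement(requirements_text: str, package: str, known_good: str) -> str:
    """Rewrite any versioned requirement line for `package` to an exact == pin."""
    pattern = re.compile(
        rf"^{re.escape(package)}\s*(==|>=|<=|~=|>|<)\s*\S+",
        re.MULTILINE,
    )
    return pattern.sub(f"{package}=={known_good}", requirements_text)
```

Applied across every repo from a central job, this turns "how fast can I pin" into a single command plus a cache purge.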

How Safeguard.sh Helps

Safeguard.sh addresses maintainer account takeover through behavioral diffing and provenance verification, not through trust assertions that an attacker can forge. Our AI-BOM continuously inventories every direct and transitive dependency, and Griffin AI flags versions whose runtime behavior, network destinations, or filesystem access diverges from the package's established pattern. We verify sigstore provenance and model signing/attestation where available, and our 100-level depth reachability analysis ensures a compromised dependency four layers deep is not overlooked because it is "transitive." Lino compliance enforces your policy on single-maintainer packages, trusted publisher requirements, and dependency freshness windows, and container self-healing lets you roll back to a known-good image when an upstream takeover makes it past your first line of defense. The result is that an industrialized phishing campaign stops being a crisis for your release pipeline.
