
Inside PyPI Project Quarantine: How the Reversible Takedown Workflow Has Performed Since Launch

PyPI's Project Quarantine status, introduced in August 2024 and used roughly 140 times in its first year, replaces irreversible deletions with a reversible hidden state. Here is how the workflow operates and how to consume the signal.

Nayan Dey
Supply Chain Security Lead
6 min read

When the PyPI security and admin team published the original Project Quarantine announcement in August 2024, they were trying to solve a specific operational problem: the prior toolbox of options when a malicious project was reported was effectively "do nothing" or "delete completely." Deletion was disruptive, irreversible, and complicated the namespace because the project name was either gone or reusable in ways nobody wanted. Quarantine introduces a reversible lifecycle status with no install path, no public visibility, and the project owner locked out of modifications while administrators investigate. By the one-year mark in mid-2025, PyPI admins had quarantined roughly 140 projects, with the cadence accelerating during the November 2025 Shai-Hulud spillover into PyPI credentials. This post walks through the workflow from a defender's perspective: how it operates, what signal it produces, and how downstream consumers should treat it.

What did PyPI actually change in its takedown toolbox?

PyPI added a quarantined value to its project lifecycle states. A quarantined project is hidden from the Simple Index that pip and other installers consult, hidden from search and the project list pages, and uninstallable through the normal index endpoints. The project owner cannot publish new versions, edit metadata, or transfer ownership. The status is visible to PyPI administrators, the project owner (so they can see why their project disappeared), and security researchers who have been granted appropriate access. Crucially, the action is reversible: an admin can clear quarantine, re-quarantine, or escalate to full removal. Before Quarantine, the only fast option was to fully remove the project, which then required a name-reservation step to prevent a different bad actor from claiming the freed namespace. Now the namespace stays held while investigation finishes.
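
To make the "hidden from the Simple Index" behavior concrete, here is a minimal Python sketch. The `simple_index_status` and `classify` helpers are our own illustrative names, not PyPI tooling; the key assumption, consistent with the behavior described above, is that a project you previously resolved now answering 404 on its Simple Index page signals quarantine or removal (the index alone cannot tell you which).

```python
from urllib.error import HTTPError
from urllib.request import Request, urlopen


def simple_index_status(project: str, timeout: float = 10.0) -> int:
    """Return the HTTP status of a project's Simple Index page on pypi.org."""
    url = f"https://pypi.org/simple/{project}/"
    try:
        with urlopen(Request(url), timeout=timeout) as resp:
            return resp.status
    except HTTPError as err:
        return err.code


def classify(previously_seen: bool, status: int) -> str:
    """Map an index status to a lifecycle signal.

    A 404 for a project we have resolved before means it is now hidden
    (quarantined) or fully removed; a 404 for an unknown name proves nothing.
    """
    if status == 200:
        return "available"
    if previously_seen and status == 404:
        return "hidden-or-removed"
    return "unknown"
```

A mirror or lockfile auditor would call `simple_index_status` for each pinned name and alert on any `hidden-or-removed` classification.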

How is a project actually placed into quarantine?

Quarantine is currently administrator-initiated rather than fully automated, though the PyPI team has publicly described an automation roadmap. The usual triggers are a credible malware report through the security@pypi.org address, a third-party advisory from a scanning vendor such as Phylum, ReversingLabs, or Socket, or an OpenSSF malicious-packages pull request. An administrator reviews the report, can quarantine the project immediately, and then opens an internal investigation track. If the report turns out to be a false positive, the project is cleared, which is the workflow's main improvement over the deletion path. If the malicious nature is confirmed, the project transitions from quarantined to removed, and the project name is added to the reserved-name list so the slot cannot be reclaimed by the same threat actor. The PyPI blog notes that this lifecycle is described in their public administrator documentation; consumers should not expect an SLA, but reported takedown windows during 2025 incidents typically ranged from minutes for clear-cut cases to several hours for ambiguous reports.
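
The lifecycle above can be summarized as a small transition table. The state names here are our own shorthand for the described flow, not PyPI's internal identifiers:

```python
# Allowed transitions in the quarantine lifecycle as described above.
# "quarantined" can go back to "live" (false positive cleared) or on to
# "removed" (confirmed malicious, name moved to the reserved list).
TRANSITIONS = {
    "live": {"quarantined"},
    "quarantined": {"live", "removed"},
    "removed": set(),  # terminal: the name stays reserved
}


def can_transition(src: str, dst: str) -> bool:
    """True if the workflow described in the post permits src -> dst."""
    return dst in TRANSITIONS.get(src, set())
```

The reversibility is visible in the table: before Quarantine existed, the only edge from "live" was straight to "removed".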

What public signal does Quarantine produce for downstream consumers?

There are three useful signals. The first is index disappearance: a previously published project simply stops appearing in https://pypi.org/simple/<name>/, which an installer cache or air-gapped mirror will eventually pick up. The second is the project page state, which displays a clear notice that the project has been quarantined. The third, and most actionable for automation, is the PyPI JSON API, which downstream tooling can poll to detect when a project's metadata stops being served. Combined with the OpenSSF malicious-packages repository, which lists confirmed-bad packages across ecosystems, defenders have a feed they can subscribe to. For mirror operators, this gives a stronger story than the old "deletion replicated via metadata diff" model because the quarantine state itself is replicable.
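
A minimal watcher over the JSON API signal might look like the sketch below. It assumes, consistent with the disappearance signal described above, that a hidden project's metadata endpoint (https://pypi.org/pypi/<name>/json) stops returning 200; `project_visible` and `diff_watchlist` are illustrative helpers, not an existing tool.

```python
import json
from urllib.error import HTTPError
from urllib.request import urlopen


def project_visible(project: str, timeout: float = 10.0) -> bool:
    """True if the PyPI JSON API still serves metadata for the project."""
    try:
        with urlopen(f"https://pypi.org/pypi/{project}/json", timeout=timeout) as resp:
            json.load(resp)  # confirm the payload actually parses
            return True
    except HTTPError as err:
        if err.code == 404:
            return False
        raise  # other errors (rate limits, outages) should not read as takedowns


def diff_watchlist(watchlist, visible=project_visible):
    """Return names from a previously-visible watchlist that have disappeared."""
    return sorted(name for name in watchlist if not visible(name))
```

Running `diff_watchlist` on a cron schedule against your lockfile's package names turns the quarantine signal into an alert instead of a surprise at the next install.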

How do you verify and react in CI?

Two practical checks slot into a typical Python CI pipeline. The first is a freshness check against the OpenSSF malicious-packages feed before resolving dependencies, ensuring no pinned version in your lockfile has been listed since the last scan.

# Fetch the latest OpenSSF malicious packages list and fail the build
# if any pinned dependency appears
git clone --depth=1 https://github.com/ossf/malicious-packages /tmp/mal
python tools/check_lock_against_feed.py \
  --feed /tmp/mal \
  --lock poetry.lock \
  --ecosystem pypi

The second is a stricter resolver mode that refuses to install any package whose index metadata is missing or whose project page returns a non-active state. The pip flag --require-hashes does not handle this directly, but pip-audit run with its --strict flag will fail closed when a previously installed name is no longer resolvable. For operators of internal PyPI mirrors, the Bandersnatch project added support in 2024 for honoring upstream Quarantine states, so a mirror that had been pulling a now-quarantined package will stop offering it on its next sync.

pip-audit --strict --requirement requirements.txt

What policy gate catches this class of issue going forward?

The defender goal here is to make "package suddenly missing from PyPI" automatically gate a deploy rather than a quiet runtime failure. Three policy patterns close the loop. Gate one is "fail closed when the resolver cannot find a previously pinned package," because that almost always signals either quarantine or removal, both of which are events you want a human to acknowledge before the next deploy. Gate two is "subscribe to the OpenSSF malicious-packages feed and block CI on any new PyPI entry whose project name appears anywhere in transitive dependencies." Gate three is "delay installs of new versions of high-blast-radius packages for a soak window," paralleling the npm Shai-Hulud lesson; PyPI's typical quarantine response window of minutes-to-hours means a 24-hour soak gives the registry's own response a chance to fire before the new version reaches your build.
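
Gate three reduces to a pure decision function over the upload timestamps the PyPI JSON API exposes per release (the upload_time_iso_8601 fields in the "urls" list of https://pypi.org/pypi/<name>/<version>/json). `soak_elapsed` below is an illustrative helper, not an existing tool:

```python
from datetime import datetime, timedelta, timezone


def soak_elapsed(upload_times_iso, soak=timedelta(hours=24), now=None) -> bool:
    """True once every file of a release has been public for the soak window.

    upload_times_iso: ISO-8601 strings as returned by the PyPI JSON API
    release endpoint. An empty list (no files) fails closed.
    """
    now = now or datetime.now(timezone.utc)
    uploads = [
        datetime.fromisoformat(t.replace("Z", "+00:00")) for t in upload_times_iso
    ]
    return bool(uploads) and all(now - t >= soak for t in uploads)
```

A deploy gate would call this before promoting any newly resolved version of a high-blast-radius dependency, giving PyPI's minutes-to-hours quarantine response a chance to fire first.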

What is the PyPI team's planned next step?

The blog post and the subsequent November 2025 Shai-Hulud advisory both describe an evolution toward automatable quarantine. The proposal is that a project accumulating multiple credible malware reports from a defined set of trusted scanners could be auto-quarantined pending admin review, instead of relying on an admin's interactive decision for each case. This is a careful design problem: false positives would silently break legitimate projects, and an attacker who can submit reports could weaponize the system against rivals. The PyPI team has been explicit that the trust list for auto-triggering will be small and that ambiguous cases will continue to go through human review.

How Safeguard Helps

Safeguard's malicious-package feed ingests PyPI's quarantine and removal signals, the OpenSSF malicious-packages repository, and a curated set of commercial threat-intel feeds, then cross-references them against every Python project in your tenant in real time. When a transitive dependency is quarantined upstream, Safeguard raises a finding tied to every product that depends on it and shows the path from your lockfile to the affected package within minutes. Policy gates can be configured to fail builds on quarantine, fail on resolver disappearance even before a public flag, or enforce a soak window for new versions of selected critical packages. The provenance verification engine consumes the PEP 740 attestations PyPI now supports for Trusted Publishing, giving downstream consumers the same chain-of-custody story they already get for npm, so when the next cross-ecosystem worm spills credentials into PyPI, the response window is measured in minutes inside your pipeline rather than the day-plus often seen in 2025.
