When the September and November 2025 Shai-Hulud waves hit npm, a quiet operational fact became uncomfortably visible: most enterprises do not pull directly from registry.npmjs.org. They pull through an internal mirror, typically Sonatype Nexus Repository, JFrog Artifactory, GitHub Packages, or one of a small set of cloud-native artifact stores. When the upstream registry quarantined the malicious versions, the mirrors continued to serve cached copies until the next metadata sync. This pass-through gap, measured in hours during the September wave, became one of the most talked-about hardening priorities for 2026. This post walks through what Sonatype, JFrog, and Harness shipped through late 2025 and early 2026 to close it, and what defender configuration matters most.
What did the private registry vendors change?
Three vendor shifts stand out. First, Sonatype Nexus Firewall continued its long-running policy-engine investment with explicit support for the OpenSSF malicious-packages feed, real-time blocking based on upstream quarantine signals, and an automated-quarantine state that mirrors what PyPI introduced server-side. Nexus Firewall integrates with both Nexus Repository and JFrog Artifactory, so organizations that standardized on JFrog can still consume Sonatype's malicious-package intelligence. Second, JFrog's Curation product, which sits in front of Artifactory's remote repositories, added blocking based on threat-intel feeds, vulnerability thresholds, and license policies with per-tenant rule sets. Third, Harness Artifact Registry, launched in late 2024 and matured through 2025, shipped supply-chain governance features including SBOM-time signing, provenance verification, and policy-driven blocking that align with the OpenSSF principles document.
How was the response coordinated between vendors and the upstream registries?
The November Shai-Hulud wave produced an unusually rich set of vendor coordination examples. Sonatype's threat-research team and JFrog's security research team both contributed IOCs to the OpenSSF malicious-packages repository within hours of detection, and both vendors published their threat-intel updates as feeds that their customers' internal registries consumed automatically. Cloudsmith published a multi-ecosystem detection write-up covering simultaneous compromises across PyPI, npm, and RubyGems. The OpenSSF Securing Software Repositories Working Group ran a coordination call during the November wave to align messaging between registry operators and the commercial mirror vendors. The defender-relevant pattern is that the private-registry vendors now function as a real-time replication layer for upstream takedown decisions rather than as a passive cache.
What signals are now visible inside a hardened private registry?
Four signals are useful to defenders. The first is the quarantine state on individual versions, mirrored from the upstream registry where applicable, with mirror-local policy that can extend or override the upstream decision. The second is provenance attestation pass-through: both Sonatype and JFrog updated their remote-repository handlers in 2025 to preserve Sigstore attestations from upstream, so internal consumers see the same provenance signal that an external npm audit signatures run would surface. The third is the policy evaluation log, which records every block decision and the rule that fired, giving defenders an audit trail when investigating "why did this build fail?" The fourth is the SBOM-generation step, integrated into both Nexus and Artifactory pipelines, which records the full set of artifacts a build consumed from the mirror for retrospective analysis after an incident.
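The policy evaluation log, signal three, is the one defenders end up querying during an incident. A minimal sketch of that query, assuming a JSON-lines export format; the real field names and export mechanism vary by vendor:

```shell
# Fabricated policy-evaluation log in JSON-lines form. Real Nexus Firewall
# and JFrog Curation logs carry the same information under different names.
cat > policy-eval.log <<'EOF'
{"ts":"2025-11-24T10:02:11Z","package":"left-pad","version":"9.9.9","decision":"block","rule":"block-malicious-npm"}
{"ts":"2025-11-24T10:02:12Z","package":"lodash","version":"4.17.21","decision":"allow","rule":"-"}
EOF

# "Why did this build fail?": every block decision plus the rule that fired.
grep '"decision":"block"' policy-eval.log
```

The same one-liner answers the audit question retroactively, provided log retention covers the incident window.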
How do you configure a private registry to actually use these features?
The high-leverage configuration sits in three places: the upstream-feed integration, the per-repository policy, and the consumer-side hash and signature verification. The exact UI varies by vendor, but the policy intent is the same.
```
# Sonatype Nexus Firewall, example npm policy (illustrative)
policy "block-malicious-npm" {
  format = "npm"
  source = "ossf-malicious-packages"
  action = "quarantine-on-match"
  notify = ["secops@example.com"]
}

policy "require-provenance-tier1" {
  format     = "npm"
  applies-to = "package in tier1-allowlist.txt"
  require    = "sigstore-attestation:present"
  action     = "block-if-missing"
}
```
For JFrog Curation, equivalent rules are expressed through the Curation policy editor, with feeds for malicious packages, CVE thresholds, and license policies. On the consumer side, the npm CLI's npm audit signatures command and Sigstore's cosign verify-blob-attestation work against mirrored artifacts exactly as they do against direct registry pulls, which means existing CI verification scripts do not need rewriting; they just point at the mirror.
```shell
# Verify mirror-served provenance the same way as a direct registry pull
npm config set registry https://nexus.internal.example.com/repository/npm-proxy/
npm install --ignore-scripts
npm audit signatures
```
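Alongside signature checks, the lockfile's integrity pins give a vendor-neutral cross-check that the mirror served the exact bytes upstream published. A minimal offline sketch of recomputing an npm-style sha512 pin; the tarball here is a stand-in, not a real package:

```shell
# Offline sketch: npm lockfiles pin tarballs as "sha512-<base64 digest>";
# the same value can be recomputed from whatever bytes the mirror served.
printf 'example tarball bytes' > pkg-1.0.0.tgz   # stand-in artifact
PIN="sha512-$(openssl dgst -sha512 -binary pkg-1.0.0.tgz | base64 | tr -d '\n')"
echo "$PIN"
# A CI gate would compare $PIN against the integrity field in
# package-lock.json and fail the build on any mismatch.
```

Because the digest is computed over the served bytes, this catches a tampered cached copy even when metadata and version numbers look clean.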
What policy gate catches the next mirror pass-through gap?
Three gates close the residual risk. Gate one is "subscribe the mirror to the OpenSSF malicious-packages feed with a configured maximum staleness," ensuring the gap between upstream quarantine and mirror block is bounded in minutes rather than hours. Gate two is "block any artifact whose upstream version was quarantined within a configured lookback window, regardless of local cache state," so a build that hits a cached copy after upstream takedown still gets the new policy answer. Gate three is "preserve provenance attestations through the mirror and require valid attestations for tier-one consumers," which gives internal consumers the same signal external direct-pull users get. The combination collapses the mirror pass-through window for typical Shai-Hulud-class waves from hours to minutes.
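Expressed in the same illustrative policy syntax as the Nexus Firewall sketch earlier in this post, the three gates might look like the following; the knob names are invented for exposition, not a real vendor schema:

```
# Illustrative only; knob names are assumptions, not vendor syntax.
feed "ossf-malicious-packages" {
  max-staleness = "15m"                          # gate one: bounded feed lag
}

policy "block-recently-quarantined" {
  format     = "npm"
  match      = "upstream-quarantined-within:30d" # gate two: lookback window
  applies-to = "cached-and-fresh"                # cached copies re-evaluated
  action     = "block"
}

policy "require-valid-provenance-tier1" {
  format     = "npm"
  applies-to = "package in tier1-allowlist.txt"
  require    = "sigstore-attestation:valid"      # gate three
  action     = "block-if-missing-or-invalid"
}
```

The important design choice is in gate two's applies-to: policy must be re-evaluated on cache hits, not only on first fetch, or the cached copy silently bypasses the new upstream decision.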
What did the 2025 incidents teach about audit and forensics?
The post-incident retrospectives published by AWS, OpenAI, Elastic, and pnpm all share a common refrain: "we needed to know who pulled which version when." Without complete mirror logs, identifying affected builds after an incident is harder than it should be. Both Sonatype and JFrog responded by hardening their access logging and SBOM-trace features in late 2025, making it possible to query "every build that resolved package X version Y between time T1 and T2" with reasonable performance. Harness Artifact Registry shipped a similar query interface as part of its 2025 supply-chain governance push. The takeaway for operators is that retention of resolution logs is a security control, not just an operational one, and SBOM storage policies should be set to cover at least your full build-rollback window.
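That "who pulled which version when" query works even against plain request logs when the vendor query interface is unavailable. A sketch, assuming a simplified log format; real Nexus and Artifactory request logs carry more fields but the same information:

```shell
# Fabricated mirror request log: client IP, timestamp, request, status.
cat > request.log <<'EOF'
10.0.4.17 [2025-11-24T09:58:02Z] "GET /repository/npm-proxy/left-pad/-/left-pad-9.9.9.tgz" 200
10.0.4.22 [2025-11-24T10:14:45Z] "GET /repository/npm-proxy/lodash/-/lodash-4.17.21.tgz" 200
10.0.4.17 [2025-11-25T08:01:10Z] "GET /repository/npm-proxy/left-pad/-/left-pad-9.9.9.tgz" 200
EOF

# "Every build that resolved package X version Y between T1 and T2":
# ISO-8601 timestamps compare correctly as strings.
awk '/left-pad-9\.9\.9\.tgz/ && $2 >= "[2025-11-24T00:00:00Z]" && $2 <= "[2025-11-24T23:59:59Z]"' request.log
```

Mapping the client IP back to a pipeline run still requires correlating with CI records, which is exactly why the retrospectives argue for retention on both sides.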
What still has to mature?
Two gaps remain visible to defenders. The first is air-gapped or offline deployments, where the mirror cannot subscribe to a real-time malicious-package feed and must rely on periodic manual updates. For organizations operating in those environments, the response window is measurably longer and the defender posture has to compensate through tighter version pinning and longer soak windows. The second is non-mainstream package ecosystems. NuGet, Maven Central, and Composer mirror support has caught up; less-supported ecosystems like Hex, Pub, or Hackage often have weaker mirror tooling, and defenders for those stacks have to combine direct-registry monitoring with manual policy enforcement.
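For the air-gapped case, the periodic manual update can still be partly automated: transfer a snapshot of the OpenSSF malicious-packages repository across the boundary and flatten it into a blocklist the mirror can load. A sketch with a fabricated snapshot standing in for the real repository; the osv/malicious/&lt;ecosystem&gt;/&lt;package&gt; layout reflects that repo's convention, and the flattening logic is an assumption, not a vendor feature:

```shell
# Fabricated snapshot of the OSV-format records the OpenSSF
# malicious-packages repo publishes, as transferred into the enclave.
mkdir -p snapshot/osv/malicious/npm/evil-pkg
cat > snapshot/osv/malicious/npm/evil-pkg/MAL-0001.json <<'EOF'
{"id":"MAL-0001","affected":[{"package":{"ecosystem":"npm","name":"evil-pkg"}}]}
EOF

# Flatten the snapshot into a one-name-per-line blocklist for the mirror.
grep -rho '"name":"[^"]*"' snapshot/osv/malicious/npm \
  | sed 's/.*:"//; s/"$//' | sort -u > npm-blocklist.txt
cat npm-blocklist.txt
```

The snapshot cadence then becomes the air-gapped response window, which is why tighter pinning and longer soak windows have to carry more of the load in those environments.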
How Safeguard Helps
Safeguard integrates with both Sonatype Nexus Repository and JFrog Artifactory as a policy and findings layer, ingesting resolution logs, SBOMs, and provenance metadata from the mirror and cross-referencing them against the OpenSSF malicious-packages feed, registry quarantine signals, and per-org policy rules. When an upstream quarantine fires, Safeguard surfaces the affected products inside your tenant and lists every build that resolved the tainted version from the mirror, with the exact build timestamp and pipeline run for retrospective analysis. Policy gates can be configured to enforce mirror feed freshness, block resolution of any version quarantined upstream within a configurable lookback window, and require pass-through provenance for tier-one packages. The provenance verification engine reads attestations forwarded through the mirror the same way it reads direct-registry attestations, giving downstream consumers a single consistent signal regardless of which path their package took into the build, and closing the mirror pass-through gap that Shai-Hulud exposed.