Open Source Security

Rust crates.io Supply Chain Controls in 2026

crates.io has gained real supply chain features over the past two years. Here is an honest read on what works, what is still immature, and where to invest.

Shadab Khan
Security Engineer
6 min read

Rust's package registry has spent the last couple of years catching up to the rest of the ecosystem on supply chain features. The catch-up is genuine and mostly well-executed; crates.io no longer looks like the frontier town it was in 2022. But the gaps that remain are exactly the ones that matter for large production users, and if you are evaluating Rust for a regulated workload in 2026, you want an accurate picture of where you actually stand.

What supply chain features does crates.io actually have in 2026?

The short list: OIDC-based trusted publishing for GitHub Actions, verified-email-backed account recovery, scoped API tokens with per-crate permissions, yanking with clearer user-facing indicators, and an improving but still partial story for signing and provenance. The Rust Foundation and crates.io team have shipped most of what they committed to in the 2024 and 2025 roadmaps, and the big remaining deliverable is first-class attestation support at the registry layer.

What is not there yet: namespace reservations analogous to npm scopes, organization accounts with multi-maintainer governance baked in, and a general-purpose audit log that users can query. Some of these are in flight. Namespacing in particular is under active discussion and has been for long enough that the answer is probably "not this year" again.

How does cargo's trusted publishing compare to npm and PyPI?

The design is familiar: a crate owner registers a GitHub Actions workflow as a trusted publisher, the workflow uses OIDC to exchange an identity token for a short-lived publish token, and the long-lived API token gets retired. The implementation is a year or two behind PyPI in polish. You get fewer configuration options, the environment-binding story is less mature, and the documentation assumes more knowledge of OIDC internals than most Rust users have.

The consequence is that most Rust projects in 2026 still publish with long-lived tokens stored in CI. The feature exists, works, and is the right direction, but adoption is maybe a third of what PyPI sees. For your own projects, you should absolutely be migrating; for the ecosystem at large, do not assume a random crate you depend on has done so.
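For reference, a hedged sketch of what the publishing side looks like once you migrate. The action name and its output are illustrative of the announced design, not a guaranteed API; check the crates.io trusted publishing documentation for the current incantation.

```yaml
# Hypothetical release workflow using crates.io trusted publishing.
name: release
on:
  push:
    tags: ["v*"]

permissions:
  id-token: write   # lets the job request an OIDC identity token
  contents: read

jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Exchanges the OIDC identity token for a short-lived publish token.
      - uses: rust-lang/crates-io-auth-action@v1
        id: auth
      - run: cargo publish
        env:
          CARGO_REGISTRY_TOKEN: ${{ steps.auth.outputs.token }}
```

The crate owner still has to register this repository and workflow as a trusted publisher in the crate's crates.io settings; without that registration, the token exchange is refused.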

Are namespace confusion and typosquatting actual problems for Rust?

They are, just less visibly than in npm. crates.io does not have a native scope or namespace primitive, so every crate name competes in a single global namespace. Typosquatting attempts happen regularly; most get caught by the registry team, but the detection is human-driven and has a measurable latency. The largest real-world incidents I have helped investigate in the Rust ecosystem were dependency confusion rather than classic typosquatting: an attacker registered the public name of an internal crate that had never been published, then waited for a misconfigured CI to resolve it from crates.io instead of the internal registry.

The mitigation is boring and effective: publish zero-content placeholder crates under the names you care about, so nobody else can take them. It costs nothing and it closes the attack. Large Rust shops have been doing this for their internal crate prefixes for years. If you maintain a Rust project with any internal dependencies, do this today.
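A placeholder that reserves a name is a few lines of manifest plus an empty lib.rs. The crate name below is a hypothetical internal prefix; crates.io requires a description and a license before it will accept a publish:

```toml
# Cargo.toml for a zero-content placeholder reserving an internal name.
[package]
name = "acme-internal-logging"   # hypothetical; use your internal prefix
version = "0.0.1"
edition = "2021"
description = "Name reservation placeholder. Do not use."
license = "MIT OR Apache-2.0"
```

Publish it once with `cargo publish` and the global name is off the table for everyone else.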

What does yanking actually mean on crates.io, and when should you care?

Yanking a crate marks a specific version as unavailable for new dependency resolution but leaves it available for existing lockfiles. It is not a delete. It is more like a "do not pick this up in fresh builds" flag. The use cases are accidentally published secrets, broken releases, and known-bad versions that you want to pull out of the default resolution path without breaking downstream builds that already picked the version.

What yanking is not, and what a lot of teams get wrong, is a security response primitive. If a yanked version contains malicious code, users who already installed it are not notified by the yank. Security advisories through the RustSec database are the actual notification channel, and that is a separate, manually curated database run by volunteers. If you depend on crates.io's notification behavior for security events, you will miss things. The operational pattern is to run cargo audit regularly against the RustSec database as a separate control, not to rely on yank status.
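The pattern is small enough to show. A hedged CI sketch, assuming GitHub Actions and a network-connected runner; the flags match cargo-audit's documented `--deny` kinds, but verify them against your installed version:

```yaml
# Hypothetical CI job: fail the build on RustSec advisories and on
# yanked versions present in the lockfile.
audit:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - run: cargo install cargo-audit --locked
    - run: cargo audit --deny warnings --deny yanked
```

Running this on a schedule as well as on push matters: an advisory can land against a dependency you have not touched in months.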

How good is the RustSec advisory database in 2026?

Surprisingly good for a volunteer-run effort. Coverage of known-bad crates is reasonable, the advisory format is well-defined, and the tooling around it (cargo audit, cargo deny) is stable and widely deployed. The weaknesses are exactly what you would expect from any volunteer-curated database: advisories are sometimes slow to land, coverage of less-popular crates is thinner, and there is a visible bias toward the parts of the ecosystem the active maintainers happen to care about.

For production use, treat RustSec as the primary signal for known vulnerabilities but build a second signal for yanked versions, abandoned maintainers, and crates with suspicious ownership changes. The second signal is where commercial tools add value, and where the registry is least helpful on its own.

What about binary artifacts and build.rs scripts?

This is the part of the Rust supply chain story that worries me most in 2026, and it has barely improved. A crate can ship a build.rs that runs arbitrary code on your machine at compile time, downloads binary blobs during that build, and links them into your final binary with very little visibility. cargo does not sandbox build scripts by default. The ecosystem has been circling a solution for years, and proposals around a sandboxed build script environment keep getting discussed and deferred.

The practical controls are coarse: run builds in ephemeral, network-restricted environments; keep a short allowlist of crates with build scripts you have actually audited; and treat any new build.rs in a dependency diff as something worth a human reviewer's attention. Tools like cargo-supply-chain and cargo-vet help with the bookkeeping but do not eliminate the underlying risk. If you are shipping Rust in a regulated environment, your threat model needs to explicitly address build scripts, because the registry's defaults will not.

Should you mirror crates.io internally?

For any organization past a few dozen engineers, yes. crates.io's availability has been rock-solid in recent years, but the reasons to mirror are not about uptime. They are about being able to freeze a known-good snapshot of the registry, scan incoming crates before they hit developer laptops, and apply your own policy before resolution. Tools like panamax make this straightforward. The hard part is not running the mirror; it is the organizational discipline to make the mirror authoritative instead of letting developers routinely bypass it.
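Once the mirror exists, pointing cargo at it is a small piece of source replacement in .cargo/config.toml. The internal URL below is a placeholder:

```toml
# .cargo/config.toml: make the internal mirror authoritative.
[source.crates-io]
replace-with = "internal-mirror"

[source.internal-mirror]
# Hypothetical internal sparse index; requires a reasonably recent cargo.
registry = "sparse+https://crates.mirror.example.internal/index/"
```

Checking this file into repositories, rather than relying on per-developer machine config, is most of the organizational discipline: builds then simply do not resolve against crates.io directly.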

How Safeguard.sh Helps

Safeguard.sh monitors your Rust dependency graph for yanked versions, RustSec advisories, ownership changes on upstream crates, and suspicious build.rs additions that show up in new releases. We correlate trusted publishing status with your policy for critical internal dependencies and alert when a package you rely on is still publishing with long-lived tokens against your stated policy. The aim is to give Rust shops the same supply chain visibility that more mature ecosystems take for granted, without waiting for crates.io to ship every missing feature.
