Rust no_std Supply Chain Considerations

Writing Rust for embedded or kernel targets drops you into no_std territory, and the supply chain rules are different there. A practical look at what changes and why.

Shadab Khan
Security Engineer
7 min read

Most Rust security writing assumes std. Most Rust running in the places security teams care about most — bootloaders, firmware, hypervisors, embedded controllers, parts of operating system kernels — does not. The no_std environment is where Rust meets the hardware, and the supply chain questions are different there in specific, practical ways. Smaller dependency graphs. Different audit surface. A handful of crates whose compromise would echo across every no_std project on earth. If you are shipping Rust into an embedded target or a kernel module, the supply chain calculus is not a smaller version of the std one — it has its own shape.

What does no_std actually strip away?

The standard library. When you write #![no_std] at the top of a crate, you lose std::fs, std::net, std::thread, the default allocator, and the panic infrastructure, among other things. What remains is core (always available, no allocation, no operating system assumptions) and optionally alloc (if you wire in an allocator). This matters for supply chain because it shrinks the language-level surface area that any dependency can rely on. A no_std dependency tree is almost always smaller than its std equivalent, and that smaller tree is easier to audit in principle.
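A minimal sketch of what that split looks like in source. This is an illustrative fragment, not a complete firmware crate — it assumes a bare-metal target where something else in the binary wires in a `#[global_allocator]` before the `alloc` parts are usable:

```rust
#![no_std] // opt out of std; only `core` is implicitly available

// `core` gives you the language-level essentials: no allocation,
// no OS assumptions, always available.
use core::sync::atomic::{AtomicU32, Ordering};

// `alloc` is opt-in, and only works once an allocator crate
// (linked_list_allocator, etc.) is registered elsewhere in the binary.
extern crate alloc;
use alloc::vec::Vec;

static TICKS: AtomicU32 = AtomicU32::new(0);

/// Pure-`core` code: no allocator, no OS, works anywhere.
pub fn tick() -> u32 {
    TICKS.fetch_add(1, Ordering::Relaxed)
}

/// `alloc`-dependent code: compiles here, but links and runs only
/// if some dependency in the tree provides the global allocator.
pub fn collect(samples: &[u32]) -> Vec<u32> {
    samples.iter().copied().collect()
}
```

The supply chain point is visible in the second function: the moment you use `alloc`, some crate in your tree is the allocator, and that crate is in the critical path of everything.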

In practice, the smaller tree is not automatically safer — it just shifts where the interesting risks live.

Where do the interesting risks live in no_std?

Three places. Hardware abstraction crates like embedded-hal sit at the bottom of most embedded Rust trees, and their correctness is load-bearing for every project downstream. Allocator crates (linked_list_allocator, alloc-cortex-m, custom allocators) are tiny but in the critical path — a bug or compromise here breaks everything. Architecture support crates for specific MCUs or SoCs — cortex-m, riscv, x86_64 — expose inline assembly, peripheral register mappings, and low-level operations that are effectively impossible to safely reimplement if the upstream becomes malicious.

These crates tend to have small maintainer counts and modest download figures, which means the classic supply chain heuristics (download volume, active contributor count) read misleadingly low here — the numbers suggest niche crates, while the blast radius of a compromise is ecosystem-wide.

The no_std trust web is narrower than it looks

Run cargo tree on a typical embedded project targeting an STM32 microcontroller. You'll see ten or twelve crates — cortex-m, cortex-m-rt, stm32f4xx-hal, embedded-hal, embedded-hal-async, a few register access crates, some bit manipulation helpers. That tree looks healthy until you realise four of those crates are maintained by the same small group of people, and that group's compromise would give an attacker control of essentially every STM32F4 project written in Rust.

This is the flip side of the no_std ecosystem's coherence. It is small, well-coordinated, and technically excellent. The implication for threat modelling is that "how many people would need to be compromised for an attacker to reach our code?" is a surprisingly small number, even at Rust's strong baseline.

What does a meaningful audit look like?

For a no_std project, the audit is tractable in a way that modern std audits are often not. A focused review of the concrete set of crates in the tree — say 20–40 in total — is doable in a few engineer-weeks. The audit should cover:

  • All unsafe blocks in the dependency tree, traced to their soundness justifications. This is where memory safety actually matters in Rust.
  • All inline assembly, particularly in architecture support crates. asm! blocks that touch privileged instructions need review.
  • Build scripts (build.rs) in every crate in the tree. These run at compile time with full host privileges.
  • Proc macros, which also run at compile time. A proc macro can read your filesystem, open network connections, or exfiltrate source code during the build.
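To make the build-script point concrete, here is a hypothetical sketch — ordinary host Rust, not taken from any real crate — showing that code in a `build.rs` runs with the same privileges as any other process on the build machine:

```rust
use std::env;
use std::fs;

/// A build script sees the full host environment: credentials, CI
/// tokens, proxy settings -- anything exported into the build shell.
pub fn can_read_env() -> bool {
    env::vars().next().is_some()
}

/// It can also walk the filesystem it is compiled in, which is how a
/// malicious build.rs could scan (or exfiltrate) your source tree.
pub fn can_list_cwd() -> usize {
    fs::read_dir(".").map(|d| d.count()).unwrap_or(0)
}

fn main() {
    println!("env readable: {}", can_read_env());
    println!("cwd entries visible: {}", can_list_cwd());
}
```

Nothing in this sketch is exotic — that is the point. Build scripts and proc macros need no special capability to do any of this, which is why semver framing is irrelevant to reviewing them.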

The cargo-vet tool fits this workflow well. Audit once, share across the organisation, and only re-audit deltas when dependencies change.
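The day-to-day shape of that workflow looks roughly like this (a sketch assuming cargo-vet is installed; crate names and versions are illustrative):

```shell
# One-time setup in the repository.
cargo vet init

# Check the current tree against recorded audits; unaudited crates fail.
cargo vet

# Record an audit for a specific crate version after reviewing it.
cargo vet certify embedded-hal 1.0.0

# On a dependency bump, review only the delta between versions.
cargo vet diff cortex-m 0.7.6 0.7.7
```

The `diff` step is what makes re-audits cheap: after the initial exhaustive pass, most releases only require reading a small changeset.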

Panic behaviour is a supply chain issue too

In std Rust, panics have a well-defined unwinding path. In no_std, the panic handler is something you or a dependency provides. panic-halt, panic-semihosting, panic-abort, custom handlers — each of these is a supply chain component that gets a front-row seat to every panic in your application. A malicious panic handler could exfiltrate state, trigger specific hardware behaviour, or simply mislead operators by lying about what happened.
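What that dependency actually looks like is small, which is exactly why it is easy to wave through review. A hypothetical handler, of the same shape as the one panic-halt provides:

```rust
#![no_std]

use core::panic::PanicInfo;

// Every no_std binary must supply exactly one of these. Whichever
// crate provides it runs on every panic, with `_info` in hand and
// access to whatever state it can reach.
#[panic_handler]
fn panic(_info: &PanicInfo) -> ! {
    // A benign handler just parks the core. A malicious one could
    // log `_info` somewhere, poke peripherals, or lie to whoever
    // is reading the semihosting output.
    loop {}
}
```

Ten lines, one attribute, total reach — which is the argument for giving it allocator-level scrutiny.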

This is not a hypothetical. Any piece of code that runs on every panic has reach. Treat the panic handler with the same scrutiny as the allocator.

Build reproducibility is unusually achievable here

no_std builds tend to be more reproducible than average Rust builds because the dependency graph is smaller and the environment is more constrained (target triples like thumbv7em-none-eabihf are specific, no host-system entanglement). If your project builds bit-identical artifacts from Monday to Tuesday with the same toolchain, you have a cleaner baseline for supply chain verification than most projects. Use that. Record the artifact hash. Compare across builds. Divergence is a signal worth investigating.
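A minimal version of that verification loop, assuming a Cortex-M4F target and a pinned toolchain (the binary name `firmware` is illustrative):

```shell
# Build twice from a clean state with the same pinned toolchain.
cargo clean && cargo build --release --target thumbv7em-none-eabihf
sha256sum target/thumbv7em-none-eabihf/release/firmware > build-a.sha256

cargo clean && cargo build --release --target thumbv7em-none-eabihf
sha256sum target/thumbv7em-none-eabihf/release/firmware > build-b.sha256

# Any divergence is a signal worth investigating.
diff build-a.sha256 build-b.sha256
```

Once this is stable locally, the same comparison across machines (developer laptop versus CI) is the next, stronger check.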

Cargo features are where the subtle stuff hides

A no_std crate often supports both std and no_std modes via a std feature flag that the consumer disables. Check every transitive dependency's feature flags in your lock file. It is entirely possible to have a no_std project that accidentally pulls in std through a transitive dependency that defaulted the std feature on. When that happens, the "no_std" label becomes a lie that you carry through builds and reviews as if it were true.
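The usual failure mode is a dependency declared with default features left on. A Cargo.toml sketch of the hygiene that prevents it (the commented-out crate is hypothetical; the others are real and widely used):

```toml
[dependencies]
# Correct: opt out of default features, opt back in to what you need.
embedded-hal = { version = "1.0", default-features = false }
serde = { version = "1.0", default-features = false, features = ["derive"] }

# Risky: if this crate's default features include `std`, Cargo's
# feature unification can silently drag std into the whole build.
# some-util = "0.3"
```

Because features are unified across the graph, one careless declaration anywhere — including in a transitive dependency — is enough to flip `std` on for every crate that offers it.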

cargo tree -e features (to see which features each dependency edge activates) and cargo tree -i <crate> (to see what pulls a given crate in) are your friends here.

Real CVEs to know

A few reference points. CVE-2024-24575 (CVSS 5.3), a denial-of-service flaw in libgit2 reachable through build-time git tooling such as the shadow-rs build script, illustrated how code that runs during compilation carries its own dependency risk. RUSTSEC-2023-0052 (webpki CPU-time DoS) affected some no_std-friendly crates in the embedded TLS world. The pattern across advisories: no_std crates get CVEs too, and the distribution is weighted toward low-level concerns (timing, unsafe soundness, build-script behaviour) rather than the application-layer issues that dominate std Rust advisories.

What should a team shipping no_std Rust actually do?

Four practical moves:

  1. Generate an SBOM at every release. cargo-cyclonedx works in no_std projects. Retain the SBOM alongside the firmware image.
  2. Pin the toolchain. A rust-toolchain.toml committed to the repo prevents nightly drift from changing your output silently.
  3. Audit the small tree exhaustively using cargo-vet or equivalent. Re-audit on every dependency change.
  4. Treat build scripts and proc macros as privileged code. Any update to a crate that exposes either gets a manual review, regardless of semver minor/patch framing.
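Move 2 is a one-file change. A rust-toolchain.toml sketch, assuming a Cortex-M4F target — pin to whichever version you have actually validated, not the one shown here:

```toml
[toolchain]
channel = "1.77.0"                      # exact version, never "stable"
components = ["rust-src", "clippy"]
targets = ["thumbv7em-none-eabihf"]
```

With this committed, rustup resolves the same compiler for every developer and every CI run, and a toolchain bump becomes a reviewable diff instead of silent drift.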

The no_std ecosystem rewards this kind of discipline. The tree is small enough that the work fits in reasonable engineering budgets, and the payoff is a genuinely defensible supply chain posture for code that often runs in the most security-sensitive places a company ships.

How Safeguard Helps

Safeguard's Rust support covers no_std projects first-class — SBOM generation from Cargo.lock, reachability analysis that includes unsafe and proc-macro call paths, and policy gates that can require cargo-vet attestations for every crate in the tree. Griffin AI summarises the audit surface per release and flags changes in low-level crates (cortex-m, embedded-hal, allocator crates) with elevated scrutiny, because those are the ones where a semver-minor bump can still represent a material security event. For teams shipping firmware and embedded Rust into regulated markets, Safeguard ties the no_std build output back to the enterprise SSCS control plane without pretending that the std toolchain is what you are running.
