Rust Tokio Dependency Security Review

Tokio is the async runtime underneath most production Rust. A supply chain review of Tokio and the crates that orbit it — dependencies, CVE history, and what changes across versions.

Nayan Dey
Senior Security Engineer
6 min read

If you are running production Rust in 2024, you are almost certainly running Tokio. The async runtime underpins most Rust web frameworks, nearly all Rust database clients, and a sizable chunk of the gRPC and message-queue ecosystem. That position — sitting beneath the application, pulled in transitively by almost every other crate — makes Tokio one of the highest-leverage pieces of code in the Rust world from a supply chain perspective. A compromise at this layer would echo into thousands of downstream crates. A subtle bug at this layer affects a significant fraction of production Rust deployments. This post is a focused supply chain review of Tokio: its dependency surface, its CVE history, how versions differ, and what a team upgrading should actually audit.

What does Tokio's dependency graph look like?

Minimal, on purpose. Tokio's core (tokio crate) has a small number of direct dependencies — pin-project-lite, bytes, mio, parking_lot (optional), libc (on Unix), socket2. The total tree with features = ["full"] expands to around 30-40 crates including transitives, most of which are maintained by the Tokio team or by a handful of prolific Rust infrastructure maintainers (Carl Lerche, Alice Ryhl, David Tolnay, Sean McArthur). The concentration of maintainership is high relative to the ecosystem reach.

This matters for two reasons. First, it means Tokio's supply chain posture is mostly determined by a small set of people and their operational security — which is good in the sense that those people are trustworthy and rigorous, and a risk in the sense that compromise of their development infrastructure has outsized impact. Second, it means auditing the tree is genuinely feasible if you are willing to invest the time.

What CVEs has Tokio had?

The formal CVE history is shorter than you might expect given the surface area. Notable:

  • CVE-2023-22466 (RUSTSEC-2023-0001): Unsoundness in tokio::io::ReadHalf::unsplit that could lead to memory unsafety. Fixed in 1.23.1. Minor impact in practice because few code paths actually exercised the affected combination.
  • CVE-2021-45710 (RUSTSEC-2021-0124): Data race in tokio::sync::oneshot when Sender::send raced with a closed Receiver being awaited or dropped. Fixed in 1.8.4 and 1.13.1.
  • RUSTSEC-2020-0081: Named pipe issue on Windows. Fixed in 0.2.22 and 0.3.0.

Tokio's advisory cadence is roughly one notable issue every 18-24 months, with rapid patch availability and clear communication. This is better than industry baseline for software of this reach.

Where does the threat surface actually live?

Four areas to mentally map:

The reactor / event loop. Tokio's core mio-based event loop is where OS-level I/O readiness gets translated into Rust futures. Bugs here can manifest as missed events, stuck tasks, or (worst case) data races in state that should be thread-local.

The scheduler. The multi-threaded work-stealing scheduler moves futures between OS threads. The soundness of this requires careful handling of Send bounds. Historical bugs have clustered here.
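The work-stealing scheduler can resume a future on a different OS thread from the one that last polled it, which is why tokio::spawn demands Send futures. A std-only sketch of the same constraint using plain threads (the function name is illustrative, not a Tokio API):

```rust
use std::rc::Rc;
use std::sync::Arc;
use std::thread;

// Arc<T> uses an atomic refcount and is Send, so the closure
// may legally cross a thread boundary.
fn sum_on_another_thread(values: Arc<Vec<i32>>) -> i32 {
    thread::spawn(move || values.iter().sum()).join().unwrap()
}

fn main() {
    let shared = Arc::new(vec![1, 2, 3]);
    assert_eq!(sum_on_another_thread(shared), 6);

    // Rc<T> uses a non-atomic refcount and is !Send; the equivalent
    // code with Rc fails to compile, which is exactly the guarantee
    // a work-stealing scheduler leans on:
    //
    // thread::spawn(move || Rc::clone(&_not_send)); // error: Rc cannot be
    //                                               // sent between threads
    let _not_send = Rc::new(());
}
```

The compiler, not the runtime, is what keeps a stolen task from smuggling thread-unsafe state across cores; scheduler bugs historically live in the unsafe code that upholds this contract internally.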

The sync primitives. tokio::sync::{Mutex, RwLock, Notify, Semaphore} have had edge cases over the years, usually around cancellation safety — what happens when a future waiting on a lock gets dropped at an awkward moment.
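The hazard is easy to state but subtle in code. A self-contained, std-only sketch (the hand-rolled waker and PendingOnce future are stand-ins, not Tokio types) showing that when a future is dropped between polls, everything after its pending .await silently never runs, which is what happens to the losing branch of a tokio::select!:

```rust
use std::future::Future;
use std::pin::{pin, Pin};
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Pending on the first poll, Ready afterwards: a stand-in for
// "waiting on a tokio::sync lock".
struct PendingOnce(bool);

impl Future for PendingOnce {
    type Output = ();
    fn poll(mut self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<()> {
        if self.0 { Poll::Ready(()) } else { self.0 = true; Poll::Pending }
    }
}

// Minimal no-op waker so we can drive a future by hand.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker { RawWaker::new(std::ptr::null(), &VTABLE) }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

// Poll a future once, then drop it: the code after its .await never runs.
fn cancelled_mid_await() -> bool {
    let ran = Arc::new(AtomicBool::new(false));
    let flag = ran.clone();
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    {
        let mut fut = pin!(async move {
            PendingOnce(false).await;           // cancellation point
            flag.store(true, Ordering::SeqCst); // "after the lock": never reached
        });
        assert!(fut.as_mut().poll(&mut cx).is_pending());
        // fut is dropped here, cancelled mid-await
    }
    ran.load(Ordering::SeqCst)
}

fn main() {
    assert!(!cancelled_mid_await());
}
```

A primitive is cancellation safe only if being dropped at that point leaves no state half-updated (no leaked permit, no lost notification); the historical edge cases were precisely futures that did not clean up on drop.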

The I/O helpers. tokio::io::{AsyncRead, AsyncWrite} combinators and the specific implementations for sockets, files, and pipes. Most user-visible bugs show up here.

Knowing where the bugs tend to live helps prioritise version-to-version scrutiny.


What changes between versions matter?

Tokio is on 1.x and follows semver. Minor version bumps can still include material performance or behaviour changes that affect production. Recent history worth knowing:

  • 1.35 → 1.36: reworked task locals, small behaviour differences around task-local access.
  • 1.37 → 1.38: improved runtime metrics, some internal scheduler changes.
  • 1.39 → 1.40: experimental taskdump feature refinements.
  • 1.42: significant improvements to cooperative scheduling, important if you have long-running tasks that can starve others.

Upgrading across several minor versions in one go is technically supported but comes with review overhead. The cleaner pattern is to upgrade one minor version at a time, with a performance regression check at each step.

Feature flags have security implications

Tokio's feature flags are more than ergonomic knobs. Enabling net pulls in socket operations; enabling process pulls in child process handling; enabling signal adds Unix signal handling with its own subtleties. A service that only needs rt-multi-thread and macros should not enable full, because full drags in surface area you do not use and do not want to audit.
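In Cargo.toml terms, the fix is to name the features explicitly rather than enable full (the version number and comments below are illustrative; substitute what your service actually uses):

```toml
[dependencies]
# Before: everything, including net, process, signal, fs, ...
# tokio = { version = "1.42", features = ["full"] }

# After: only what this service actually needs to compile and run.
tokio = { version = "1.42", default-features = false, features = [
    "rt-multi-thread", # the work-stealing runtime
    "macros",          # #[tokio::main] and #[tokio::test]
] }
```

Each feature you drop is OS-facing surface area (sockets, child processes, signal handlers) that no longer needs to appear in your audit.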

Run cargo tree -e features | grep tokio against your lockfile. If you see features enabled that your code does not need, tighten the feature list in Cargo.toml.

The ecosystem matters as much as the crate

Tokio's supply chain posture is not just Tokio. It is Tokio plus the crates immediately downstream that everyone uses — hyper, reqwest, tonic, sqlx, axum, warp, tower. A vulnerability in any of these can cascade into your service with the same practical effect as a Tokio vulnerability, because you probably have at least three of them in every non-trivial Rust application.

The same audit posture — pin versions, review diffs on minor upgrades, treat unsafe and build scripts with elevated scrutiny — applies to the immediate Tokio ecosystem as much as to Tokio itself.

What should a team actually do when Tokio releases a new version?

Five-step checklist:

  1. Read the CHANGELOG top to bottom. Tokio's release notes are thorough. The relevant section for your service is usually one or two lines but those lines matter.
  2. Diff the dependency tree. cargo tree before and after. Any new transitive dependency is a new audit item.
  3. Run the service's full test suite with RUST_LOG=tokio=debug in staging. Many runtime-level regressions surface in logging volume or timing before they surface in tests.
  4. Run a performance regression check. Tokio's scheduler is sensitive to specific workload patterns; a regression of 2-5% in request latency is detectable and sometimes meaningful.
  5. Deploy behind a flag if possible. Not always feasible, but when it is, canary the new Tokio version against a subset of traffic before full rollout.

The work is modest. The upside is that you keep a load-bearing piece of infrastructure current without running the risk of an untested major upgrade later.

How Safeguard Helps

Safeguard tracks Tokio and its high-impact downstream crates (hyper, reqwest, tonic, sqlx, axum) as named high-criticality components in the reachability graph. Policy gates can require an explicit review before any Tokio minor version bump reaches production, and Griffin AI produces a diff summary of scheduler, runtime, and I/O changes across any two Tokio versions so the upgrade review is a concrete document rather than a prose skim of the CHANGELOG. For Rust-heavy services whose request path runs through Tokio, Safeguard makes the single most important crate in your supply chain a first-class, tracked component rather than a footnote in your Cargo.lock.
