Container Security

Container Breakout Vulnerabilities, 2024–2025

A look at the container breakout vulnerabilities disclosed in 2024 and 2025, what they actually required to exploit, and what that pattern tells us about the defense model.

Shadab Khan
Security Engineer

Container breakouts are the headline class of container vulnerability — the CVEs that let a process inside a container escape to the host. They are also, in terms of production impact, the most overstated class of vulnerability. Most of the disclosed breakouts of the last two years required conditions that are not present in a reasonably configured cluster, and the ones that worked in practice were almost always the ones that took advantage of misconfiguration rather than a genuinely novel escape primitive.

This is a look at what actually happened in 2024 and 2025 and what the pattern tells us about where to spend defensive effort.

What were the major container breakout CVEs of this period?

The notable ones were CVE-2024-21626 (runc "Leaky Vessels"), CVE-2024-23651 through CVE-2024-23653 (BuildKit), the Docker Engine AuthZ bypass CVE-2024-41110, and a handful of kernel-level escapes via cgroups v2 and the Landlock LSM. A few Kubernetes-specific CVEs in kubelet rounded out the list.

CVE-2024-21626 was the standout. An internal file descriptor leak in runc allowed a container process to inherit a descriptor pointing at the host filesystem, which enabled escape by setting the process working directory to /proc/self/fd/<n>. This affected essentially every runtime built on runc, which is to say every mainstream runtime. The fix was fast and the patching rollout was broad, but the vulnerability demonstrated that runc's attack surface is still very much alive a decade after its introduction.

The BuildKit CVEs (23651–23653) were different in shape — they allowed a malicious Dockerfile to escape during build rather than during runtime. This matters for organizations running untrusted build workloads, such as public CI services or hosted build platforms. For organizations that control their Dockerfiles end-to-end, the risk surface was narrower.

The Docker Engine AuthZ plugin bypass (CVE-2024-41110) was a regression of an older issue and mostly affected organizations running authorization plugins in front of the Docker daemon — a smaller population than the runc CVE's, but one with high security requirements.

Kernel-level escapes via cgroups v2 and landlock were less flashy and more interesting to a defender. The cgroups v2 work in particular demonstrated that the kernel itself continues to be a realistic attack surface for container escape, not just the runtime.

What did exploitation actually require?

Most of the period's breakouts required the attacker to already have code execution inside a container, often with specific capabilities that most production deployments should not grant. The "RCE in a container" to "root on the host" jump is still harder than the CVE listings suggest.

CVE-2024-21626 required the attacker to control a Dockerfile's WORKDIR or to control the exec being run. Both of those are possible in certain scenarios — shared multi-tenant build systems, container-as-a-service platforms, or CI environments where attackers can submit Dockerfiles. In a typical production deployment where the Dockerfile is your own code, the exposure was lower.

The BuildKit CVEs had similar requirements — the attacker had to control build inputs, which again only matters for multi-tenant builds.

The kernel-level escapes required specific capability grants. A container running with CAP_SYS_ADMIN is much closer to a root shell on the host than a container running without it. The fact that many organizations run containers with CAP_SYS_ADMIN to make some internal tool work is a configuration problem, not a kernel bug. The breakout CVE is the proximate cause of the incident; the CAP_SYS_ADMIN grant is the root cause.
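To make the root cause concrete: the grant that turns a kernel bug into a host compromise usually looks like a single line in a pod spec. This is a hypothetical illustration (pod, container, and image names are invented), not taken from any specific incident:

```yaml
# Anti-pattern: a blanket CAP_SYS_ADMIN grant added to make one internal tool work.
# All names here are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: internal-tool
spec:
  containers:
    - name: app
      image: internal-tool:latest
      securityContext:
        capabilities:
          add: ["SYS_ADMIN"]  # exposes a large slice of the kernel attack surface
```

Auditing cluster manifests for `capabilities.add` entries like this one finds the root cause before the next breakout CVE finds it for you.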

What does this pattern tell us about defense priority?

Configuration beats patching as a defense strategy for container breakouts. This sounds defeatist and is not meant to be. Patching remains necessary. But the marginal return on aggressive breakout patching is lower than the return on reducing the attack surface through configuration, and most organizations are under-invested in the configuration side.

The concrete things that pay off: drop all capabilities by default and add back only what each container provably needs. Run containers as non-root users. Use readOnlyRootFilesystem: true. Mount /proc with the masks that Kubernetes applies by default — do not override them. Use seccomp profiles, starting with the default profile and moving toward runtime-generated profiles for individual workloads. Use AppArmor or SELinux where available.
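The checklist above translates into a pod spec roughly like the following. This is a baseline sketch using standard Kubernetes securityContext fields; the workload names and UID are placeholders, and each field should be adjusted to what the workload actually needs:

```yaml
# A hardened baseline reflecting the checklist above (names and UID illustrative).
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 10001
    seccompProfile:
      type: RuntimeDefault   # start with the default; move to per-workload profiles later
  containers:
    - name: app
      image: app:1.0
      securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop: ["ALL"]      # add back individual capabilities only when provably needed
```

A pod shaped like this is the "much smaller target" described below: no capabilities, no root, no writable root filesystem, and a syscall filter in front of the kernel.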

Each of these controls reduces the exploitable surface for a future breakout CVE you have not heard about yet. A container running as a non-root user with a restrictive seccomp profile and no capabilities is a much smaller target for the next runc CVE than a container running with --privileged. This is the whole argument for defense in depth and it is well-worn territory, but the data from 2024 and 2025 keeps proving it.

What is the right patching strategy for breakout CVEs?

Patch the runtime within 72 hours for publicly exploited breakouts, within a week for others, and do not treat breakout patching as a bigger priority than patching the userspace libraries that actually get attacked in production.

This will be an unpopular ordering. The reason I push it is that the data on container-breakout exploitation in the wild is sparse. Public exploit code exists for most of these CVEs, but public reports of successful breakout exploitation in production environments are rare. Public reports of compromises via vulnerable userspace libraries — OpenSSL, Log4j, curl, older versions of Apache, Spring — are common.

If you have limited patching bandwidth (and you do), spend it proportional to the risk. Critical runtime CVEs are priority-one events. Most runtime CVEs are priority-three events. Userspace vulnerability patching in your application dependencies is priority-one forever.

How do hardened runtimes like gVisor and Kata change the picture?

They change it meaningfully for the class of attacks they protect against, and not at all for the class of attacks they do not. Know which is which.

gVisor intercepts syscalls in a user-space kernel (Sentry) rather than passing them directly to the host kernel. The effect is that many host-kernel vulnerabilities are not reachable from a gVisor-wrapped container. This is real protection against the cgroups v2 and Landlock-style escapes. It does not protect you against vulnerabilities in gVisor itself, which have their own CVE history, and it does not protect you against application-layer compromises.

Kata Containers uses a lightweight VM for isolation, which puts a hypervisor boundary between the container and the host. This is stronger isolation than namespaces and cgroups alone, and it protects against most container-runtime escape primitives. It costs more CPU and memory per container and has a harder operational story — not every workload runs on Kata cleanly.

My recommendation in 2026 is that gVisor is appropriate for multi-tenant workloads where you would not otherwise be willing to run foreign code, and Kata is appropriate for workloads that genuinely require VM-level isolation. For your own first-party application workloads, hardened runc with good capability and seccomp hygiene is fine, and the switch to gVisor or Kata buys less than people assume.

How Safeguard.sh Helps

Safeguard.sh correlates container breakout CVEs with reachability analysis and actual runtime configuration. A runc CVE in an environment where no containers run with the problematic capabilities shows up as low priority; the same CVE in an environment with misconfigured containers shows up as the critical issue it actually is. This is where the 60 to 80 percent alert-noise reduction comes from. Griffin AI scores breakout CVEs against your actual container security posture using SBOMs with 100-level dependency depth, and recommends seccomp and capability profiles tailored to each workload. TPRM tracks the runtime and kernel versions used by your third-party image suppliers. Container self-healing patches runtime and base image layers automatically when breakout CVEs are disclosed and fixed upstream, turning a 72-hour manual scramble into a 30-minute automated rollout with human approval gates where they matter.
