Container Security

containerd Security Configuration Guide

containerd runs most of Kubernetes today. Its defaults are reasonable, but reasonable is not hardened. Here is how to close the gaps.

Shadab Khan
Security Engineer
6 min read

containerd used to be the quiet layer underneath Docker. Today it is the default container runtime for almost every managed Kubernetes service — EKS, GKE, AKS, DigitalOcean — and a direct dependency for anyone still building on Docker Engine. If you run containers in production, you run containerd, whether or not your team realises it.

The defaults in containerd are reasonable. They are not, however, hardened. Between the 1.7.x branch that most clusters still use in mid-2024 and the 2.0 release that landed later in the year, there is a long list of configuration choices that separate a demo-grade installation from something you would happily put in front of a regulator. This guide walks through the ones that matter.

Why containerd Defaults Are Not Enough

When you install containerd via a package manager or let kubeadm pull it in, you get a /etc/containerd/config.toml that enables the CRI plugin, the default runtime (runc), and a plain HTTP registry config. It works. It is also wide open in ways that matter for production.

The default seccomp profile is not applied unless you explicitly ask for it. AppArmor confinement relies on the host having a profile loaded — containerd will not install one for you. The CRI streaming server binds to a Unix socket that anyone in the containerd group can talk to, which on many distributions ends up being broader than intended. And image pulling happens over HTTPS but without Sigstore (cosign) signature verification unless you wire it up yourself.

None of this is a bug. containerd is a building block, and the maintainers leave policy decisions to the operator. The problem is that the block ships assembled and running before most operators realise they had decisions to make.

CVEs That Shaped the 2023-2024 Configuration Advice

A few recent issues are worth keeping in mind because they directly influence what "hardened" means today.

CVE-2024-21626, the runc leaked fd vulnerability disclosed in January 2024, affected every container runtime that shipped runc 1.1.11 or earlier. The attack exploited a file descriptor leak in runc exec that allowed a crafted image or exec command to escape the container namespace. containerd 1.7.13 and 1.6.28 shipped fixes by bundling a patched runc, but clusters that pinned older runc binaries stayed exposed for months.

CVE-2023-25153, disclosed in February 2023, was a denial-of-service in containerd's image pulling code where an OCI image could be crafted to exhaust memory during decompression. The fix shipped in 1.6.18 and 1.5.18. It is the kind of vulnerability you only notice during an incident, and the mitigation is as much about image provenance as it is about upgrading the runtime.

CVE-2022-23648, older but still relevant, allowed a crafted image whose configuration declared volume paths resolving outside the container root to expose host files to the container. It was fixed in containerd 1.6.1 (with backports to 1.5.10 and 1.4.13), but it set the pattern for the kind of image-based runtime escape that continues to inform seccomp and AppArmor guidance.

The through-line is that containerd's attack surface is dominated by what it does with images and with runc, not by the daemon itself. Your configuration should reflect that.

The Configuration Settings That Actually Matter

Start with /etc/containerd/config.toml and make the following changes explicit rather than leaving them to defaults.

Set SystemdCgroup = true under the runc options. This is not strictly a security setting, but it is a prerequisite for running cleanly on cgroup v2, which is what gives you reliable memory and PID limit enforcement — the controls that keep a fork bomb or memory-exhaustion attack contained. Kubernetes recommends the systemd cgroup driver on systemd-based hosts (the kubelet and containerd must agree on the driver), and containerd 2.0 makes it the default.
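In a version 2 config on the 1.7 line, the relevant stanza looks roughly like this (plugin IDs as documented for containerd 1.7; adjust if your config uses a different schema version):

```toml
# /etc/containerd/config.toml (schema version 2, containerd 1.7.x)
version = 2

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runc.v2"

  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    # Delegate cgroup management to systemd; required for clean cgroup v2 operation
    SystemdCgroup = true
```

After editing, restart containerd and confirm the merged configuration with `containerd config dump`.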

Enable the kernel confinement layers explicitly. In the CRI plugin config, keep disable_apparmor = false so containerd applies the host's loaded AppArmor profile, and set default_runtime_name = "runc" with an explicit runtimes.runc.options block rather than relying on implicit defaults. Seccomp is selected per workload: Kubernetes can apply its RuntimeDefault profile cluster-wide through the kubelet's seccompDefault setting, and that baseline should block the obvious syscalls — keyctl, add_key, request_key, clone with CLONE_NEWUSER for non-privileged workloads — before any pod spec gets a chance to loosen it.
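A minimal sketch of the CRI plugin flags involved — the option names are as documented for the containerd 1.7 CRI plugin:

```toml
# /etc/containerd/config.toml — CRI plugin confinement fragment
[plugins."io.containerd.grpc.v1.cri"]
  # Keep AppArmor enabled; containerd applies the host-loaded profile
  # to containers that do not specify their own.
  disable_apparmor = false

[plugins."io.containerd.grpc.v1.cri".containerd]
  # Be explicit about the default runtime rather than relying on
  # whatever the package default happens to be.
  default_runtime_name = "runc"
```

Remember that this only controls what containerd itself enforces; the seccomp profile a container actually runs under is still decided by the pod spec or the kubelet's seccompDefault setting.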

Configure registry trust and image verification. containerd's hosts.d mechanism — a config_path under the CRI registry config pointing at per-registry hosts.toml files — lets you pin CA certificates and restrict capabilities per registry. Signature verification is a separate layer: containerd does not validate cosign signatures on its own, so enforce them at admission with a controller such as sigstore's policy-controller or Kyverno. If you pull images from a public registry without signature verification, you are accepting whatever an attacker who compromises the registry feels like giving you. Turn it on.
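Assuming `config_path = "/etc/containerd/certs.d"` is set under `[plugins."io.containerd.grpc.v1.cri".registry]` in config.toml, a per-registry hosts.toml might look like this (registry.example.com and the CA path are placeholders):

```toml
# /etc/containerd/certs.d/registry.example.com/hosts.toml
server = "https://registry.example.com"

[host."https://registry.example.com"]
  # Limit what containerd may do against this endpoint
  capabilities = ["pull", "resolve"]
  # Pin the registry's CA instead of trusting the system trust store
  ca = "/etc/containerd/certs.d/registry.example.com/ca.crt"
```

One directory per registry hostname, each with its own hosts.toml, keeps the trust decisions auditable per registry rather than buried in one global block.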

Restrict the CRI socket. The default location at /run/containerd/containerd.sock is owned by root and the containerd group. On many distros the group ends up with unexpected members. Set the socket ownership explicitly, use grpc.uid and grpc.gid in the config, and audit what processes actually need access.
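The socket ownership can be pinned in the [grpc] section of config.toml rather than left to whatever the package's systemd unit sets up:

```toml
# /etc/containerd/config.toml — lock down the gRPC socket
[grpc]
  address = "/run/containerd/containerd.sock"
  # Restrict the socket to root rather than a shared containerd group
  uid = 0
  gid = 0
```

With gid = 0, anything that needs the socket (the kubelet, crictl) must run as root — which on a Kubernetes node it already does.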

Disable unprivileged port binding inside containers unless you need it. The net.ipv4.ip_unprivileged_port_start sysctl lets workloads bind ports below 1024 without CAP_NET_BIND_SERVICE. If you do not need it, leave the capability dropped and keep the sysctl at its default.
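The CRI plugin exposes toggles for the per-pod sysctls involved; the defaults have shifted across releases, so set them explicitly rather than inheriting whatever your containerd version ships (option names per the containerd CRI plugin config reference):

```toml
# /etc/containerd/config.toml — keep unprivileged port binding and ICMP off
[plugins."io.containerd.grpc.v1.cri"]
  # Do not lower net.ipv4.ip_unprivileged_port_start inside pods
  enable_unprivileged_ports = false
  # Likewise for ping_group_range (unprivileged ICMP sockets)
  enable_unprivileged_icmp = false
```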

Logging and Audit You Will Want Later

containerd's logging defaults send everything to journald at info level. For security purposes, that is not enough. You want to see every image pull, every exec, and every snapshot creation, because those are the events that show up in container breakout timelines.

Enable the CRI metrics endpoint and scrape it with Prometheus. The containerd_container_status and containerd_tasks_created_total metrics are the closest thing you get to audit logs, and they are indispensable when something goes wrong.
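The metrics endpoint is a one-stanza change; binding it to loopback keeps it reachable by a node-local Prometheus agent without exposing it on the pod network:

```toml
# /etc/containerd/config.toml — metrics on loopback only
[metrics]
  address = "127.0.0.1:1338"
  grpc_histogram = false
```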

For deeper audit, run containerd with --log-level debug in a staging environment to understand what normal looks like, then use the CRI streaming logs in production with a log shipper that forwards to a SIEM. Falco, Tracee, and Tetragon all tap into the same kernel events and give you a second independent view — I recommend running at least one of them alongside containerd in any environment that handles regulated data.
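On a staging host, the log level can be raised in config.toml instead of editing the systemd unit's command line — a sketch using the [debug] section as documented for containerd 1.7:

```toml
# /etc/containerd/config.toml — verbose logging for a staging node
[debug]
  # Debug socket for introspection tooling; keep it root-only
  address = "/run/containerd/debug.sock"
  uid = 0
  gid = 0
  level = "debug"   # drop back to "info" in production
```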

The containerd 2.0 Transition

containerd 2.0 was released in November 2024 and brings a handful of changes that matter for security. The Sandbox API is now the primary way pod sandboxes are managed, which makes it easier to swap in Kata Containers or gVisor without fighting the runtime. Support for cgroup v1 is deprecated. The legacy io.containerd.runtime.v1.linux runtime is removed entirely — if you have anything still referencing it, upgrading to 2.0 will break your workloads.
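Registering an alternative sandboxed runtime is a matter of adding a runtime handler — for example gVisor, using the shim name from gVisor's containerd documentation (assumes runsc and containerd-shim-runsc-v1 are installed on the node):

```toml
# /etc/containerd/config.toml — register gVisor as an additional runtime handler
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runsc]
  runtime_type = "io.containerd.runsc.v1"
```

Pods then opt in through a Kubernetes RuntimeClass whose handler field is runsc, leaving runc as the default for everything else.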

The practical advice for mid-2024 is to stay on the 1.7.x line until your distribution ships 2.0 with cgroup v2 as the tested default, and to use the intervening months to audit your seccomp and AppArmor profiles. The transition itself is not hard, but the profiles are where bugs hide.

How Safeguard Helps

Safeguard tracks container runtime versions and their known CVEs across every cluster you scan, so you can see at a glance which nodes are still running containerd 1.6.27 with CVE-2024-21626 exposure. We correlate runtime findings with the images they host, which means a vulnerable base image on an under-hardened runtime surfaces as a combined risk rather than two unrelated tickets. Our policy engine lets you enforce minimum containerd versions, require signed images, and flag clusters where the CRI socket is reachable from unexpected groups. If you are planning a 1.7 to 2.0 migration, we generate the delta of deprecated features you are still using so the upgrade stops being a guessing game.
