
Kubernetes Secrets Encryption Providers Reviewed

etcd encryption at rest finally works out of the box. The question is which provider you use, and the trade-offs have sharpened in 2024.

Nayan Dey
Senior Security Engineer
8 min read

Kubernetes Secrets are stored in etcd. Until relatively recently, they were stored there base64-encoded and nothing else, which is to say plaintext in any meaningful sense. Anyone who could read the etcd backend could read every Secret in the cluster.

That is no longer the default for any serious distribution. But the options for fixing it have multiplied, and the trade-offs between them are not obvious on a first read of the documentation. This post walks through the providers available in Kubernetes 1.29 — the late-2023 release that stabilized KMS v2 — and the decisions that follow from each one.

The Baseline: identity

The default provider, if you configure nothing, is identity. It does not encrypt anything. Secrets land in etcd in their original form.

In 2024, running a production cluster with identity as the Secret encryption provider is indefensible. The major managed Kubernetes services (EKS, GKE, AKS) and mainstream Rancher and OpenShift deployments configure something stronger out of the box, whether through a KMS provider or storage-layer encryption of etcd itself. If you are running your own cluster with kubeadm or a comparable tool and have not set up encryption at rest, that is a finding the CIS Kubernetes Benchmark will flag and that any competent auditor will mark as a deficiency.

The rest of this post assumes you have decided encryption is table stakes. The question is what to encrypt with.

aescbc: The Historical Default

aescbc uses AES-256 in CBC mode with a randomly generated IV for each write. The key is stored in a file on disk: specifically, in the EncryptionConfiguration file that kube-apiserver reads at startup via its --encryption-provider-config flag.

This is the simplest encryption provider and the one most people first encountered. It has real security properties: an attacker who gains access to etcd alone cannot read secrets without also reading the encryption configuration file. That file is typically on the control-plane nodes, which have tighter access controls than etcd itself.

The catch is key management. The AES key lives on disk. Rotating it means adding a new key as the first entry in the configuration, restarting every kube-apiserver, re-encrypting every Secret with the new key, and only then removing the old key. Kubernetes supports this through the --encryption-provider-config flow and the kubectl get secrets -A -o json | kubectl replace -f - migration pattern, but the process is manual and error-prone.
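Concretely, the mid-rotation state of the configuration looks like this: the new key sits first in the list so it is used for all writes, while the old key stays available for reads until the migration pass completes. The key names and base64 values here are placeholders:

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key2    # new key: first in the list, used for all writes
              secret: <base64-encoded 32-byte key>
            - name: key1    # old key: kept so existing Secrets stay readable
              secret: <base64-encoded 32-byte key>
      - identity: {}        # reads Secrets written before encryption was enabled
```

Once every kube-apiserver has restarted with this configuration and the migration pass has rewritten each Secret under key2, the key1 entry can be removed.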

For single-cluster deployments without existing KMS infrastructure, aescbc is a reasonable starting point. It is not a good long-term answer.

A subtlety: AES-CBC is susceptible to padding oracle attacks, but only when the attacker can observe decryption outcomes, which for Secret storage they generally cannot. The cryptographic community has nonetheless moved toward authenticated encryption modes, which is where aesgcm comes in.

aesgcm: The Improvement That Was Never Quite Recommended

aesgcm uses AES-256 in GCM mode, which is authenticated encryption. It detects tampering with ciphertexts, which aescbc does not.

In theory, this is the better option. In practice, the Kubernetes documentation has historically discouraged aesgcm because the implementation requires that IVs never repeat with the same key, and the provider uses random IVs without additional state to ensure non-repetition. At the scale of Secret storage — typically fewer than 10,000 Secrets per cluster — the birthday bound on 96-bit random IVs is not actually a practical concern. The documentation's caution is conservative to the point of being misleading.

Kubernetes 1.28 quietly softened the aesgcm warnings. In 2024, using aesgcm for a cluster with reasonable Secret count is fine, and the authentication property it provides is a real upgrade over aescbc. If you are configuring local-key encryption today, prefer aesgcm.
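The configuration is identical in shape to the aescbc case, with only the provider name swapped. A minimal sketch, with a placeholder key value (a suitable key can be generated with something like head -c 32 /dev/urandom | base64):

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aesgcm:
          keys:
            - name: key1
              secret: <base64-encoded 32-byte key>
      - identity: {}    # fallback for reading never-encrypted Secrets
```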

KMS v1: The Bridge to Hardware

The KMS provider in its v1 form was introduced in Kubernetes 1.10 and remained the stable KMS option through 1.26. It delegates envelope encryption to an external KMS: a key-encryption key (KEK) lives in the KMS (AWS KMS, GCP KMS, Azure Key Vault, HashiCorp Vault, or a hardware HSM), and per-Secret data-encryption keys (DEKs) are encrypted by the KEK.
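The API server talks to the KMS plugin over a local Unix socket. A sketch of a v1 configuration follows; the plugin name and socket path are placeholders and depend on the plugin you deploy:

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - kms:
          name: my-kms-plugin                        # must match the plugin's name
          endpoint: unix:///var/run/kms-plugin.sock  # socket the plugin listens on
          cachesize: 1000                            # decrypted DEKs cached in memory
          timeout: 3s                                # per-request KMS RPC timeout
      - identity: {}
```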

The operational properties improve substantially. Key rotation happens at the KMS level. Audit logging of key usage happens at the KMS level. Access to the KEK is governed by cloud IAM or Vault policy, which is usually better-audited than file permissions on a control-plane node.

The operational cost is a new dependency. Every Secret read and every Secret write goes through the KMS. If the KMS is unavailable, kube-apiserver cannot serve Secret operations. Kubernetes caches DEKs in memory to mitigate this for reads, but writes still require a round trip.

The v1 implementation has some performance limits. It re-encrypts the DEK per-request rather than reusing DEKs across Secrets, and the KMS RPC is synchronous. On large clusters with high Secret churn, this shows up as latency on pod startup, because pod creation reads Secrets from the API server and hits the KMS path.

CVE-2023-5528 is worth mentioning here, even though it has nothing to do with the encryption providers: it was an input-sanitization bug in the in-tree storage plugin that let users who could create pods on Windows nodes escalate to administrator privileges. The fix shipped in 1.28.4, 1.27.8, and 1.26.11. The broader point is that encryption at rest is one control among many: cluster security also depends on every component that handles Secret data behaving correctly.

KMS v2: The Rewrite That Actually Delivered

KMS v2 was introduced as beta in Kubernetes 1.25 and graduated to stable in 1.29. It is a significant rewrite that addresses most of the v1 pain points.

The v2 implementation uses a longer-lived DEK, cached in the API server, and encrypts the DEK with the KMS-held KEK only periodically rather than per-request. This removes the per-Secret KMS round trip and makes performance essentially indistinguishable from local encryption for read-heavy workloads.

The new plugin protocol is a proper gRPC API with versioning, health checks, and key status endpoints. Plugin authors no longer have to reimplement the Kubernetes-specific request format.

KMS v2 also improves rotation handling: each plugin response carries a key ID, and when the API server observes a new KEK version it starts wrapping DEKs with the new key for subsequent writes. Previously stored Secrets remain readable and can be re-encrypted under the new KEK with a standard storage migration pass.
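Selecting v2 is a one-line change in the provider stanza: the apiVersion field on the kms provider. The plugin name and socket path below are placeholders:

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - kms:
          apiVersion: v2                             # omitting this selects v1
          name: my-kms-plugin
          endpoint: unix:///var/run/kms-plugin.sock
      - identity: {}
```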

For clusters on 1.29 and later, KMS v2 is the right choice. AWS, GCP, and Azure all ship KMS v2 providers for their managed Kubernetes services, and HashiCorp Vault added KMS v2 support in early 2024.

The Provider Selection Matrix

For a small, single-environment cluster without existing KMS infrastructure: aesgcm. Simple to set up, real encryption, no new dependencies.

For a cluster that already uses a cloud KMS for other purposes (database encryption, object storage encryption, etc.): KMS v2 with the cloud provider's plugin. The operational story is consistent with how other secrets are handled, and the audit trail lands in the same place.

For regulated environments that require HSM-backed key storage: KMS v2 with Vault or a hardware HSM. The performance is acceptable now in a way it was not under v1.

For environments that should not have secrets encrypted with a single KMS provider (multi-region failover concerns, cryptographic agility requirements): the Kubernetes encryption configuration supports multiple providers in sequence, where the first provider is used for writes and all providers are tried for reads. This enables migration between providers and multi-KMS setups.
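A sketch of that ordering during a migration from aescbc to a cloud KMS: the kms entry comes first, so all new writes go through the KMS, while the aescbc and identity entries keep older data readable until a migration pass rewrites it. Names, paths, and the key value are placeholders:

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - kms:                    # first provider: used for all writes
          apiVersion: v2
          name: cloud-kms
          endpoint: unix:///var/run/cloud-kms.sock
      - aescbc:                 # tried on reads for data written under the old scheme
          keys:
            - name: legacy-key
              secret: <base64-encoded 32-byte key>
      - identity: {}            # reads Secrets that were never encrypted
```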

What Encryption at Rest Does Not Do

Secret encryption at rest protects against attackers who read etcd backups or snapshots. It does not protect against attackers who can read Secrets through the Kubernetes API, which means anyone granted sufficient RBAC in the cluster. It does not protect against attackers who compromise kube-apiserver's memory. And it does not protect against attackers who compromise a pod that mounts the Secret.

For those threats, you need additional layers: tight RBAC on Secret resources, external secret management (Vault, AWS Secrets Manager, Secret Store CSI Driver), and runtime protections that prevent containers from dumping their mounted Secrets.

The encryption at rest story is a prerequisite, not a substitute. A cluster that has aesgcm encryption and also has every service account authorized to read every Secret is not in a good place. Fix both.

How Safeguard Helps

Safeguard audits Kubernetes encryption-at-rest configuration across every cluster you scan, identifying clusters using identity, legacy aescbc setups with old keys, and KMS v1 deployments that should migrate to v2. We correlate encryption configuration with Secret contents — not the values, the types — so a cluster storing cloud credentials or database passwords without strong encryption gets flagged as higher priority than one with only application config Secrets. Our policy gates enforce minimum encryption standards at deployment time, and our compliance reports translate provider choice into the specific controls that SOC 2, PCI-DSS, and FedRAMP auditors ask about. When KMS v2 providers receive CVE updates, we correlate the advisory with clusters actually using the affected provider version so the patch cadence has a clear target list.
