Kubernetes 1.31, codenamed "Elli," was released on August 13, 2024. While every Kubernetes release includes security-relevant changes, 1.31 stands out for moving several long-awaited security features to general availability and introducing new capabilities that address real-world security pain points.
For security teams managing Kubernetes deployments, here is what matters in this release and how it affects your security posture.
AppArmor Support Reaches GA
The headline security feature of Kubernetes 1.31 is the promotion of AppArmor support to general availability (GA). AppArmor has been available in Kubernetes since version 1.4, but only as a beta feature using annotations. In 1.31, AppArmor profiles are now a first-class field in the pod security context.
What changed:
Previously, AppArmor profiles were specified via annotations on the pod:
```yaml
metadata:
  annotations:
    container.apparmor.security.beta.kubernetes.io/nginx: localhost/my-profile
```
In 1.31, AppArmor is specified directly in the security context:
```yaml
securityContext:
  appArmorProfile:
    type: Localhost
    localhostProfile: my-profile
```
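For context, here is a minimal but complete pod manifest using the GA field (the pod, image, and profile names are illustrative; the field can also be set per container):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-nginx
spec:
  securityContext:
    # Pod-level setting applies to every container in the pod
    appArmorProfile:
      type: Localhost
      localhostProfile: my-profile   # profile must already be loaded on the node
  containers:
  - name: nginx
    image: nginx:1.27
```

Note that Kubernetes does not distribute AppArmor profiles; the named profile must be loaded into the kernel on every node where the pod can be scheduled.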
Why it matters:
AppArmor provides mandatory access control (MAC) that restricts what a container process can do, even if the process is running as root or has escaped its container namespace. AppArmor profiles can limit:
- File system access (read, write, execute permissions for specific paths).
- Network operations (binding to ports, connecting to addresses).
- Capability usage (which Linux capabilities a process can exercise).
- Signal handling and ptrace operations.
Moving to GA means that AppArmor configuration is now validated by the Kubernetes API server, checked by the Pod Security admission controller, and covered by Kubernetes' GA stability guarantees. Organizations that were hesitant to rely on a beta feature in production can now deploy AppArmor profiles with confidence.
Recommendation: If you are running Kubernetes on Linux distributions that support AppArmor (Ubuntu, Debian, SUSE), develop and deploy AppArmor profiles for your workloads. Start with the runtime/default profile and customize from there.
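Requesting the runtime's default profile requires no custom profile authoring; a minimal sketch:

```yaml
# Pod-level security context: applies the container runtime's
# default AppArmor profile to every container in the pod.
spec:
  securityContext:
    appArmorProfile:
      type: RuntimeDefault
```

Once workloads run cleanly under RuntimeDefault, tighten further with a Localhost profile tailored to each workload.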
Improved Image Pull Policy Enforcement
Kubernetes 1.31 introduced enhanced control over image pull behavior, addressing a long-standing security concern.
The AlwaysPullImages admission controller has been available for years, but it had limitations. Kubernetes 1.31 added improvements to image pull policy enforcement that make it harder for pods to use cached images that they should not have access to.
The security issue: In multi-tenant clusters, if one tenant pulls a private image and the image is cached on the node, another tenant could potentially schedule a pod on the same node using the IfNotPresent pull policy and access the cached image without valid credentials.
The improvement: Enhanced admission controls and policy enforcement in 1.31 make it easier to mandate Always pull policies in multi-tenant environments and to enforce image provenance checks before containers start.
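One way to mandate Always pull policies today, independent of 1.31's internal improvements, is a ValidatingAdmissionPolicy (GA since 1.30); the policy name and message here are illustrative, and a ValidatingAdmissionPolicyBinding is still needed to put it into effect:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: require-always-pull
spec:
  matchConstraints:
    resourceRules:
    - apiGroups: [""]
      apiVersions: ["v1"]
      operations: ["CREATE", "UPDATE"]
      resources: ["pods"]
  validations:
  # imagePullPolicy is defaulted before validation, so the field is always set.
  # This rule covers regular containers; extend it to initContainers as needed.
  - expression: "object.spec.containers.all(c, c.imagePullPolicy == 'Always')"
    message: "All containers must set imagePullPolicy: Always in this cluster."
```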
Pod Security Admission Updates
The Pod Security Admission (PSA) controller, which replaced PodSecurityPolicy (deprecated in 1.21 and removed in 1.25), received refinements in 1.31:
Better audit logging: PSA now provides more detailed audit log entries when pods violate security policies. This makes it easier to identify workloads that are non-compliant and to track compliance over time.
Refined baseline profile: The baseline security profile received updates to align with current best practices, including tighter restrictions on certain volume types and process capabilities.
Warn mode improvements: The warn enforcement mode now provides more actionable feedback to developers, including specific remediation guidance for policy violations.
For organizations still migrating from PodSecurityPolicy to Pod Security Admission, 1.31 provides a more complete and practical replacement.
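PSA is configured per namespace via well-known labels; for example (the namespace name is illustrative):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: payments
  labels:
    # Reject pods that violate the baseline profile
    pod-security.kubernetes.io/enforce: baseline
    # Warn developers and record audit events for anything
    # that would fail the stricter restricted profile
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/audit: restricted
```

Running warn and audit at a stricter level than enforce is a common migration pattern: it surfaces future violations without breaking existing workloads.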
Secret Management Enhancements
Kubernetes 1.31 continued the evolution of secret management with improvements to how secrets are handled at rest and in transit:
Encryption configuration improvements: The KMS (Key Management Service) v2 API, which reached GA in 1.29, received performance optimizations in 1.31. These improvements reduce the latency of secret operations when using external KMS providers like HashiCorp Vault, AWS KMS, or Google Cloud KMS.
Bound service account tokens: Improvements to bound service account token volumes reduce the risk of token theft and replay. Tokens are now more tightly scoped and have shorter default lifetimes.
Recommendation: If you are not already encrypting secrets at rest using a KMS provider, this is the time to implement it. The performance characteristics in 1.31 make KMS-based encryption practical for high-throughput workloads.
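A minimal API server EncryptionConfiguration using a KMS v2 plugin might look like the following; the provider name and socket path depend on your plugin and are illustrative here:

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      # Writes are encrypted with the first provider in the list
      - kms:
          apiVersion: v2
          name: my-kms-provider
          endpoint: unix:///var/run/kms-plugin/socket.sock
          timeout: 3s
      # identity allows reading secrets written before encryption was enabled
      - identity: {}
```

After enabling this, re-encrypt existing secrets (for example with `kubectl get secrets --all-namespaces -o json | kubectl replace -f -`) so nothing remains stored in plaintext.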
Network Policy Enhancements
Network policy capabilities continued to evolve:
AdminNetworkPolicy: The AdminNetworkPolicy resource (still pre-GA as of 1.31, delivered as a CRD by SIG Network's network-policy-api project rather than in-tree) allows cluster administrators to define network policies that take precedence over namespace-level NetworkPolicy resources. This is crucial for enforcing cluster-wide security boundaries that tenant workloads cannot override.
Improved policy status reporting: Network policy implementations now provide better status reporting, making it easier to verify that policies are actually being enforced by the CNI plugin.
Recommendation: If you are using a CNI plugin that supports AdminNetworkPolicy (Cilium, Calico), begin piloting cluster-wide network policies for common security boundaries like restricting metadata service access and enforcing egress controls.
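As a sketch, an AdminNetworkPolicy that blocks egress to the cloud metadata endpoint might look like this, assuming a CNI that implements the network-policy-api CRDs (field names follow the v1alpha1 API and may change before GA):

```yaml
apiVersion: policy.networking.k8s.io/v1alpha1
kind: AdminNetworkPolicy
metadata:
  name: deny-metadata-egress
spec:
  priority: 10            # lower number = higher precedence
  subject:
    namespaces: {}        # empty selector: applies to all namespaces
  egress:
  - name: block-cloud-metadata
    action: Deny          # tenants cannot override a Deny with NetworkPolicy
    to:
    - networks:
      - 169.254.169.254/32
```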
Deprecations and Removals with Security Impact
Several deprecated features were removed or further restricted in 1.31:
In-tree cloud provider removal completed: Kubernetes 1.31 finished the migration from in-tree cloud provider code to external cloud controller managers, making it the first release with a cloud-provider-free core. This reduces the attack surface of the core Kubernetes codebase by removing cloud-provider-specific code that historically contained vulnerabilities.
Legacy service account token deprecation: The automatic mounting of non-expiring service account tokens is further deprecated. Organizations that have not migrated to bound service account tokens should prioritize this.
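Bound tokens are requested via a projected volume; a minimal sketch (the audience, lifetime, image, and mount path are illustrative):

```yaml
spec:
  automountServiceAccountToken: false   # opt out of the legacy token mount
  containers:
  - name: app
    image: my-app:1.0
    volumeMounts:
    - name: bound-token
      mountPath: /var/run/secrets/tokens
  volumes:
  - name: bound-token
    projected:
      sources:
      - serviceAccountToken:
          path: token
          expirationSeconds: 3600   # kubelet rotates the token before expiry
          audience: my-api          # token is rejected by other audiences
```

Because the token is audience-bound, time-limited, and tied to the pod's lifetime, a stolen copy is far less useful to an attacker than a legacy non-expiring token.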
Upgrade Considerations for Security Teams
When planning your upgrade to 1.31, security teams should:
- Audit existing AppArmor annotations: If you are using AppArmor via annotations, plan to migrate to the new security context fields. The annotations still work in 1.31 but will eventually be removed.
- Review Pod Security Admission profiles: Check your PSA configuration against the updated baseline and restricted profiles. Workloads that were compliant under previous versions may need adjustments.
- Test image pull policy enforcement: If you are running multi-tenant clusters, verify that image pull policies are correctly configured and enforced after the upgrade.
- Update RBAC policies: Review cluster RBAC policies to ensure they align with new API fields and resources introduced in 1.31.
- Validate network policies: Test that your network policies continue to function as expected after the upgrade, particularly if you are using newer features like AdminNetworkPolicy.
The Kubernetes Security Journey
Kubernetes 1.31 represents continued maturation of the platform's security model. The trend across recent releases has been clear:
- Moving security features from alpha/beta to GA, making them production-ready.
- Reducing default permissions and tightening default configurations.
- Improving auditability and visibility into security-relevant operations.
- Integrating with external security systems (KMS, OPA, AppArmor) rather than building everything in-house.
This trajectory aligns with the broader industry shift toward defense in depth, where the orchestration platform provides security primitives that organizations compose into comprehensive security strategies.
How Safeguard.sh Helps
Kubernetes security depends on visibility into every component in the cluster.
- Container image SBOM generation catalogs every dependency in your container images, providing vulnerability visibility that extends below the Kubernetes orchestration layer.
- Cluster component tracking monitors Kubernetes version, CNI plugin version, and other cluster infrastructure components for known vulnerabilities.
- Configuration compliance verifies that your Kubernetes deployments follow security best practices, including pod security contexts, network policies, and RBAC configurations.
- Continuous vulnerability monitoring alerts you when new CVEs affect any component in your Kubernetes stack, from the control plane to the application dependencies running in your pods.
Kubernetes security is only as strong as the weakest component in your cluster. Safeguard.sh ensures you can see them all.