Container Security

Kubernetes 1.33 Security Deep Dive

Kubernetes 1.33 shipped with meaningful security changes: stronger admission controls, expanded structured authorization, and several deprecations that will affect production clusters.

Nayan Dey
Senior Security Engineer
6 min read

Kubernetes 1.33, released in April 2025, carries a security-adjacent changelog longer than any release since 1.28. Many of the changes are graduations or deprecations of features that have been in beta across multiple releases, so they are not net-new functionality, but their aggregate effect on production clusters is real. If you run Kubernetes at any non-trivial scale, 1.33 requires more than a routine upgrade pass: policies, audit configurations, and third-party admission controllers all need review. This post is an operator-focused rundown of what 1.33 actually changed from a security perspective, what production clusters should adjust before upgrading, and what the deprecation timeline looks like for the features that are on the way out.

What are the most consequential security changes in 1.33?

Four worth front-loading:

Structured authorization config GA. The structured authorizer configuration (allowing multiple authorizers in a specific order, with explicit rules) graduated to stable. This is a significant change for clusters using custom webhook authorizers because the configuration file format is now the canonical way to express authorizer chains.

Validating admission policy expression improvements. The CEL (Common Expression Language) based ValidatingAdmissionPolicy got expression-level improvements and better message handling. This is the admission path that complements Kyverno/Gatekeeper and is increasingly used for policies that are simple enough not to warrant a full policy engine.

Pod security admission maturity. Several rough edges in the Pod Security Admission controller (which replaced PodSecurityPolicy in 1.25) were smoothed in 1.33, specifically around namespace labeling and audit/warn mode behavior.
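Pod Security Admission is driven entirely by namespace labels, which is where most of the 1.33 polish landed. As a concrete illustration, a namespace can enforce one level while auditing and warning against a stricter one to surface violations before tightening; the namespace name and level choices below are hypothetical:

```yaml
# Hypothetical namespace: enforce "baseline" now, but audit and warn
# against "restricted" so violations surface before enforcement tightens.
apiVersion: v1
kind: Namespace
metadata:
  name: payments   # example namespace
  labels:
    pod-security.kubernetes.io/enforce: baseline
    pod-security.kubernetes.io/enforce-version: v1.33
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/audit-version: v1.33
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/warn-version: v1.33
```

Pinning the `*-version` labels to v1.33 keeps the policy definitions stable across the upgrade rather than silently tracking "latest."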

Service account token projection enhancements. More granular controls over projected service account token lifetimes and audience scoping, which is the right direction for workload identity.
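The projected-token mechanism these controls apply to looks like the sketch below; the pod name, image, audience, and mount path are all illustrative:

```yaml
# Hypothetical pod projecting a short-lived, audience-scoped service
# account token instead of relying on a long-lived secret-based token.
apiVersion: v1
kind: Pod
metadata:
  name: vault-client   # example name
spec:
  serviceAccountName: vault-client
  containers:
  - name: app
    image: registry.example.com/app:1.0   # placeholder image
    volumeMounts:
    - name: vault-token
      mountPath: /var/run/secrets/tokens
      readOnly: true
  volumes:
  - name: vault-token
    projected:
      sources:
      - serviceAccountToken:
          path: vault-token
          audience: vault          # token is only valid for this audience
          expirationSeconds: 600   # short lifetime; kubelet rotates it
```

Audience scoping means a token stolen from this workload cannot be replayed against the API server or any other service that validates its own audience.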

What deprecations will affect production?

Three worth tracking:

In-tree cloud provider code continues its retirement. Several cloud-provider-specific in-tree integrations continued their migration out of kubelet and kube-controller-manager in favor of external cloud controller managers. If your cluster still relies on in-tree cloud code, this is the release to migrate.

Legacy cgroup v1 support. Deprecation signals around cgroup v1 tightened. Clusters running older kernels or container runtimes tied to cgroup v1 should plan migration; 1.33 is not the removal release but the writing is on the wall.

Older feature gates. Several feature gates that have been GA for multiple releases are now removed, meaning the features are always-on and cannot be disabled. This rarely breaks things but is worth auditing if your cluster relies on toggling specific gates.

How does ValidatingAdmissionPolicy compare to Kyverno and Gatekeeper after 1.33?

ValidatingAdmissionPolicy (VAP) is now production-viable for the subset of policies that can be expressed in CEL. The trade-offs:

  • VAP strengths: no extra controller to operate, lower latency (in-process with the API server), straightforward for simple allow/deny policies.
  • VAP limitations: CEL is less expressive than Rego or Kyverno YAML for complex policies, no mutating-webhook equivalent (VAP is validating only), no advanced features like policy exceptions with workflows.
  • Kyverno strengths: richer policy expression, better tooling for policy-as-code workflows, mature mutation support.
  • Gatekeeper strengths: Rego is the most expressive option and scales to complex policies.

The emerging pattern in 2025: VAP for simple rules like "must have label X" or "images must come from registry Y"; Kyverno or Gatekeeper for richer supply-chain policy enforcement. Running VAP alongside Kyverno is increasingly common.
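The simple end of that spectrum looks roughly like the sketch below: a CEL expression restricting Deployments to an internal registry, bound to namespaces by label. The policy name, registry hostname, and namespace label are assumptions for the example:

```yaml
# Sketch of a ValidatingAdmissionPolicy plus binding. The CEL expression
# runs in-process in the API server; no external webhook is involved.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: allowed-registry
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
    - apiGroups: ["apps"]
      apiVersions: ["v1"]
      operations: ["CREATE", "UPDATE"]
      resources: ["deployments"]
  validations:
  - expression: >-
      object.spec.template.spec.containers.all(c,
        c.image.startsWith('registry.internal.example.com/'))
    message: "images must come from the internal registry"
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicyBinding
metadata:
  name: allowed-registry-binding
spec:
  policyName: allowed-registry
  validationActions: [Deny]
  matchResources:
    namespaceSelector:
      matchLabels:
        policy/enforce-registry: "true"   # hypothetical opt-in label
```

The binding is the useful operational seam: the same policy can run with `validationActions: [Audit, Warn]` in one set of namespaces and `[Deny]` in another while you roll it out.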

What is the structured authorization config and why does it matter?

Pre-1.33 clusters expressing complex authorizer chains had to use command-line flags that were order-sensitive and easy to misconfigure. Structured authorization config is a YAML/JSON file that declares the authorizer chain explicitly — including webhook authorizers with their timeouts and failure modes. For clusters using Node authorizer + RBAC + one or more webhook authorizers (which is most production clusters), the structured config makes the chain auditable and version-controllable.

Upgrade impact: if you are using webhook authorizers via flags, you should migrate to structured config before 1.34, because the flag-based path is on a deprecation track.
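A structured config for the common Node + RBAC + webhook chain might look like the sketch below; the webhook name, kubeconfig path, TTLs, and timeout are illustrative:

```yaml
# Sketch of an AuthorizationConfiguration file, passed to the API server
# via --authorization-config, replacing order-sensitive --authorization-mode
# and --authorization-webhook-* flags. Authorizers are evaluated in order.
apiVersion: apiserver.config.k8s.io/v1
kind: AuthorizationConfiguration
authorizers:
- type: Node
  name: node
- type: RBAC
  name: rbac
- type: Webhook
  name: opa-webhook   # hypothetical external authorizer
  webhook:
    timeout: 3s
    authorizedTTL: 30s
    unauthorizedTTL: 30s
    subjectAccessReviewVersion: v1
    matchConditionSubjectAccessReviewVersion: v1
    failurePolicy: NoOpinion   # on webhook failure, fall through the chain
    connectionInfo:
      type: KubeConfigFile
      kubeConfigFile: /etc/kubernetes/authz-webhook.kubeconfig
```

Because this is a file rather than a flag string, it can live in version control and be diffed during review, which is exactly the auditability benefit described above.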

What about CVE and patch landscape around 1.33?

Kubernetes has a well-run security response process, and 1.33 inherits several patches that landed during its development cycle. Notable:

  • Fixes for container escape paths reachable under particular runtime configurations.
  • Hardening of kubelet's handling of malformed configurations.
  • Patches to API aggregation and extension-apiserver handling.

None of these is an "upgrade immediately" crisis if you are currently on a supported version, but 1.33 consolidates multiple patches into a cleaner baseline.

What should be verified before upgrading production?

Five-item pre-upgrade checklist:

  1. Third-party admission controller compatibility. Kyverno, Gatekeeper, and commercial admission controllers all need a compatibility check against 1.33. Most vendors have published compatible versions.
  2. API version removals. Check any workloads or manifests still using deprecated API versions that are removed in 1.33.
  3. Pod Security Admission mode per namespace. If you are running in warn mode, audit the warn output now; enforce mode may surface issues.
  4. Authorizer configuration migration plan. If using webhook authorizers, plan the structured config migration.
  5. Controller workloads that depend on specific in-tree cloud provider behavior. Verify that the external cloud-controller-manager covers those cases.

What is a reasonable upgrade cadence target in 2025?

Run N or N-1 in production. Kubernetes ships three releases per year with a ~14-month support window; being more than one release behind starts compressing the decision timeline on security-relevant backports. Mid-2025 means 1.33 is current; clusters on 1.31 or older are outside the comfort zone.

Upgrading from two versions back is a bigger operation than upgrading one version; the aggregated deprecations and behavior changes compound. Budget accordingly.

How Safeguard Helps

Safeguard's Kubernetes posture module pulls the cluster version, admission controller configuration, Pod Security Admission state, authorizer chain, and feature gate posture across the cluster fleet into a single view. Policy gates can require cluster version ≥ N-1 and verify admission controller configuration matches the declared baseline. Griffin AI flags clusters that have drifted from the declared posture (structured authorizer config changes, admission controller downgrades, unexpected feature gate toggles) and summarizes upgrade readiness for each cluster against a target version. For organizations running many Kubernetes clusters across accounts and regions, Safeguard provides the fleet-level visibility that turns a 1.33 upgrade into a tracked program rather than a cluster-by-cluster scramble.
