PodSecurityPolicy (PSP) was deprecated in Kubernetes 1.21 and removed in 1.25. If you are still running PSPs, your upgrade path is blocked. If you skipped PSPs entirely, you now need to deal with Pod Security Standards (PSS) anyway because running Kubernetes without pod-level security controls is reckless.
The new Pod Security Admission controller is simpler than PSPs, but "simpler" does not mean "trivial." Teams are getting this wrong in specific, predictable ways.
## What Pod Security Standards Actually Are
Pod Security Standards define three security levels for pods:
### Privileged
No restrictions. The pod can do anything the node kernel allows. This is the default if you do not configure anything, which is the core problem.
Use case: System-level components like CNI plugins, log collectors, or node monitoring agents that genuinely need host access.
### Baseline

Blocks known privilege escalation vectors while remaining compatible with most workloads. Key restrictions:

- No privileged containers
- No host namespaces (`hostPID`, `hostIPC`, `hostNetwork`)
- No `hostPath` volumes
- No added `NET_RAW` capability (used in ARP spoofing)
- Restricted `seccomp` and AppArmor profiles (no `Unconfined`)
- No `procMount` options beyond the default
Most applications run fine under Baseline without modification. If yours does not, that is worth investigating.
### Restricted

The strictest level. Everything in Baseline plus:

- Must run as non-root (`runAsNonRoot: true`)
- Must drop `ALL` capabilities (only `NET_BIND_SERVICE` may be added back)
- Seccomp profile must be explicitly set to `RuntimeDefault` or `Localhost`
- No privilege escalation (`allowPrivilegeEscalation: false`)
- `readOnlyRootFilesystem` is recommended, though not enforced by the standard itself
This is the target for production workloads. Getting here requires actual work on your container images and pod specs.
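As a concrete target, a pod spec that passes Restricted looks roughly like this (the names and image are illustrative, not from any real deployment):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: api                # illustrative name
  namespace: production
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
    seccompProfile:
      type: RuntimeDefault               # explicit seccomp profile, required by Restricted
  containers:
    - name: api
      image: registry.example.com/api:1.0   # illustrative image
      securityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop: ["ALL"]                  # Restricted requires dropping everything
        readOnlyRootFilesystem: true     # recommended, not required by Restricted
```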
## How the Admission Controller Works
The Pod Security Admission controller operates at the namespace level using labels:
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/warn: restricted
```
Three modes per namespace:
- `enforce` — Rejects pods that violate the policy
- `audit` — Allows the pod but logs the violation
- `warn` — Allows the pod but sends a warning to the user
The smart migration strategy: set enforce to your current level, audit and warn to your target level. This shows you what would break without actually breaking it.
```yaml
labels:
  # Currently enforcing baseline
  pod-security.kubernetes.io/enforce: baseline
  # But auditing and warning at restricted
  pod-security.kubernetes.io/audit: restricted
  pod-security.kubernetes.io/warn: restricted
```
## Migrating from PodSecurityPolicy
### Step 1: Audit Current PSP Usage
Before removing PSPs, understand what they are doing:
```shell
# List all PSPs
kubectl get psp

# Check which service accounts are bound to which PSPs
kubectl get clusterrolebinding -o json | \
  jq '.items[] | select(.roleRef.name | test("psp")) | {name: .metadata.name, subjects: .subjects}'
```
### Step 2: Map PSPs to Pod Security Levels
Most PSPs map cleanly to one of the three levels:
| PSP setting | Covered by Baseline | Covered by Restricted |
|---|---|---|
| `privileged: false` | Yes | Yes |
| `hostNetwork: false` | Yes | Yes |
| `runAsUser: MustRunAsNonRoot` | No | Yes |
| `requiredDropCapabilities: [ALL]` | No | Yes |
| `allowPrivilegeEscalation: false` | No | Yes |
If your PSP falls between Baseline and Restricted, you need to either tighten your pods to meet Restricted or accept Baseline.
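For example, a PSP like the following (a sketch; the name is illustrative) blocks privileged pods and host namespaces but does not require non-root, so it sits between Baseline and Restricted:

```yaml
apiVersion: policy/v1beta1        # legacy API, removed in Kubernetes 1.25
kind: PodSecurityPolicy
metadata:
  name: app-default               # illustrative name
spec:
  privileged: false               # maps to Baseline
  hostNetwork: false
  hostPID: false
  hostIPC: false
  runAsUser:
    rule: RunAsAny                # Restricted would need MustRunAsNonRoot
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
    - configMap
    - secret
    - emptyDir
    - persistentVolumeClaim
```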
### Step 3: Label Namespaces Incrementally
Start with warn mode across all namespaces:
```shell
# Apply restricted warnings to all non-system namespaces
for ns in $(kubectl get ns -o jsonpath='{.items[*].metadata.name}' | tr ' ' '\n' | grep -v '^kube-'); do
  kubectl label ns "$ns" \
    pod-security.kubernetes.io/warn=restricted \
    pod-security.kubernetes.io/audit=restricted \
    --overwrite
done
```
Run your workloads normally. Check audit logs and user-facing warnings. Fix violations. Then move to enforce.
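When you do flip a namespace to enforce, it is worth also pinning the policy version so a cluster upgrade does not silently change what "restricted" means:

```yaml
labels:
  pod-security.kubernetes.io/enforce: restricted
  pod-security.kubernetes.io/enforce-version: v1.29   # pin the policy definition to a Kubernetes minor version
  pod-security.kubernetes.io/audit: restricted
  pod-security.kubernetes.io/warn: restricted
```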
### Step 4: Remove PSPs Last
Only after all namespaces have PSS enforcement enabled:
```shell
# Disable the PSP admission controller in the API server:
#   --disable-admission-plugins=PodSecurityPolicy

# Then clean up PSP resources and the RBAC you created to grant them
kubectl delete psp --all
kubectl delete clusterrole,clusterrolebinding -l psp=true  # adjust the selector to however you labeled your PSP RBAC
```
## Common Migration Failures
### Init Containers Get Forgotten
Init containers run with the same security context restrictions as regular containers. Teams often have init containers that run as root for database migrations or volume permission fixes:
```yaml
initContainers:
  - name: fix-permissions
    image: busybox
    command: ["sh", "-c", "chown -R 1000:1000 /data"]
    securityContext:
      runAsUser: 0  # This fails under Restricted: pods must run as non-root
    volumeMounts:
      - name: data
        mountPath: /data
```
Fix: Use `fsGroup` in the pod-level security context instead, and let the kubelet handle volume ownership:

```yaml
securityContext:
  fsGroup: 1000
  runAsNonRoot: true
  runAsUser: 1000
```
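On large volumes, the recursive ownership change on every mount can slow pod startup noticeably; `fsGroupChangePolicy` limits the chown to when it is actually needed:

```yaml
securityContext:
  fsGroup: 1000
  fsGroupChangePolicy: OnRootMismatch   # only chown when the volume root's ownership differs
  runAsNonRoot: true
  runAsUser: 1000
```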
### Helm Charts Ship Insecure Defaults
Many popular Helm charts do not comply with Restricted out of the box. Redis, PostgreSQL, Elasticsearch—they often run as root or require capabilities.
Audit every chart:
```shell
helm template my-release bitnami/redis | \
  kubectl apply --dry-run=server -f - 2>&1 | \
  grep -i "forbidden\|warning"
```
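Where a chart exposes security context overrides in its values, you can often bring it under Restricted without forking it. The exact keys differ from chart to chart; a sketch of such an override (the key names here are illustrative) might look like:

```yaml
# values override -- key names vary between charts, check the chart's values.yaml
podSecurityContext:
  runAsNonRoot: true
  runAsUser: 1001
  seccompProfile:
    type: RuntimeDefault
containerSecurityContext:
  allowPrivilegeEscalation: false
  capabilities:
    drop: ["ALL"]
```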
### Service Mesh Sidecars Need Capabilities
Istio and Linkerd inject sidecar containers that may need NET_ADMIN or NET_RAW capabilities for traffic interception. This conflicts with Restricted.
Options:
- Use ambient mesh mode (Istio) which eliminates sidecars
- Use CNI plugins that handle traffic redirection at the node level
- Exempt the service mesh namespace from Restricted enforcement
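If you take the exemption route, relax only the enforce level for the affected namespace and keep audit and warn at restricted, so the gap stays visible in logs and warnings rather than disappearing entirely:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: istio-system   # illustrative mesh namespace
  labels:
    pod-security.kubernetes.io/enforce: privileged   # mesh components need capabilities PSS forbids
    pod-security.kubernetes.io/audit: restricted     # violations still land in audit logs
    pod-security.kubernetes.io/warn: restricted
```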
## Beyond the Built-In Controller
The built-in admission controller is intentionally limited. It only supports three predefined levels. You cannot customize rules per namespace or create exceptions for specific workloads.
For granular control, pair PSS with a policy engine:
```yaml
# Kyverno policy: enforce a restricted-style rule but exempt monitoring
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: restricted-with-exceptions
spec:
  validationFailureAction: Enforce
  rules:
    - name: restrict-privilege-escalation
      match:
        any:
          - resources:
              kinds:
                - Pod
      exclude:
        any:
          - resources:
              namespaces:
                - monitoring
                - kube-system
      validate:
        message: "Privilege escalation is not allowed"
        pattern:
          spec:
            containers:
              - securityContext:
                  allowPrivilegeEscalation: false
```
This gives you the simplicity of PSS as a baseline with the flexibility of custom policies for edge cases.
## Monitoring Compliance
Labels and audit logs are your primary observability tools:
```shell
# Check namespace labels
kubectl get ns -L pod-security.kubernetes.io/enforce

# Query API server logs for PSS activity
# (real audit events depend on your audit log backend)
kubectl logs -n kube-system -l component=kube-apiserver | \
  grep "pod-security.kubernetes.io"
```
In production, pipe audit events to your SIEM. PSS violations in audit mode are early warnings that someone is deploying non-compliant workloads.
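Audit-mode violations surface as `pod-security.kubernetes.io/audit-violations` annotations on the API server's audit events, so an audit policy that records pod writes at `Metadata` level is enough to collect them. A minimal sketch:

```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  - level: Metadata          # captures annotations, user, and object reference; no request bodies
    verbs: ["create", "update"]
    resources:
      - group: ""            # core API group
        resources: ["pods"]
```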
## The Bottom Line
Pod Security Standards are less flexible than PodSecurityPolicy but cover 90% of use cases with a fraction of the complexity. The migration path is clear:
- Label namespaces with `warn` and `audit` at your target level
- Fix violations in workloads
- Move to `enforce`
- Use a policy engine for the remaining 10% that needs custom rules
The biggest risk is not the migration itself. It is running Kubernetes clusters with no pod-level security controls at all, which is what happens by default.
## How Safeguard.sh Helps
Safeguard.sh monitors your Kubernetes clusters for pod security compliance and integrates with your existing PSS configuration. It provides visibility into which namespaces are enforcing which levels, flags workloads that would fail under stricter policies, and tracks your migration progress from PSP to PSS over time. The platform generates compliance reports that map your pod security posture against frameworks like CIS Kubernetes Benchmark, giving both security teams and auditors a clear picture of enforcement status across every cluster.