
OpenShift Security Context Constraints: A Guide

SCCs predate Pod Security Admission by more than half a decade and are more powerful. That power is also why OpenShift newcomers find them confusing.

Shadab Khan
Security Engineer
7 min read

Security Context Constraints have been in OpenShift since the 3.0 release in 2015. They predate the Kubernetes PodSecurityPolicy by a year, outlived PSP's deprecation in Kubernetes 1.21, and continue to work essentially unchanged while the upstream community moves through Pod Security Admission and the various third-party alternatives.

If you run OpenShift, SCCs are how pod security is enforced. If you come from upstream Kubernetes, they look familiar but behave differently in ways that catch people out. This guide walks through what SCCs actually do on OpenShift 4.13 and 4.14 — the current stable releases as of late 2023 — and how to work with them without spending your first six months fighting the cluster.

What an SCC Is

An SCC is a cluster-scoped resource that describes a set of constraints that a pod must satisfy to run. It answers questions like: can the pod run as root? What user IDs is it allowed to request? What capabilities can it use? Can it use hostPath volumes? Can it use host networking?

OpenShift ships with a standard set of SCCs. The ones you will encounter most often are restricted-v2, restricted, nonroot-v2, nonroot, anyuid, hostnetwork-v2, and privileged. Each one allows progressively more capability. The restricted SCCs are the default for most workloads and are named v2 because the original restricted SCC from OpenShift 3.x had a looser posture that did not survive contact with modern container security expectations.

When a pod is created, the OpenShift admission controller evaluates the SCCs the pod's service account (or the requesting user) is allowed to use, orders them by priority and then by restrictiveness, and applies the first one the pod's spec can satisfy. This is the first big difference from Kubernetes PSA: SCC selection is automatic and per-pod, not configured per-namespace as a single tier.
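You can see which SCC admission selected for any running pod, because OpenShift records it in the openshift.io/scc annotation. A quick check (pod and namespace names here are placeholders):

```shell
# Print the SCC that admission assigned to a pod
oc get pod my-pod -n my-app \
  -o jsonpath='{.metadata.annotations.openshift\.io/scc}{"\n"}'
```

If the output is anything other than restricted-v2 for an ordinary application pod, something granted it extra privilege.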

The Default That Catches Newcomers

A fresh OpenShift namespace has no special SCC bindings. Pods in it run as restricted-v2, which means the pod runs as a UID allocated from the namespace's UID range — typically something like 1000650000 — not as the UID the container image specifies.
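The per-namespace UID range comes from annotations that OpenShift stamps onto every project at creation. A sketch of what they look like (the values are illustrative; actual ranges vary per cluster):

```yaml
# Illustrative namespace annotations set by OpenShift
apiVersion: v1
kind: Namespace
metadata:
  name: my-app
  annotations:
    openshift.io/sa.scc.uid-range: 1000650000/10000        # start UID / block size
    openshift.io/sa.scc.supplemental-groups: 1000650000/10000
    openshift.io/sa.scc.mcs: s0:c26,c5                     # SELinux MCS labels
```

Under restricted-v2, a pod that does not request a UID is assigned the first UID in this range.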

This is the thing that breaks most upstream Helm charts and most Docker Hub images on first contact with OpenShift. The nginx image expects to run as UID 101. The Redis image expects UID 999. The Postgres image expects UID 26. None of them work under restricted-v2: either the pod spec pins a forbidden UID and is rejected at admission, or the image's USER is overridden with the namespace-assigned UID and the process fails at runtime with permission errors on files owned by the UID it expected.

The right fix is almost never "give the workload anyuid." The right fix is to rebuild the image so it runs as an arbitrary UID, which means ensuring that the directories and files it writes (configuration under /etc, data directories, log paths) are group-owned by GID 0 and group-writable. Red Hat's UBI base images are built this way, which is a large part of why they exist.
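A minimal sketch of the arbitrary-UID build pattern (the paths and base image are illustrative, not prescriptive):

```dockerfile
# Illustrative Containerfile: make writable paths group-owned by root (GID 0)
FROM registry.access.redhat.com/ubi9/ubi-minimal
RUN mkdir -p /app/data && \
    chgrp -R 0 /app && \
    chmod -R g=u /app          # group gets the same permissions as the owner
USER 1001                      # any non-zero UID; GID 0 is supplied at runtime
COPY --chown=1001:0 app /app
CMD ["/app/run"]
```

At runtime OpenShift starts the container as an arbitrary UID that is a member of GID 0, so group permissions, not owner permissions, are what matter.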

When you cannot fix the image, the compromise is to bind the workload's service account to the nonroot-v2 SCC. This allows the image's expected UID as long as it is not zero, which covers most upstream images without opening the barn door.

The v2 Distinction Matters

OpenShift 4.11 introduced the v2 variants of the restricted, nonroot, and hostnetwork SCCs. The v2 versions align with Kubernetes' "restricted" Pod Security Standards profile, which means they drop additional capabilities, require the runAsNonRoot: true security context to be set explicitly, and require seccompProfile to be set.
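In practice this means a container spec that would pass silently under the legacy SCCs now needs an explicit security context. A sketch of what restricted-v2 (and the upstream restricted profile) expects:

```yaml
# Container-level securityContext satisfying restricted-v2 / PSA "restricted"
securityContext:
  runAsNonRoot: true
  allowPrivilegeEscalation: false
  capabilities:
    drop: ["ALL"]              # NET_BIND_SERVICE may be requested back if needed
  seccompProfile:
    type: RuntimeDefault
```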

The non-v2 legacy SCCs are still present for backward compatibility but are discouraged. On any cluster created with OpenShift 4.11 or later, new workloads should target v2. Older clusters that upgraded from 4.10 or earlier may still have workloads implicitly using the legacy SCCs, and part of a security hardening pass is finding those and migrating them.

The capability drop difference is not academic. restricted-v2 drops ALL capabilities and allows only NET_BIND_SERVICE to be added back. The legacy restricted SCC drops only a specific list (KILL, MKNOD, SETUID, SETGID), leaving capabilities like CHOWN, DAC_OVERRIDE, FOWNER, and FSETID available from the runtime's default set. A container escape from a restricted pod has meaningfully more kernel-level capability available than one from restricted-v2.

Binding SCCs to Service Accounts

SCCs are authorized via RBAC. Specifically, a service account can use an SCC if a RoleBinding grants it a ClusterRole with the use verb on the securitycontextconstraints resource, scoped to that SCC's name via resourceNames.

The OpenShift-provided ClusterRoles for this are system:openshift:scc:restricted-v2, system:openshift:scc:nonroot-v2, and so on. To give a specific service account access to nonroot-v2:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: allow-nonroot
  namespace: my-app
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:openshift:scc:nonroot-v2
subjects:
- kind: ServiceAccount
  name: my-app-sa
  namespace: my-app
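The same binding can also be created with the oc shortcut, which generates an equivalent RoleBinding (the names mirror the example above):

```shell
# Equivalent to the RoleBinding above; -z selects a service account
oc adm policy add-scc-to-user nonroot-v2 -z my-app-sa -n my-app
```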

Scoping SCC access to specific service accounts, rather than default or broad groups, is the hygiene that matters. The default service account in a namespace should have only restricted-v2. Workloads that need more should have their own service accounts with explicit bindings, which makes the access surface visible and auditable.

privileged and hostmount-anyuid Are Not Starter Kits

The privileged SCC allows everything — host namespaces, all capabilities, any volume type, any SELinux context, any UID. It exists for specific system workloads: the cluster monitoring stack, the OVN networking components, the node logging agents.

It is tempting to bind an application to privileged when something does not work and you cannot figure out why. Do not do this. The privileged SCC is functionally equivalent to root on the node, and any container running under it is one bug away from being root on the node.

hostmount-anyuid and hostnetwork-v2 occupy a middle ground: they allow specific host-level access (hostPath volumes and host networking, respectively) but not everything. These are appropriate for workloads that genuinely need node-level interaction, like storage provisioners or node-local caching layers. They are not appropriate for application workloads.

The audit question is: for every service account bound to an SCC more privileged than restricted-v2, does the workload actually need what the SCC allows? Often the answer is no, and the binding dates from a debugging session that nobody cleaned up.
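A quick way to run that audit is to inventory which SCC each pod was actually admitted under, cluster-wide (requires read access to all namespaces):

```shell
# Count pods per assigned SCC via the openshift.io/scc annotation
oc get pods -A \
  -o jsonpath='{range .items[*]}{.metadata.annotations.openshift\.io/scc}{"\n"}{end}' \
  | sort | uniq -c | sort -rn

# List the pods admitted under the privileged SCC
oc get pods -A \
  -o jsonpath='{range .items[*]}{.metadata.annotations.openshift\.io/scc}{" "}{.metadata.namespace}{"/"}{.metadata.name}{"\n"}{end}' \
  | grep '^privileged '
```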

Working With Operators

Operators installed via the Operator Lifecycle Manager frequently come with their own SCC requirements. A well-behaved operator declares its SCC needs in its ClusterServiceVersion and requests minimum privilege.

The audit pattern is to check which SCCs each installed operator actually binds. oc get csv -A gives you the operators; each CSV's .spec.install.spec.permissions and .clusterPermissions describe what it wants. Look for operators that bind to privileged or anyuid and ask whether their workload components actually need that level.
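One way to surface those grants is to filter each CSV's permission rules for the securitycontextconstraints resource (this sketch assumes jq is available):

```shell
# For each installed operator, print rules granting access to SCCs
oc get csv -A -o json | jq -r '
  .items[]
  | .metadata.name as $csv
  | (.spec.install.spec.clusterPermissions // [])[]
  | .rules[]?
  | select((.resources // []) | index("securitycontextconstraints"))
  | "\($csv): \(.verbs | join(",")) on \(.resourceNames // ["(all)"] | join(","))"'
```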

A particularly common pattern is an operator that itself runs restricted but deploys workloads that require elevated SCCs. Monitoring stacks, storage operators, and networking operators all tend to do this. The SCC bindings created by the operator are typically namespace-scoped to the operator's namespace, so the blast radius is limited, but the bindings are still worth reviewing.

Pod Security Admission Coexistence

OpenShift 4.11 added Pod Security Admission support alongside SCCs. Both systems evaluate pods. When a pod fails either system's check, it is rejected.

Practically, this means a pod needs to satisfy both the applicable SCC and the namespace's PSA labels. By default, OpenShift sets global PSA enforcement to privileged (effectively off) with the warn and audit modes at restricted, so SCCs do the enforcement. But if you set PSA to restricted at enforce level, you will get a second layer of rejection that does not necessarily produce clear error messages.

The right default, as of 4.14, is to let SCCs be the primary enforcer and use PSA at warn level to catch workloads that would not satisfy upstream Kubernetes' restricted standard. This surfaces portability issues without adding a redundant enforcement layer.
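To get that warn-level signal on a namespace without enforcement, label it accordingly. A sketch (note that OpenShift's pod security label syncer may manage these labels itself, so check whether syncing is enabled for the namespace first):

```shell
# Warn, but do not reject, when pods would fail the upstream restricted profile
oc label namespace my-app \
  pod-security.kubernetes.io/warn=restricted \
  pod-security.kubernetes.io/warn-version=latest --overwrite
```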

How Safeguard Helps

Safeguard inventories SCC bindings across OpenShift clusters and flags the patterns that indicate overprivileged workloads: service accounts bound to anyuid or privileged where the pod spec would run fine under restricted-v2, operators requesting SCCs beyond their actual need, and SCC role bindings that have drifted from the baseline without an associated change record. We correlate SCC privilege with workload sensitivity so the anyuid binding on a low-risk internal tool is less urgent than the same binding on a payment service. Our policy gates can block deployments that request SCCs above a tier-appropriate baseline, and our compliance reports translate SCC posture into the controls that CIS OpenShift, STIG, and PCI evaluations actually care about.
