A workload that lands in a Kubernetes cluster is harder to dislodge than a dependency that lands in a codebase. It will be referenced by other services, consume secrets, accept traffic, and become part of the operational fabric within hours. The admission moment — the brief window between kubectl apply and the pod actually scheduling — is the last cheap opportunity to refuse a workload that does not satisfy supply chain requirements. Beyond that point, every refusal becomes an outage.
Most clusters today admit workloads on the basis of trivial checks: does the image exist, does the namespace exist, are the resource requests within quota. Few admission policies ask the harder questions: does the image carry a valid signature from a trusted signer, does its SBOM declare components that satisfy the organization's policy, was the image built by a pipeline whose attestation we recognize, has it been scanned recently against current vulnerability data. Those questions are exactly the ones that supply chain compromises exploit when they go unasked.
Why admission is the right place
Admission control sits between two worlds. On one side, the developer's pipeline has produced an artifact and asked the cluster to run it. On the other side, the cluster will commit resources, expose endpoints, and accept the artifact's behavior as legitimate. The admission webhook is the last neutral arbiter that can say no without anyone losing work in progress.
It is also the right place because it generalizes. A vulnerability check at PR time only sees code being changed. A build-time scan only sees the artifact being assembled. The admission gate sees every image that any pipeline tries to deploy, including images from third-party Helm charts, vendor operators, and emergency hotfixes pushed outside the normal pipeline. A supply chain policy enforced only at PR time has a hole the size of every workload that did not pass through that team's pipeline. A supply chain policy enforced at admission time closes that hole because the cluster is the chokepoint.
The cost of a wrong admission decision is also lower than the cost of a wrong runtime decision. An admission rejection prevents a pod from starting; the developer sees a clear error and can either fix the issue or request an exception. A runtime kill terminates a workload that may already be serving traffic, with cascading consequences for upstream and downstream services. Admission errs on the side of preventing harm before it happens.
What a supply chain admission policy actually checks
A useful policy goes well beyond "the image must be signed." The richer set of checks looks like this:
- Signature verification. The image must carry a valid signature from a key or identity in the trusted signer set. The signer set is itself a versioned policy artifact.
- Provenance attestation. The image must include a provenance record naming the build pipeline, the source repository, and the commit hash. The pipeline must be on the allowed builder list.
- SBOM presence. The image must have an associated SBOM that the cluster can fetch and evaluate. Workloads without SBOMs are admissible only in non-regulated namespaces and only with explicit policy opt-in.
- Vulnerability freshness. The SBOM must have been evaluated against vulnerability data within a configurable freshness window — twenty-four hours for production, seven days for staging.
- License compliance. The SBOM components must satisfy the namespace's license policy. A namespace declared as closed-source-product cannot admit workloads carrying AGPL transitive dependencies.
- Compromised package check. No component in the SBOM may match the active compromised-package feed.
- Registry allowlist. The image must come from a registry on the allowlist for the target namespace. Internal registries for production, public registries only for sandbox.
- Runtime hardening. The pod spec itself must satisfy hardening rules: non-root user, read-only root filesystem, no privileged mode, no host network, no host PID, capabilities dropped to the minimum required.
Each of these is a separate rule, separately togglable, with its own rollout phase. Mixing them all into one monolithic policy makes the system harder to reason about and harder to evolve.
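To make that separation concrete, here is a minimal Go sketch of what per-rule configuration could look like. The type names, the audit/warn/enforce phases, and the Workload fields are illustrative assumptions, not a prescribed schema for any particular admission controller.

```go
package policy

// RolloutPhase controls how a rule's verdict is applied. The phase names
// (audit, warn, enforce) are an illustrative convention, not a standard.
type RolloutPhase string

const (
	PhaseAudit   RolloutPhase = "audit"   // record the verdict, never block
	PhaseWarn    RolloutPhase = "warn"    // surface a warning, still admit
	PhaseEnforce RolloutPhase = "enforce" // deny admission on failure
)

// Rule is one independently togglable supply chain check.
type Rule struct {
	Name    string       // e.g. "signature-verification", "registry-allowlist"
	Enabled bool         // each rule can be switched off without touching the others
	Phase   RolloutPhase // each rule rolls out on its own schedule
	Check   func(w Workload) error
}

// Workload is the slice of an admission request a rule needs to see.
// A real policy would carry the full pod spec, image metadata, SBOM, and so on.
type Workload struct {
	Namespace  string
	Registry   string
	RunAsRoot  bool
	Privileged bool
}

// Evaluate runs every enabled rule and partitions failures by phase,
// so enforcement decisions and audit-only findings stay separate.
func Evaluate(rules []Rule, w Workload) (denials, warnings []error) {
	for _, r := range rules {
		if !r.Enabled {
			continue
		}
		if err := r.Check(w); err != nil {
			switch r.Phase {
			case PhaseEnforce:
				denials = append(denials, err)
			default:
				warnings = append(warnings, err)
			}
		}
	}
	return denials, warnings
}
```

A new rule can then enter the cluster in audit, graduate to warn once the noise is understood, and move to enforce on its own schedule, without perturbing the rules that are already enforcing.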
Failure modes to design around
Admission webhooks have specific failure modes that policy authors need to anticipate.
Latency. Every admission decision blocks the apply call. A policy that calls out to a remote service for vulnerability data on every admission will introduce noticeable delay. Admission policies need to operate against pre-computed verdicts whenever possible — the pipeline produces an attestation that the cluster verifies, rather than the cluster recomputing the verdict from raw data.
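One way to keep the apply path fast is to verify a verdict the pipeline already produced rather than querying vulnerability data inline. A hedged sketch follows, assuming a hypothetical attestation payload that carries a scan timestamp; the field names and windows are illustrative only.

```go
package policy

import (
	"errors"
	"fmt"
	"time"
)

// ScanVerdict is the pre-computed result the build pipeline attaches to the
// image as a signed attestation. The shape here is hypothetical.
type ScanVerdict struct {
	ScannedAt     time.Time // when the SBOM was last evaluated against vuln data
	CriticalCount int       // findings at or above the blocking severity
}

// freshnessWindow returns the maximum acceptable age of a verdict for an
// environment: twenty-four hours for production, seven days otherwise.
func freshnessWindow(env string) time.Duration {
	if env == "production" {
		return 24 * time.Hour
	}
	return 7 * 24 * time.Hour
}

// CheckVerdict is cheap at admission time: no network call, just a clock
// comparison against evidence the pipeline already produced and signed.
func CheckVerdict(v ScanVerdict, env string, now time.Time) error {
	if age := now.Sub(v.ScannedAt); age > freshnessWindow(env) {
		return fmt.Errorf("scan verdict is %s old, exceeds %s window for %s",
			age.Round(time.Hour), freshnessWindow(env), env)
	}
	if v.CriticalCount > 0 {
		return errors.New("scan verdict reports blocking findings")
	}
	return nil
}
```

The admission decision then reduces to verifying a signature and comparing a timestamp, which keeps latency flat and pushes the expensive work back into the pipeline where it belongs.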
Webhook unavailability. If the webhook is down, the cluster has to choose between failing open (admit everything) and failing closed (admit nothing). Failing open is unsafe; failing closed can prevent emergency deploys during the worst possible moment. The pragmatic answer is to fail closed for production namespaces and fail open with an audit log entry for non-production, with a redundant webhook deployment so this rarely happens.
Bootstrap and upgrade. Cluster components, the control plane, and operators all need to deploy somehow, and a policy that blocks unsigned images must not end up blocking the very components that implement it. Bootstrap exemptions need to be narrow, named, and reviewed: a system namespace exemption is acceptable, a wildcard exemption is not.
Helm and operator workloads. Many third-party charts and operators ship images that do not satisfy strict supply chain policies. The policy needs a way to onboard these — usually through a policy-managed allowlist with explicit per-image entries — rather than dropping the policy entirely because a single chart refuses to comply.
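Both the bootstrap exemptions and the third-party onboarding path can share the same "explicit, named entries" shape. A sketch under assumed conventions: the entry fields, digest pinning, and expiry are design choices for illustration, not requirements of any particular tool.

```go
package policy

import "time"

// Exception is one reviewed, named admission exception. There is deliberately
// no wildcard form: every entry names an exact namespace and image reference.
type Exception struct {
	Name      string    // e.g. "vendor-operator-controller"
	Namespace string    // the single namespace it applies to
	ImageRef  string    // full reference, ideally pinned by digest
	Reason    string    // why it was granted, for the audit trail
	ExpiresAt time.Time // exceptions age out instead of accumulating forever
}

// Exempt reports whether a workload image is covered by a still-valid entry.
func Exempt(exceptions []Exception, namespace, imageRef string, now time.Time) bool {
	for _, e := range exceptions {
		if e.Namespace == namespace && e.ImageRef == imageRef && now.Before(e.ExpiresAt) {
			return true
		}
	}
	return false
}
```

Giving every entry an expiry forces a periodic re-review, so a chart that was exempted once does not stay exempt after it finally starts publishing signatures.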
Coordinating with earlier gates
Admission policies are most useful when they share their logic with PR-time and build-time gates. The same rule that flags a missing signature at PR time should be the rule that refuses the workload at admission. Otherwise, two failure modes appear: the policy drifts between gates, and the developer experience splits — a build that passed CI gets refused at deploy with a different error message and a different rationale.
The fix is to express policy in one place and let it drive all the gates. PR time evaluates against the proposed manifest. Build time evaluates against the assembled artifact. Admission time evaluates against the deployed workload. Runtime evaluates against the running behavior. The rule definitions are shared; only the inputs differ.
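A minimal sketch of what "the rule definitions are shared; only the inputs differ" can look like: each gate normalizes its own input into the same facts structure, and the rules never know which gate called them. The names and extractor functions below are assumptions for illustration.

```go
package policy

// Facts is the gate-neutral input every rule evaluates. At PR time it is
// extracted from the manifests in the diff, at build time from the assembled
// artifact, at admission time from the workload spec the API server hands us.
type Facts struct {
	Image        string
	Registry     string
	Signed       bool
	SBOMPresent  bool
	RunAsNonRoot bool
}

// A SharedRule has one definition evaluated identically at every gate.
type SharedRule struct {
	Name  string
	Check func(Facts) error
}

// Gate-specific code is only responsible for producing Facts, e.g. with
// hypothetical extractors:
//
//   facts := factsFromManifest(prDiff)        // PR-time
//   facts := factsFromArtifact(ociImage)      // build-time
//   facts := factsFromAdmission(reviewObject) // admission-time
//
// The same rule set runs against all three, so an image that passed CI
// cannot fail admission for a rule CI never evaluated.
```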
How Safeguard helps
Safeguard ships a Kubernetes admission webhook that reads from the same policy definitions used by the PR-time and build-time gates, so a supply chain rule lives in one place and applies consistently across the pipeline.
At PR time, Safeguard evaluates dependency manifests, Dockerfiles, and Kubernetes manifests against the policy set, surfacing missing signatures, non-compliant base images, and prohibited pod specs as PR comments before merge.
At build time, Safeguard generates and stores the SBOM, signs the artifact, and produces the attestations the admission webhook will later verify. The build pipeline is the producer of the evidence the cluster consumes.
At admission time, the Safeguard webhook receives the workload spec, fetches the image's SBOM and attestations, and evaluates the active policy set: signature validity, provenance, SBOM freshness, vulnerability status, license compatibility, registry allowlist, and pod hardening. Failed admissions return a structured error naming the rule, the offending field, and the documented override path. Bootstrap exemptions are narrow and auditable.
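To make "a structured error naming the rule, the offending field, and the documented override path" concrete, here is a generic sketch of one shape such a denial could take in an admission webhook. It is an illustration of the idea, not Safeguard's actual response format; all field names are hypothetical.

```go
package webhook

import "fmt"

// Denial is a hypothetical structured form of an admission rejection; the
// fields mirror what the surrounding text says a rejection should carry.
type Denial struct {
	Rule         string // which policy rule failed, e.g. "signature-verification"
	Field        string // the offending field in the submitted spec
	Detail       string // what was actually found
	OverridePath string // where the documented exception process lives
}

// Message renders the denial into the single string the developer sees from
// kubectl, so the error names the rule and the next step, not just "denied".
func (d Denial) Message() string {
	return fmt.Sprintf("denied by %s: %s (field %s); to request an exception see %s",
		d.Rule, d.Detail, d.Field, d.OverridePath)
}
```

A rejection then tells the developer which rule fired, what it saw, and where the exception process lives, rather than surfacing as an opaque admission failure.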
At runtime, Safeguard watches admitted workloads for drift — unexpected processes, unsanctioned outbound traffic, or packages appearing inside a container that were not present in its admitted SBOM — and feeds those signals back into policy decisions for future admissions.
The result is that the moment a workload tries to enter the cluster, every supply chain question worth asking is asked, the answer comes from the same policy definitions developers saw at PR time, and overrides flow through a recorded channel rather than being granted by whoever has cluster admin. The cluster becomes the place where supply chain policy stops being a recommendation and starts being a precondition for running.