A vulnerable image shouldn't get to run in your cluster just because the CI gate didn't catch it — CI and cluster are different layers, and you want defense-in-depth. The Safeguard admission controller is a validating webhook that intercepts pod creation, looks up the image's SBOM and advisory posture, and allows or denies based on policy. This walkthrough installs it, configures it, and wires it into your first namespace.
What Exactly Does the Admission Controller Check?
On every Pod admission, it validates four things: (1) the image digest is attested in the Safeguard portal with a valid provenance chain, (2) the image's SBOM has no open findings above the configured severity, (3) the image signature passes verification against the expected signing identity, and (4) the deployment manifest passes a policy check (privileged mode, required labels, etc.). A pod that fails any of these checks is rejected with a clear message pointing to the remediation.
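The four checks can be thought of as a single gate function. The following is an illustrative Python model, not the controller's actual code — the ImagePosture type, its field names, and the severity ordering are invented for the example:

```python
from dataclasses import dataclass, field

SEVERITIES = ["low", "medium", "high", "critical"]

@dataclass
class ImagePosture:
    # Hypothetical view of what the controller fetches from the portal.
    attested: bool
    signature_valid: bool
    finding_severities: list = field(default_factory=list)

def admit(posture: ImagePosture, manifest_ok: bool, fail_on: str = "critical"):
    """Return (allowed, reason), mirroring the four checks in order."""
    if not posture.attested:
        return False, "image digest is not attested in the portal"
    threshold = SEVERITIES.index(fail_on)
    blocking = [s for s in posture.finding_severities
                if SEVERITIES.index(s) >= threshold]
    if blocking:
        return False, f"{len(blocking)} findings at or above {fail_on}"
    if not posture.signature_valid:
        return False, "signature verification failed"
    if not manifest_ok:
        return False, "manifest policy check failed"
    return True, "ok"
```

An image with only medium findings passes at the default threshold; a single critical finding blocks it.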
Step 1: Install via Helm
The controller ships as a Helm chart. Add the repo and install into a dedicated namespace:
helm repo add safeguard https://charts.safeguard.sh
helm repo update
kubectl create namespace safeguard-system
helm install safeguard-admission safeguard/admission \
  --namespace safeguard-system \
  --set auth.token=$SAFEGUARD_TOKEN \
  --set policy.mode=warn \
  --set webhook.failurePolicy=Ignore
Start in warn mode with failurePolicy=Ignore. You don't want the admission controller blocking pods on day one — you want to see what it would have blocked, fix the highest-priority items, then switch to enforcement.
Step 2: Verify the Webhook Is Registered
kubectl get validatingwebhookconfiguration safeguard-admission
kubectl -n safeguard-system get pods
kubectl -n safeguard-system logs deploy/safeguard-admission
The pods should be Running, and the logs should show webhook listening on :8443 along with a list of the policies loaded. If the webhook isn't receiving admission requests, check that your API server can reach the service — on managed Kubernetes this usually just works; on self-hosted setups you may need to add the service CIDR to the API server's egress allowlist.
Step 3: Deploy a Test Pod
Deploy something known-bad:
apiVersion: v1
kind: Pod
metadata:
  name: safeguard-test
  namespace: default
spec:
  containers:
  - name: app
    image: ghcr.io/vulhub/log4j-cve-2021-44228:latest
Apply it. In warn mode the pod will be admitted, but you'll see a warning event:
Warning SafeguardPolicy 15s safeguard-admission
Image has 3 critical advisories; would be blocked in enforce mode.
See https://app.safeguard.sh/images/sha256:abc123...
The event link opens the portal at the image's risk profile — same data CI sees.
Step 4: Configure Policy
Edit your values file:
policy:
  mode: warn
  defaults:
    failOn: critical
    requireSbom: true
    requireSignature: true
  allowedRegistries:
    - ghcr.io/mycorp
    - registry.internal.mycorp.com
    - public.ecr.aws/docker/library
  exemptNamespaces:
    - kube-system
    - safeguard-system
    - external-dns
  perNamespace:
    production:
      failOn: high
      requireSignature: true
      requireAttestation: true
    staging:
      failOn: critical
    sandbox:
      failOn: "off"   # quoted: bare `off` parses as boolean false in YAML 1.1
The perNamespace block is the most valuable feature. Production can require attestation; sandbox can be permissive. Re-apply the chart with helm upgrade to push the new policy.
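One plausible way to model the layering — exempt namespaces skip policy entirely, and perNamespace keys override the defaults — is a simple dict merge. This is a sketch of the precedence described above, not the chart's actual merge logic:

```python
DEFAULTS = {"failOn": "critical", "requireSbom": True, "requireSignature": True}

PER_NAMESPACE = {
    "production": {"failOn": "high", "requireAttestation": True},
    "staging": {"failOn": "critical"},
    "sandbox": {"failOn": "off"},
}

EXEMPT = {"kube-system", "safeguard-system", "external-dns"}

def effective_policy(namespace: str):
    """Exempt namespaces get no policy; others get defaults plus overrides."""
    if namespace in EXEMPT:
        return None
    return {**DEFAULTS, **PER_NAMESPACE.get(namespace, {})}
```

Note that a namespace with no perNamespace entry silently inherits the defaults — worth keeping in mind when a new namespace appears in the cluster.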
Step 5: Move to Enforce Mode
Once the warning stream has quieted down — meaning your team has either remediated or documented ignores for everything that would block — switch modes:
helm upgrade safeguard-admission safeguard/admission \
  --namespace safeguard-system \
  --reuse-values \
  --set policy.mode=enforce \
  --set webhook.failurePolicy=Fail
failurePolicy=Fail means the webhook itself being unavailable is a hard deny. That's the right choice for production — if your security control is down, you don't keep admitting workloads. But it requires that the admission controller itself is genuinely highly available. Run at least three replicas and consider spreading them across zones.
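A values fragment for that HA posture might look like the following — the replicaCount and topologySpreadConstraints keys are assumptions about the chart's schema, so check `helm show values safeguard/admission` for the actual key names:

```yaml
replicaCount: 3
topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: ScheduleAnyway
    labelSelector:
      matchLabels:
        app.kubernetes.io/name: safeguard-admission
```

`ScheduleAnyway` keeps the spread a soft preference, so a single-zone degradation doesn't leave the webhook unschedulable.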
Step 6: Wire Up Signature Verification
Signatures are verified against a trust policy. Create one:
# trust-policy.yaml
version: v1
policies:
  - name: internal-images
    pattern: "registry.internal.mycorp.com/*"
    requiredIdentities:
      - subject: "https://github.com/mycorp/*/.github/workflows/release.yml@refs/tags/*"
        issuer: "https://token.actions.githubusercontent.com"
  - name: approved-base-images
    pattern: "public.ecr.aws/docker/library/*"
    requiredIdentities:
      - subject: "https://github.com/docker-library/official-images/*"
        issuer: "https://token.actions.githubusercontent.com"
Apply it with kubectl -n safeguard-system create configmap safeguard-trust --from-file=trust-policy.yaml. The controller reloads the trust policy automatically on change.
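The pattern and subject fields read like glob patterns. A minimal sketch of how such matching might work, using Python's fnmatch as a stand-in for the real matching semantics (which the trust-policy format doesn't specify here):

```python
from fnmatch import fnmatch

def identity_allowed(image_ref, cert_subject, cert_issuer, policies):
    """First policy whose pattern matches the image wins; the certificate
    identity must then match one of its requiredIdentities exactly on
    issuer and glob-wise on subject."""
    for pol in policies:
        if fnmatch(image_ref, pol["pattern"]):
            return any(
                fnmatch(cert_subject, ident["subject"])
                and cert_issuer == ident["issuer"]
                for ident in pol["requiredIdentities"]
            )
    return False  # no policy covers this image: not allowed

POLICIES = [
    {
        "pattern": "registry.internal.mycorp.com/*",
        "requiredIdentities": [{
            "subject": "https://github.com/mycorp/*/.github/workflows/release.yml@refs/tags/*",
            "issuer": "https://token.actions.githubusercontent.com",
        }],
    },
]
```

The sketch makes one behavior explicit: an image that matches no policy at all is denied, not silently allowed.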
How Do I Handle Images That Don't Have SBOMs Yet?
Two options, depending on how strict you want to be. The lenient option is to set requireSbom: false for a transition period — the controller will still check advisories against whatever SBOM the Safeguard portal has cached or can generate from a pull, but won't hard-require one. The strict option is requireSbom: true combined with Safeguard-side automatic SBOM generation for unknown images:
policy:
  defaults:
    requireSbom: true
    generateMissingSbom: true
    generateTimeout: 30s
The controller will trigger a backfill: pull the image, generate an SBOM via the Safeguard backend, and then re-evaluate. This adds first-admission latency for unknown images (the cache is warm after that), so enable it only if that latency is acceptable.
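The backfill flow amounts to "use the cached SBOM if one exists, otherwise generate one under a deadline". A sketch under that assumption — fetch_sbom and generate_sbom are hypothetical callables standing in for the Safeguard backend, not a real client API:

```python
import concurrent.futures

def evaluate_with_backfill(digest, fetch_sbom, generate_sbom,
                           generate_missing=True, timeout_s=30.0):
    """Sketch of the first-admission flow: reuse a cached SBOM if the
    portal has one; otherwise optionally generate, bounded by a timeout."""
    sbom = fetch_sbom(digest)
    if sbom is not None:
        return "evaluate", sbom
    if not generate_missing:
        return "deny: no SBOM", None
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(generate_sbom, digest)
        try:
            return "evaluate", future.result(timeout=timeout_s)
        except concurrent.futures.TimeoutError:
            return "deny: SBOM generation timed out", None
```

The timeout is what keeps a slow registry pull from stalling every admission of an unknown image indefinitely.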
How Do I Debug a Denied Pod?
Three places to look. First, the API server response to kubectl apply — the denial message is included and usually sufficient. Second, the controller logs:
kubectl -n safeguard-system logs deploy/safeguard-admission --tail=200 \
| grep "$(kubectl get pod <pod-name> -o jsonpath='{.metadata.uid}')"
Third, the portal — every admission decision is logged to https://app.safeguard.sh/clusters/<cluster-id>/admissions with the full JSON of the admission request, the policy that was evaluated, and the decision rationale. This is the single best audit trail when a developer asks "why did my deploy get rejected yesterday?"
How Do I Handle Emergency Overrides?
Never use kubectl patch validatingwebhookconfiguration to disable the webhook — that's a nuclear option that leaves the cluster in an uncontrolled state until someone remembers to turn it back on. Instead, use annotated overrides:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hotfix
  annotations:
    safeguard.sh/override: "INCIDENT-4821"
    safeguard.sh/override-approver: "ciso@mycorp.com"
    safeguard.sh/override-expires: "2026-04-01T00:00:00Z"
Overrides are allowed only for specific RBAC-bound subjects (configured via policy.overrideRbac), every override is logged with the ticket reference, and safeguard.sh/override-expires enforces a hard cap. Unlike a disabled webhook, an override is auditable and self-expiring.
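The annotation checks can be modeled as a small validation function. This is an illustrative sketch: the ticket-format regex is an assumption, and the RBAC check on the requesting subject happens elsewhere:

```python
from datetime import datetime, timezone
import re

TICKET_RE = re.compile(r"^[A-Z]+-\d+$")  # assumed ticket format

def override_active(annotations: dict, now: datetime) -> bool:
    """An override counts only when it names a plausible ticket, an
    approver, and an expiry timestamp that has not yet passed."""
    ticket = annotations.get("safeguard.sh/override", "")
    approver = annotations.get("safeguard.sh/override-approver", "")
    expires = annotations.get("safeguard.sh/override-expires", "")
    if not (TICKET_RE.match(ticket) and approver and expires):
        return False
    expiry = datetime.fromisoformat(expires.replace("Z", "+00:00"))
    return now < expiry
```

The hard expiry is the point: a forgotten override simply stops working, instead of lingering like a disabled webhook would.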
How Do I Handle High-Traffic Clusters?
Two performance levers. First, the image evaluation cache — the controller caches decisions per image digest, so repeated admissions of the same image are instant:
cache:
  enabled: true
  ttl: 600s
  maxEntries: 20000
Second, side-car mode for very high pod-creation rates. The controller can run as a per-node DaemonSet alongside the kubelet, evaluating admissions locally with a warm cache rather than going over the network to a central service. This reduces p99 admission latency dramatically on clusters doing thousands of pod admissions per minute.
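The per-digest decision cache described above behaves like a TTL map keyed by image digest. A simplified model — ttl and maxEntries mirror the config keys, but the eviction strategy here (oldest-first) is an assumption; a real implementation would likely use LRU:

```python
import time

class DecisionCache:
    """Illustrative digest-keyed TTL cache for admission decisions."""

    def __init__(self, ttl_s=600, max_entries=20000, clock=time.monotonic):
        self.ttl_s, self.max_entries, self.clock = ttl_s, max_entries, clock
        self._entries = {}  # digest -> (decision, inserted_at)

    def get(self, digest):
        hit = self._entries.get(digest)
        if hit is None:
            return None
        decision, inserted_at = hit
        if self.clock() - inserted_at > self.ttl_s:
            del self._entries[digest]  # expired: force re-evaluation
            return None
        return decision

    def put(self, digest, decision):
        if len(self._entries) >= self.max_entries:
            # Evict the oldest entry to stay within the configured bound.
            oldest = min(self._entries, key=lambda d: self._entries[d][1])
            del self._entries[oldest]
        self._entries[digest] = (decision, self.clock())
```

Keying on digest rather than tag matters: a moving tag gets re-evaluated as soon as it points at a new digest, while the TTL bounds how long a cached decision can outlive a new advisory.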
How Do I Integrate With Existing Policy Engines?
Safeguard admission is supply-chain focused, not a general-purpose policy engine. If you're already using OPA Gatekeeper or Kyverno for resource-shape policy (no-privileged, required-labels, etc.), keep them. Run Safeguard as a separate validating webhook with a narrower policy scope. The two will coexist without conflict — Kubernetes evaluates all registered webhooks in parallel, and any one denial blocks the request.
How Safeguard.sh Helps
The admission controller is the last line of defense — the one place a compromised image must pass through before it runs. Because Safeguard admission shares one policy surface with CI and the IDE, a developer who saw a clean scan locally and a green PR check can reasonably expect their deploy to succeed. When it doesn't, it's because something changed between merge and deploy (a late-breaking CVE disclosure, an upstream base image moving, a signature expiring) — and that's exactly when you want the admission controller to catch it. Add it to production namespaces first, keep sandbox permissive, and enforce consistently across staging and prod once the warning stream is clean.