Sigstore Policy Controller is the Kubernetes admission controller that enforces signature and attestation policies on container images at deploy time. It is the production gate between a signed image in your registry and a running pod in your cluster, and the v0.15 release line in 2026 brought the project to monthly minor releases and integrated the sigstore-go TUF client for delegation-aware target retrieval. v0.15.1 shipped March 26, 2026 with the fixes that made the new TUF path production-ready. We rolled v0.15.1 onto a 400-node EKS cluster running 1,800 pods of mixed in-house and vendor images and tracked admission latency, failure modes, and policy-author ergonomics.
What changed in Policy Controller v0.15?
The most visible change is the TUF integration. Earlier Policy Controller versions used the legacy TUF client baked into the cosign internals; v0.15 moves to sigstore-go's TUF client, which understands TUF delegations and can resolve targets that are signed by delegated roles rather than only by the top-level Sigstore root. In practice this means private Sigstore deployments (the most common pattern in regulated industries) work correctly out of the box — you can run a private Rekor and Fulcio, delegate trust to your enterprise root, and Policy Controller resolves the chain without bespoke configuration. The v0.15.1 patch fixed a regression in the initial delegation lookup that caused a one-time admission stall on cold-start.
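For private deployments, the piece you configure is a TrustRoot resource that points the controller at your own TUF repository; authorities in a ClusterImagePolicy can then reference it by name. A minimal sketch, with the mirror URL and root serialization as placeholders you would substitute (confirm exact field names against the TrustRoot CRD shipped with your release):

```yaml
# Sketch: point Policy Controller at a private Sigstore TUF repository.
# The mirror URL and root value are placeholders; field names follow the
# v1alpha1 TrustRoot API and should be checked against your installed CRD.
apiVersion: policy.sigstore.dev/v1alpha1
kind: TrustRoot
metadata:
  name: enterprise-sigstore
spec:
  remote:
    mirror: https://tuf.sigstore.example.com   # private TUF repository mirror
    root: <base64-encoded root.json>           # trusted root for that repository
```

An authority in a ClusterImagePolicy then selects this root by name, via a trustRootRef entry under its keyless (or key) block in current releases.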
Beyond TUF, the project moved to a published monthly cadence for minor releases, with patch releases in between as needed. This matters because the cosign 3.x line is on a similar cadence and the two projects need to stay in lockstep — Policy Controller v0.15 expects the cosign 3.0 bundle format and rejects v2 bundles that lack a compatibility annotation.
How fast is admission with v0.15?
Admission latency is the operational metric that matters most. If Policy Controller takes 800ms per pod, your cluster will feel sluggish during deployments; if it takes 80ms, no one notices. We measured admission for four image types — an in-house signed image, a vendor-signed image (Distroless), an unsigned image (expected reject), and an in-house image verified through the private-Sigstore delegated root — across 5,000 deployments.
| Image type | p50 | p95 | p99 |
|---|---|---|---|
| In-house, signed, Rekor-backed | 92ms | 138ms | 184ms |
| Distroless (gcr.io/distroless/static) | 117ms | 162ms | 217ms |
| Unsigned (rejected) | 41ms | 58ms | 79ms |
| Private-Sigstore delegated | 109ms | 167ms | 241ms |
The p99 numbers are the ones to plan capacity around. At 1,800 pods, with a steady deploy cadence, Policy Controller comfortably handles the workload — but during a horizontal rollout (1,000 pods restarted in a window), the admission queue spiked to 4-second tail latency. The fix is to scale the Policy Controller replicas; the chart default of 2 was insufficient for our scale and 6 replicas gave us p99 under 250ms under load.
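For reference, the only change we needed was the replica count in the chart values. A sketch of the override, assuming the upstream policy-controller Helm chart layout (key names can differ between chart versions, so check your values.yaml before applying):

```yaml
# values-override.yaml -- sketch for the sigstore policy-controller Helm chart.
# "webhook" names the admission webhook deployment in the chart layout we used;
# confirm the exact key in your chart version.
webhook:
  replicaCount: 6   # the chart default of 2 was too low for ~1,800 pods under rollout load
```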
How do you write a Policy Controller policy in 2026?
Policy Controller uses ClusterImagePolicy (CIP) resources to bind image selectors to verification requirements. Here is a representative 2026 policy that requires a cosign keyless signature and a valid SLSA provenance attestation, and that applies only in namespaces that opt in via label:
```yaml
apiVersion: policy.sigstore.dev/v1beta1
kind: ClusterImagePolicy
metadata:
  name: require-signed-and-attested
spec:
  images:
    - glob: "registry.example.com/prod/*"
  authorities:
    - name: keyless-signature
      keyless:
        identities:
          - issuer: https://token.actions.githubusercontent.com
            subjectRegExp: "^https://github\\.com/example/.*/\\.github/workflows/.+@.+$"
      ctlog:
        url: https://rekor.sigstore.example.com
      attestations:
        - name: must-have-slsa
          predicateType: https://slsa.dev/provenance/v1
          policy:
            type: cue
            data: |
              predicate: {
                buildDefinition: {
                  buildType: "https://example.com/build-types/github-actions@v1"
                }
                runDetails: {
                  builder: {
                    id: =~"^https://github\\.com/example/.*"
                  }
                }
              }
  mode: enforce
---
# Namespace opt-in
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    policy.sigstore.dev/include: "true"
```
The CUE-based attestation policy is the part that surprises new operators. CUE is precise and verifiable but unfamiliar to most platform engineers; Rego (the OPA/Gatekeeper language) and Kyverno's YAML-with-JMESPath style are more widely known. There is a long-running discussion about whether to add Rego as a first-class option in Policy Controller; for now CUE is what you get.
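One way to soften the CUE learning curve is to check policy changes locally before they reach the cluster. A minimal sketch, assuming you have saved the CIP's policy.data body to policy.cue and have a decoded in-toto statement from a real build in statement.json (how you extract that depends on your attestation tooling):

```sh
# Validate a captured attestation against the same CUE constraints the
# ClusterImagePolicy will enforce. cue vet reports any field that fails.
cue vet policy.cue statement.json
```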
How does Policy Controller compare to Kyverno and Connaisseur in 2026?
The image-verification admission category has three credible open-source options.
| Capability | Policy Controller v0.15 | Kyverno 1.18 | Connaisseur 3.x |
|---|---|---|---|
| Cosign signature verification | Yes | Yes | Yes |
| Cosign keyless / Fulcio | Yes | Yes | Yes |
| Sigstore bundle format v0.3 | Yes | Partial | Yes |
| SLSA attestation policies | Yes (CUE) | Yes (YAML + JMESPath/CEL) | Limited |
| Notary v2 / notation | No | Yes | No |
| Image digest mutation | No | Yes | Yes |
| Policy ergonomics | CUE (steep) | YAML + JMESPath/CEL (familiar) | YAML-only (simple) |
| Best for | Sigstore-first orgs | Broader policy program | Lightweight enforcement |
Kyverno is the right pick if you also want a broader policy program — Kyverno is what you reach for when image verification is one of many things you want admission to enforce. Policy Controller is the right pick if you are Sigstore-deep and want the project that tracks cosign most closely. Connaisseur is the right pick if you want a small, focused enforcement tool that does one thing well.
What are the operational gotchas?
Two. First, the namespace opt-in label is easy to forget — admission only runs in namespaces carrying the policy.sigstore.dev/include: "true" label, so a new team that spins up a namespace bypasses the policy until it is labeled. Build the label into your namespace bootstrap script. Second, the TUF metadata refresh on controller startup is slow on first run (3-12 seconds) because the controller fetches the full TUF metadata. Pre-cache the root in the controller image for fast pod restarts, or accept the cold-start cost for the safety it provides.
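The label gap is also easy to audit on a schedule, since label selectors can match the absence of a key. A one-line check (using the opt-in label from the policy above) lists every namespace currently outside enforcement scope:

```sh
# List namespaces that do NOT carry the Policy Controller opt-in label.
# Anything here is outside enforcement scope and deserves a second look.
kubectl get namespaces -l '!policy.sigstore.dev/include'
```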
What does the multi-tenant operator model look like?
For platform teams running Policy Controller across many namespaces with different policy regimes per tenant, the namespace-opt-in label is your friend and your trap. The friend: it lets you onboard tenants gradually and keeps experimental namespaces out of enforcement scope. The trap: a tenant that forgets the label silently bypasses the policy, and the failure mode is invisible until someone runs an audit. The pattern that worked best in our deployment: a controller-side admission webhook on namespace creation that injects the policy.sigstore.dev/include: "true" label by default unless the namespace carries an explicit opt-out label. That inverts the safety default — namespaces are protected unless they explicitly aren't, rather than unprotected unless they explicitly are. The change is a 40-line mutating admission webhook (a validating webhook can only accept or reject, not add labels) and the audit posture improvement is significant. Combine it with a quarterly review of any namespace with the opt-out label and you have a defensible, scalable Policy Controller deployment.
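What the inversion looks like in API terms: a MutatingWebhookConfiguration that fires on namespace CREATE and calls a small in-cluster service which adds the opt-in label unless an opt-out label is already present. A sketch of the registration object only; the namespace-defaulter service name, namespace, and path are assumptions, the caBundle is omitted, and the label-patching logic lives in that service, not shown here:

```yaml
# Sketch: route namespace creation through a label-defaulting webhook.
# The service name, namespace, and path are placeholders for your own
# implementation; the patch that injects policy.sigstore.dev/include: "true"
# is returned by that service, not expressed in this object.
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: namespace-policy-defaulter
webhooks:
  - name: default-policy-label.example.com
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Fail   # fail closed: a namespace is never created without passing through the defaulter
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["namespaces"]
    clientConfig:
      service:
        name: namespace-defaulter        # assumed in-cluster webhook service
        namespace: platform-admission    # assumed namespace for platform admission components
        path: /mutate-namespace
```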
How Safeguard Helps
Safeguard runs the policy upstream of Policy Controller, not as a replacement. Policy Controller is the runtime gate that says "this image is signed"; Safeguard answers "this image is signed AND its SBOM has no critical reachable vulnerabilities AND the build provenance traces to an approved repo." The platform ingests Policy Controller admission decisions and the corresponding SLSA provenance, correlates them with SBOM and CVE state, and writes a single timeline entry per deployment — from commit, to build, to signed image, to admitted pod. Griffin AI flags drift: if Policy Controller admitted an image because the signature was valid but the SBOM evidence is stale, Safeguard surfaces the gap and proposes a re-attestation step. Policy Controller enforces at admission; Safeguard makes the enforcement explainable.