The Sigstore Policy Controller is the project-native way to verify Cosign signatures at Kubernetes admission, maintained by the Sigstore community. It is narrower than Kyverno and narrower than OPA Gatekeeper, and that narrowness is the point.
I have deployed it to production twice — once on a fresh cluster, once as a replacement for an overgrown Kyverno policy set that had accumulated three years of technical debt. The second deployment taught me more, because it forced me to think about what the Policy Controller is actually for and what it is not.
When should you pick the Sigstore Policy Controller over Kyverno?
When your entire admission policy is about signature, attestation, and provenance verification, and you do not want a general-purpose policy engine. Kyverno is a Swiss army knife. The Sigstore Policy Controller is a scalpel.
The argument for specialization is operational. Kyverno has a broad feature surface, and most of it has nothing to do with the supply chain. When a Kyverno release introduces a bug or a breaking change in image verification, you might discover it because an unrelated label-mutation policy started failing in a way that traces back to the same library. The Sigstore Policy Controller does one thing, and its releases track changes to Cosign and Fulcio directly, so its failure modes are limited to the domain you care about.
The argument against is also operational. If your security team already has ten Kyverno policies running for non-supply-chain concerns, adding the Sigstore Policy Controller means running two admission controllers, both of which need to respond before a pod is scheduled, and any failure in either one blocks the API server. I have seen teams live comfortably with both, and I have seen teams regret it.
The rule of thumb I use: if more than half your admission policy is about verifying supply chain claims, run the Sigstore Policy Controller. If less than that, use Kyverno's verifyImages rule and accept the lower specialization.
How do ClusterImagePolicy resources actually work?
A ClusterImagePolicy matches images by glob pattern and declares what authorities must sign them. The controller admits or rejects the pod based on whether each image matches at least one policy and the signatures meet the policy's requirements.
The subtle thing that catches teams is the "matches at least one" part. What happens to an image that matches no ClusterImagePolicy at all is governed by the no-match-policy setting in the controller's configuration, which can allow, warn, or deny. Depending on the version and how the controller was installed, the effective behavior may be to admit unmatched images, which is the opposite of what most security teams expect. Do not rely on the default: either set no-match-policy to deny explicitly, or deploy a catch-all policy that matches ** and requires signatures.
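A minimal sketch of closing the unmatched-image gap through the controller's ConfigMap. The ConfigMap name and namespace below match a standard install, but verify them against your own deployment:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-policy-controller   # name used by the standard install
  namespace: cosign-system         # adjust if you installed elsewhere
data:
  # What to do with images that match no ClusterImagePolicy:
  # "allow", "warn", or "deny". Set it explicitly rather than
  # relying on whatever your version's default happens to be.
  no-match-policy: deny
```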
The structure of a real policy looks like this in concept: match everything in the prod- namespaces, require Fulcio-issued certificates with subjects matching your CI identity pattern and issuer matching your OIDC provider, require a Rekor log entry, and optionally require in-toto attestations of specific types. You can express all of this in YAML, and the resulting policy is much shorter than the Kyverno equivalent because the schema is purpose-built.
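As a sketch, a policy along the lines described above might look like this. The registry glob, identity patterns, and OIDC issuer are placeholders for illustration, not values from any real deployment:

```yaml
apiVersion: policy.sigstore.dev/v1beta1
kind: ClusterImagePolicy
metadata:
  name: require-ci-signatures
spec:
  images:
    # Placeholder glob: everything under the prod- registry paths
    - glob: "registry.example.com/prod-**"
  authorities:
    - keyless:
        identities:
          # Pin the Fulcio certificate to your CI identity and OIDC issuer
          - issuer: "https://token.actions.githubusercontent.com"
            subjectRegExp: "^https://github.com/example-org/.+"
      # Require a matching entry in the Rekor transparency log
      ctlog:
        url: "https://rekor.sigstore.dev"
  mode: enforce
```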
The mode field matters. enforce rejects admission on policy failure. warn admits but annotates the pod with the violation. I deploy new policies in warn for at least a week, read the annotations, and then flip to enforce.
What do you do about images that legitimately cannot be signed?
Accept that they exist, isolate them by namespace, and use a named policy exception rather than disabling enforcement. Every real production environment has images that come from somewhere outside your control and are not signed by an identity you can pin.
The examples are predictable: the vendor database your finance team runs, the internal tool from a team that has not been onboarded, the emergency build that someone pushed from a laptop during an incident. Your choices are to refuse to run them (rarely practical), to trust them without verification (defeats the point), or to carve out an explicit exception with documented ownership.
The way to structure exceptions: a dedicated namespace per unsigned-workload class, with a ClusterImagePolicy that uses a static authority instead of Fulcio, which lets you allow a specific registry path without signature requirements as long as the policy names that path explicitly. Every exception should have a ticket pointing to the plan for removing it. Review the list quarterly. If the list grows, the program is failing. If the list shrinks, it is working.
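A sketch of that exception shape, using the schema's static authority. The policy name, annotation key, and registry path are hypothetical:

```yaml
apiVersion: policy.sigstore.dev/v1beta1
kind: ClusterImagePolicy
metadata:
  name: exception-vendor-db          # hypothetical unsigned vendor image
  annotations:
    # Document ownership and the removal plan alongside the exception
    example.com/removal-ticket: "SEC-1234"
spec:
  images:
    # Name the allowed registry path explicitly; never glob wider than needed
    - glob: "registry.example.com/vendor-db/**"
  authorities:
    # A static authority skips signature verification entirely;
    # action: pass waves matching images through
    - static:
        action: pass
```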
How do you handle Policy Controller upgrades without breaking admission?
Run two replicas, use a PodDisruptionBudget, and upgrade during a window when you can tolerate admission latency. The controller is in the hot path for every pod create in matched namespaces, and an upgrade that briefly has no healthy replicas means the API server has no webhook to ask, which either blocks admissions or falls open, depending on your failure policy.
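A sketch of those availability guardrails. The namespace and the control-plane label are assumptions about a standard install; check the labels your chart actually applies before using the selectors:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: policy-controller-pdb
  namespace: cosign-system            # adjust to your install namespace
spec:
  minAvailable: 1                     # never let an upgrade drain both replicas at once
  selector:
    matchLabels:
      control-plane: policy-webhook   # assumed label; verify against your deployment
---
# Fragment to merge into spec.template.spec of the webhook Deployment:
# spread the two replicas across nodes so a single node failure
# cannot take admission down with it.
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            control-plane: policy-webhook
        topologyKey: kubernetes.io/hostname
```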
The failurePolicy setting is a real decision. Fail means that an admission request the API server cannot get the controller to answer is rejected. Ignore means it is admitted. Security teams want Fail. Platform teams want Ignore. Neither answer is wrong in isolation; the right answer depends on what happens when your controller is unreachable.
If the controller is unreachable and you Fail, your cluster cannot create pods. That includes system pods. That includes the controller itself during an upgrade. That includes the pods your upgrade needs to reschedule. This is how clusters become bricked.
If the controller is unreachable and you Ignore, an attacker who can cause a DoS against the controller can bypass signature verification. This is a real attack.
The pattern that resolves this: Fail for production namespaces where unsigned images are unacceptable, Ignore for system namespaces where bootstrapping must work regardless, and a PodDisruptionBudget plus two replicas plus an anti-affinity rule on the controller itself so it is not trivial to DoS.
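One way to sketch the namespace split relies on the controller's opt-in label: its webhooks use a namespaceSelector that matches namespaces carrying policy.sigstore.dev/include, so system namespaces that never carry the label are never blocked, even with failurePolicy set to Fail on the webhook itself. The namespace name below is hypothetical:

```yaml
# Opt a production namespace in. Namespaces without this label
# (kube-system, the controller's own namespace) are never matched
# by the webhook, so bootstrapping keeps working during an outage.
apiVersion: v1
kind: Namespace
metadata:
  name: prod-payments                 # hypothetical production namespace
  labels:
    policy.sigstore.dev/include: "true"
```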
What about attestations beyond signatures?
The Policy Controller can require specific in-toto attestation types, and this is where the real supply chain value shows up. A signature tells you who produced the image. An attestation tells you what they did to produce it.
SLSA provenance attestations record the source repo, commit, and builder. SBOM attestations record the bill of materials. Vulnerability-scan attestations record what a scanner saw at build time. Custom attestations can record anything your build pipeline wants to claim. A policy that requires a valid SLSA provenance attestation with a specific builder identity and a source repo matching your organization is meaningfully harder to forge than a signature alone.
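A sketch of requiring provenance at admission, using the schema's inline CUE policy option. The predicate type string and builder identity are placeholders to adapt to your pipeline:

```yaml
apiVersion: policy.sigstore.dev/v1beta1
kind: ClusterImagePolicy
metadata:
  name: require-slsa-provenance
spec:
  images:
    - glob: "registry.example.com/prod-**"
  authorities:
    - keyless:
        identities:
          - issuer: "https://token.actions.githubusercontent.com"
            subjectRegExp: "^https://github.com/example-org/.+"
      attestations:
        # Require a SLSA provenance attestation signed by the same authority
        - name: slsa-provenance
          predicateType: slsaprovenance
          policy:
            type: cue
            data: |
              // Placeholder builder identity: pin provenance to your builder
              predicate: builder: id: "https://github.com/example-org/builder@v1"
```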
The gotcha is attestation size. SBOM attestations for large images can be multiple megabytes, and they live in the registry as OCI artifacts. If your registry storage budget is tight, this matters. If your admission controller fetches attestations on every admission, this matters more. Cache them. Most registries support the OCI referrers API, which makes attestation discovery efficient, but older registries do not, and you will feel it.
How Safeguard.sh Helps
Safeguard.sh consumes the attestations that the Sigstore Policy Controller verifies and layers reachability analysis on top, turning "SBOM attestation is valid" into "SBOM attestation is valid and the reachable code paths inside this image match the expected provenance." That combination is how the platform achieves a 60 to 80 percent reduction in alert noise. Griffin AI authors ClusterImagePolicy resources from your actual CI configuration and keeps them in sync as signing identities and attestation formats evolve, operating on SBOMs with 100-level dependency depth. TPRM integrates with the attestation chain for third-party images, so you can see provenance, reachability, and vendor trust in a single view. Container self-healing automatically rebuilds and re-signs workloads when upstream patches are released, so your Policy Controller becomes a throughput mechanism rather than a bottleneck.