Binary Authorization on Google Cloud started as a binary admission gate: either an image had a signed attestation from a designated attestor, or it did not deploy to GKE. That model worked for small organizations with disciplined release pipelines and produced misery for everyone else. Through 2025 and into 2026, the service matured toward a richer attestation-verifier model where multiple attestations of different types compose into a deployment decision, and breakglass is the exception rather than the routine. For defenders, this is good news. For the engineering teams implementing it, the failure modes have moved from "you cannot deploy" to "you are deploying things you did not intend." The new mistakes are subtler and easier to miss in review.
What does an attestation actually carry?
A Binary Authorization attestation is a signed assertion, in the in-toto attestation format, that some predicate is true about a specific OCI image identified by its digest. The predicate can be anything — "this image passed the vulnerability scan," "this image was built from the main branch of repository X by build job Y," "this image has been reviewed by a member of the security team" — and the signer is identified by a public key registered in Container Analysis or, increasingly in 2026 deployments, by a Sigstore Fulcio identity. The policy then says: to deploy to this cluster, an image must have attestations of types A, B, and C from signers in groups X, Y, Z. The richness comes from having multiple attestations layered: a build-provenance attestation from the CI system, a scan-result attestation from the vulnerability scanner, a deployment-approval attestation from a CAB workflow, all required together for production but not staging.
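The shape of the signed payload can be sketched as a minimal in-toto Statement. The registry path, digest, and predicate type below are illustrative placeholders, not the exact schema any particular scanner or CI system emits:

```python
import json

# Sketch of the in-toto Statement an attestor signs. Predicate type and
# predicate fields are hypothetical examples, not a real scanner's schema.
def build_statement(image_ref, digest, predicate_type, predicate):
    return {
        "_type": "https://in-toto.io/Statement/v1",
        # The subject binds the claim to one immutable image digest,
        # never to a mutable tag.
        "subject": [{"name": image_ref, "digest": {"sha256": digest}}],
        "predicateType": predicate_type,
        "predicate": predicate,
    }

stmt = build_statement(
    "us-docker.pkg.dev/myorg/app/api",            # hypothetical registry path
    "3f2c" + "0" * 60,                            # placeholder digest
    "https://example.com/attestations/vuln-scan", # hypothetical predicate type
    {"scanner": "example-scanner", "critical_findings": 0},
)
print(json.dumps(stmt["subject"][0], sort_keys=True))
```

The important property is the subject: the claim is bound to one digest, which is why retagging an image silently orphans its attestations.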
How should policy be scoped across clusters and teams?
The temptation is to write one organization-wide policy and apply it to every cluster. This collapses when teams have legitimately different requirements — a payments cluster that needs stricter attestation than a data-engineering sandbox, an ML training cluster that runs base images from public hubs, a confidential GKE node pool that requires hardware-attestation-bound signatures. Binary Authorization supports per-cluster policies and a default policy, but the better pattern is policy templating: a small number of named policy archetypes (sandbox, internal-prod, customer-prod, regulated-prod) defined once and applied to clusters by tag. The archetype encodes the required attestation types, the trusted signers, the breakglass posture, and the exemption list. Cluster-specific deviation should be explicit and approved, not implicit through ad-hoc edits.
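The archetype lookup can be sketched as a small resolver. The archetype names come from the text above; the tag key and attestor lists are assumptions about how your clusters are labeled:

```python
# Sketch: policy archetypes resolved from cluster tags. The tag key
# "policy-archetype" and the attestor short names are assumptions.
ARCHETYPES = {
    "sandbox":        {"attestors": [], "breakglass": "allowed"},
    "internal-prod":  {"attestors": ["build-provenance", "vuln-scan"], "breakglass": "ticketed"},
    "customer-prod":  {"attestors": ["build-provenance", "vuln-scan", "sec-review"], "breakglass": "ticketed"},
    "regulated-prod": {"attestors": ["build-provenance", "vuln-scan", "sec-review"], "breakglass": "incident-only"},
}

def archetype_for(cluster_tags):
    # Fail closed: an untagged cluster gets the strictest archetype,
    # so a tagging mistake can never loosen policy.
    return ARCHETYPES[cluster_tags.get("policy-archetype", "regulated-prod")]

print(archetype_for({"policy-archetype": "sandbox"})["breakglass"])  # allowed
print(archetype_for({})["breakglass"])  # incident-only: untagged fails closed
```

The fail-closed default is the design choice that matters: a cluster that nobody tagged should land in the strictest bucket, not the loosest.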
What goes in the exemption list, and what does not?
Binary Authorization exemptions are images permitted to bypass the policy. The legitimate uses are: bootstrapping a new cluster before attestation infrastructure is in place, system images from Google itself that come from trusted internal registries, and time-limited breakglass for incident recovery. The illegitimate use that creeps in is "this team's images are not signed yet so let us exempt their registry path." Once that exemption exists, it never gets removed, and the team that "will sign next quarter" still has not signed two years later. The discipline that works is: exemptions are configuration as code, reviewed in pull requests, with a mandatory expiration date and an owner. An exemption past its expiration date should fail policy validation and break the next deploy until either renewed with a fresh review or removed.
A minimal project policy reflecting that discipline, with only Google-managed system images exempted, looks like this:

admissionWhitelistPatterns:
- namePattern: gke.gcr.io/*
- namePattern: gcr.io/gke-release/*
defaultAdmissionRule:
  evaluationMode: REQUIRE_ATTESTATION
  enforcementMode: ENFORCED_BLOCK_AND_AUDIT_LOG
  requireAttestationsBy:
  - projects/myorg-attestors/attestors/build-provenance
  - projects/myorg-attestors/attestors/vuln-scan
  - projects/myorg-attestors/attestors/sec-review
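The expiry discipline can be enforced as a validation step in CI before any policy change merges. The exemption record fields (pattern, owner, expires) are assumptions about how your configuration-as-code stores them, since the Binary Authorization API itself has no expiry field:

```python
from datetime import date

# Sketch of the exemption expiry check described above. Field names are
# assumptions about your own exemption config format, not the GCP API.
def validate_exemptions(exemptions, today):
    errors = []
    for ex in exemptions:
        if "owner" not in ex:
            errors.append(f"{ex['pattern']}: missing owner")
        expires = ex.get("expires")
        if expires is None:
            errors.append(f"{ex['pattern']}: missing expiration date")
        elif date.fromisoformat(expires) < today:
            errors.append(f"{ex['pattern']}: expired {expires}; renew with review or remove")
    return errors

errs = validate_exemptions(
    [{"pattern": "gcr.io/legacy-team/*", "owner": "team-x", "expires": "2025-06-30"}],
    today=date(2026, 1, 15),
)
print(errs)  # one finding: the exemption expired and must be renewed or removed
```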
How does breakglass actually work in practice?
Breakglass annotations on Kubernetes objects bypass Binary Authorization for that specific deployment, logging the bypass for later review. The intent is to handle the case where a production fix is needed and the normal attestation pipeline is unavailable. The reality, in organizations that do not actively manage this, is that breakglass becomes routine: a team annotates every deploy "just in case," and the audit log accumulates thousands of bypasses that nobody reviews. The control that works is to alert on every breakglass event in real time to the security team's on-call queue, require an incident ticket reference in the annotation, and reconcile the breakglass audit log against incident tickets weekly. Any bypass not tied to an active incident is a finding; any team with more than a small number of bypasses per quarter needs a conversation about why the normal pipeline does not work for them.
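The weekly reconciliation reduces to a join between bypass events and open incidents. The annotation key and INC-style ticket format below are assumptions about one possible convention:

```python
import re

# Sketch of the weekly reconciliation: every breakglass event must
# reference an incident ticket that was open at deploy time. The
# "INC-nnnn" ticket format is an assumed convention.
TICKET_RE = re.compile(r"INC-\d+")

def unjustified_bypasses(events, open_incidents):
    findings = []
    for ev in events:
        m = TICKET_RE.search(ev.get("annotation", ""))
        if m is None or m.group(0) not in open_incidents:
            findings.append(ev)  # bypass with no valid active incident: a finding
    return findings

events = [
    {"team": "payments", "annotation": "breakglass: INC-4821 hotfix"},
    {"team": "payments", "annotation": "breakglass: just in case"},
]
print(len(unjustified_bypasses(events, open_incidents={"INC-4821"})))  # 1
```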
What does Sigstore integration change?
Binary Authorization's 2025-era support for Sigstore-style keyless attestations meaningfully reduces the operational overhead of attestor management. Instead of issuing and rotating per-attestor signing keys, an attestor identity is a short-lived certificate from Fulcio bound to a workload's OIDC token. The build provenance attestation is signed by a certificate whose subject is the repository and workflow that produced it, and Rekor's transparency log provides the discoverable record of which attestations were issued when. The supply chain benefit is large: a signing key cannot be exfiltrated because it does not durably exist, and signatures from a compromised CI configuration are visible in the transparency log without requiring the attacker to be sloppy. The migration path is to add Sigstore as an additional permitted attestor alongside the existing keyed attestor, validate that real builds produce valid Sigstore attestations, and gradually shift policy to require Sigstore as the source of build provenance attestations.
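The identity check a Sigstore-aware policy performs can be sketched as matching the certificate's identity and issuer against the one workflow expected to produce build provenance. The repository, workflow path, and issuer URL are illustrative; real verification reads these from Fulcio certificate extensions rather than plain strings:

```python
# Sketch: trust decision on a Fulcio keyless certificate. The identity
# and issuer strings are hypothetical examples of a GitHub Actions
# workflow identity; real policy reads them from certificate extensions.
TRUSTED_ISSUER = "https://token.actions.githubusercontent.com"
TRUSTED_IDENTITY = (
    "https://github.com/myorg/app/.github/workflows/release.yml@refs/heads/main"
)

def trusted_builder(cert_identity, cert_issuer):
    # Exact match only: a fork, a different branch, or a different
    # workflow fails even with a perfectly valid Fulcio certificate.
    return cert_issuer == TRUSTED_ISSUER and cert_identity == TRUSTED_IDENTITY

print(trusted_builder(TRUSTED_IDENTITY, TRUSTED_ISSUER))  # True
fork = "https://github.com/attacker/app/.github/workflows/release.yml@refs/heads/main"
print(trusted_builder(fork, TRUSTED_ISSUER))  # False
```

The exact-match posture is the point: keyless signing moves the trust decision from "who holds the key" to "which workload identity signed," so the policy must pin that identity precisely.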
How do you debug the most common policy failures?
The failure modes that consume the most operator time are: an image deploys successfully but with a different digest than expected because the registry's mutable tag was updated between the build and the deploy, an attestation exists but for the wrong digest because a retag operation broke the binding, and an attestation is valid but the attestor's public key is not in the cluster's trusted set because a policy update missed a key rotation. The debugging discipline that works is to require all production deploys to reference images by digest rather than tag (this also makes rollback predictable), to verify attestations during the build step rather than discovering at deploy time that they are missing, and to keep an audit dashboard showing the rate of policy-blocked deploys per team. A team whose blocked-rate suddenly jumps is usually rebuilding incorrectly, not under attack — but the dashboard surfaces both.
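The digest-not-tag rule is cheap to enforce as a pre-deploy check. A minimal sketch, suitable for a CI gate ahead of the actual admission decision:

```python
import re

# Sketch of the "reference images by digest, not tag" check described
# above. A digest reference ends in @sha256:<64 hex chars>.
DIGEST_RE = re.compile(r"@sha256:[0-9a-f]{64}$")

def references_by_digest(image_ref):
    return DIGEST_RE.search(image_ref) is not None

print(references_by_digest("gcr.io/myorg/api:v1.2.3"))            # False: mutable tag
print(references_by_digest("gcr.io/myorg/api@sha256:" + "a" * 64))  # True: immutable
```

Blocking tag references at build time catches the mutable-tag and retag failure modes before they ever reach the admission controller, where the error message is far less helpful.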
How do you handle multi-region and DR clusters?
Production Binary Authorization deployments rarely have a single cluster. The common shape is a primary cluster, one or more standby clusters in different regions for disaster recovery, and a separate cluster for the security tooling itself. Each cluster has its own policy, and the temptation to make the policies identical clashes with the operational reality that DR clusters often need looser policies during cutover — a regional GCP outage that forces traffic to a standby is not the moment to be debugging attestor configuration. The pattern that works is to make the policies identical in normal operation, to define a documented "DR mode" policy variant that loosens specific verifications (for example, accepting attestations signed slightly outside the normal freshness window), and to gate the switch from normal to DR mode on an incident commander approval recorded in audit logs. The DR mode policy must auto-expire and revert to normal after the incident, which requires the switch to be implemented as a time-bounded change rather than a permanent edit. Without that discipline, "DR mode" silently becomes the default policy because nobody remembers to switch it back.
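The time-bounded switch can be sketched as a mode that carries its own expiry, so reverting requires nobody to remember anything. The approver field and TTL are assumptions about your change process:

```python
from datetime import datetime, timedelta, timezone

# Sketch of a time-bounded DR-mode switch: DR mode carries its own
# expiry and reverts to normal automatically on the next evaluation.
class PolicyMode:
    def __init__(self):
        self._dr_expires = None

    def enter_dr(self, approver, ttl_hours=24):
        # The incident-commander approval is recorded with the change;
        # an empty approver is rejected outright.
        if not approver:
            raise ValueError("DR mode requires a recorded incident-commander approval")
        self._dr_expires = datetime.now(timezone.utc) + timedelta(hours=ttl_hours)

    def current(self):
        if self._dr_expires and datetime.now(timezone.utc) < self._dr_expires:
            return "dr"
        return "normal"  # expired or never entered: normal policy applies

mode = PolicyMode()
print(mode.current())                            # normal
mode.enter_dr(approver="ic-on-call", ttl_hours=2)
print(mode.current())                            # dr, until the TTL lapses
```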
How Safeguard Helps
Safeguard inventories every GKE cluster, its Binary Authorization policy, the registered attestors, and the exemption list, comparing them against your archetype definitions and flagging drift. Policy gates block infrastructure-as-code changes that introduce new exemptions without expiration, lower enforcement modes from ENFORCED_BLOCK_AND_AUDIT_LOG to dry-run, or add untrusted attestor identities to production policies. Griffin AI correlates every Binary Authorization bypass with the breakglass annotation, the incident ticket it claims to reference, and the user who deployed — so the weekly reconciliation that traditionally takes hours becomes a query you can answer in seconds. Continuous monitoring of attestation transparency logs and attestor key health turns Binary Authorization from a deployment gate into an end-to-end provenance program that you can defend in front of auditors and incident responders alike.