DevSecOps

GCP Binary Authorization Real-World Deployment

Binary Authorization works in production, but the rollout pattern is not obvious. This is the real-world deployment guide for 2026 GCP estates.

Nayan Dey
Senior Security Engineer
8 min read

Binary Authorization is one of the few GCP security features that does what it says on the tin. The deploy-time enforcement works, the audit logs are clean, the integration with GKE, Cloud Run, and Cloud Functions is consistent. The catch is that the public documentation is dense and the rollout pattern is not obvious to a reader who has not done it before. Most teams I have helped roll out Binary Authorization have spent more time on the rollout strategy than on the actual configuration.

This is the real-world deployment guide for 2026. It assumes you have a Cloud Build pipeline that can produce signed artifacts, an Artifact Registry repository, and at least one GKE cluster, Cloud Run service, or Cloud Functions deployment that consumes the artifacts. It walks through the rollout phases, the attestor design choices, and the operational details that the platform documentation does not call out clearly.

What is the right attestor design for a multi-environment GCP estate?

One attestor per signing pipeline, with a separate attestor for the SBOM-attached attestation. With two attestor families, the policy can require both a build attestation and an SBOM attestation, and the failure modes stay distinguishable: a missing build signature is a different problem from a missing SBOM, and the policy log makes it clear which one failed.

The per-pipeline attestor pattern is the part most rollouts get wrong. A single attestor for the whole organization is operationally simple and a security liability — a compromise of any pipeline that uses the attestor compromises every artifact the attestor has signed. A per-pipeline attestor contains the blast radius and gives auditors a clean answer to "which pipeline produced this artifact." The cost is one attestor per pipeline, plus the IAM bindings for each, which is trivial in Terraform.

The signing key for each attestor lives in Cloud KMS, with a key ring per environment and a key per pipeline. Production keys never leave the production project. Development keys never leave the development project. The KMS audit log is the corresponding evidence trail; every AsymmetricSign call is recorded with the principal, the source, and the resource, and the principal must be the build pipeline's service account.
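In Terraform, the per-pipeline pattern comes out to roughly the sketch below. All project, location, and resource names are illustrative, and the wiring assumes the google provider's attestor, Container Analysis note, and KMS key version resources; an SBOM attestor for the same pipeline would be a second note/attestor pair with its own key:

```hcl
# Illustrative names throughout; adjust project, location, and key names.
resource "google_kms_key_ring" "binauthz_prod" {
  project  = "prod-project"
  name     = "binauthz-prod"
  location = "us-central1"
}

# One asymmetric signing key per pipeline, never shared across environments.
resource "google_kms_crypto_key" "payments_signer" {
  name     = "payments-build-signer"
  key_ring = google_kms_key_ring.binauthz_prod.id
  purpose  = "ASYMMETRIC_SIGN"
  version_template {
    algorithm = "EC_SIGN_P256_SHA256"
  }
}

# Each attestor hangs off a Container Analysis note.
resource "google_container_analysis_note" "payments_build" {
  project = "prod-project"
  name    = "payments-build-note"
  attestation_authority {
    hint {
      human_readable_name = "Payments build pipeline"
    }
  }
}

data "google_kms_crypto_key_version" "payments_signer" {
  crypto_key = google_kms_crypto_key.payments_signer.id
}

resource "google_binary_authorization_attestor" "payments_build" {
  project = "prod-project"
  name    = "payments-build-attestor"
  attestation_authority_note {
    note_reference = google_container_analysis_note.payments_build.name
    public_keys {
      id = data.google_kms_crypto_key_version.payments_signer.id
      pkix_public_key {
        public_key_pem      = data.google_kms_crypto_key_version.payments_signer.public_key[0].pem
        signature_algorithm = data.google_kms_crypto_key_version.payments_signer.public_key[0].algorithm
      }
    }
  }
}
```

Copy the module per pipeline; the repetition is the point, since each pipeline gets its own key and its own blast radius.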

How do I plan the rollout phases?

Three phases: dry-run, enforce-non-critical, enforce-everywhere. The dry-run phase enables Binary Authorization on every relevant target with the policy in dry-run mode, which logs failures without blocking. The output of this phase is a complete inventory of deploy paths that would fail enforcement. Each failing path is a work item.
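A dry-run policy looks roughly like this, assuming the two-attestor-family design described earlier (project and attestor names are illustrative); it is imported with `gcloud container binauthz policy import policy.yaml`:

```yaml
# Dry-run: would-be denials are logged, nothing is blocked yet.
globalPolicyEvaluationMode: ENABLE
defaultAdmissionRule:
  evaluationMode: REQUIRE_ATTESTATION
  enforcementMode: DRYRUN_AUDIT_LOG_ONLY
  requireAttestationsBy:
    - projects/prod-project/attestors/payments-build-attestor
    - projects/prod-project/attestors/payments-sbom-attestor
```

Flipping a target to enforcement in the later phases is a one-line change: `enforcementMode: ENFORCED_BLOCK_AND_AUDIT_LOG`.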

The enforce-non-critical phase flips the policy to enforce on a subset of clusters and services that are explicitly not critical — development, staging, and any production workload whose unavailability is recoverable in minutes. The output of this phase is a confidence signal that the enforcement does not break ordinary developer workflow. If the signal is bad, you stay in this phase until it is good.

The enforce-everywhere phase flips the policy to enforce on the remaining critical production targets. By the time you get here, the previous phases have driven the failure count to zero or near zero, and the move to full enforcement is uneventful. If the failure count is non-zero on a critical target, you do not move that target until the count reaches zero.

The exception list management is the part that needs continuous attention. Every legacy image in the exception list is a work item, and the deadline on the list is the forcing function that gets the work done. Without a deadline, the exception list becomes permanent, and the policy provides theatre rather than security.
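In the policy YAML, the exception list is the `admissionWhitelistPatterns` section. A sketch with an illustrative legacy image, carrying its ticket and deadline as a comment:

```yaml
admissionWhitelistPatterns:
  # TICKET-1234, remove by 2026-06-30: legacy cron image, unsigned build path.
  - namePattern: us-docker.pkg.dev/legacy-project/legacy-repo/billing-cron:*
```

A simple enforcement idea is a CI check that parses these comments and fails the pipeline when a dated entry is past due, which keeps the deadline from being decorative.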

How do I configure the GKE side without breaking the cluster?

Enable Binary Authorization on the cluster, configure the policy to require the per-pipeline attestor, and set the default rule's evaluation mode to REQUIRE_ATTESTATION. The default rule is the catch-all for images the policy does not match more specifically, and the safest default is to require attestation rather than to allow without verification.

The cluster-level configuration is one Terraform attribute and a few lines of policy YAML. The policy supports per-namespace exception rules, which is how the rollout handles workloads that legitimately need an unsigned image during the migration. The namespace exception rules are a managed list, audited, and ideally tied to a ticket per exception.
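The cluster-level attribute is, in current google provider versions, the `binary_authorization` block on `google_container_cluster`; cluster name and location below are illustrative:

```hcl
resource "google_container_cluster" "staging" {
  name     = "staging"
  location = "us-central1"

  # Evaluate every deploy against the project's Binary Authorization policy.
  binary_authorization {
    evaluation_mode = "PROJECT_SINGLETON_POLICY_ENFORCE"
  }
}
```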

The continuous validation feature is the part most teams overlook. It re-evaluates the policy against running pods on a schedule, not just at admission time. A pod whose attestation has been revoked or whose image has been re-tagged is flagged, even though the pod was admitted under the old policy. The continuous validation log is a signal worth wiring into the security operations center.
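The continuous validation events land in a dedicated Cloud Logging log; a filter along these lines (log ID as documented by GCP, but worth verifying for your project before wiring a sink) is the starting point for the SOC feed:

```
logName="projects/PROJECT_ID/logs/binaryauthorization.googleapis.com%2Fcontinuous_validation"
```

Routing that filter to a log sink gives the security operations center the revocation and re-tag signal without polling the clusters.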

What about Cloud Run and Cloud Functions?

Both services support Binary Authorization with a policy model similar to GKE's: a per-service or per-function configuration references the same attestors as the cluster policy. The deploy-time verification happens before the service or function accepts the new revision, and a verification failure prevents the new revision from going live.

Cloud Run's revision management gives a clean rollback story if a deploy fails verification: the previous revision continues to serve traffic, and the failed deploy is recorded in the service's revision history. The configuration uses the same attestor names as the GKE policy, so a single signing pipeline produces an artifact that is deployable to GKE, Cloud Run, and Cloud Functions under the same attestation.

How do I handle base image updates and patch rollouts?

The patched base image goes through the same signing pipeline as any other image, the new attestation is attached, and the deploy targets verify and admit the new image normally. The Binary Authorization model does not have a special path for patches; the signing pipeline is the path, and the patch is just another build.

The container self-healing pattern complements this nicely. When a base image is updated to resolve a CVE, the rebuild pipeline runs automatically, the new image is signed, and the deployments that reference the old image are updated to reference the new one. The Binary Authorization policy admits the new image because the attestation verifies, and the old image is eventually pruned from the registry.

For workloads where the patch path needs human approval, the approval gate is implemented in the pipeline rather than in Binary Authorization itself. Binary Authorization is the verification layer; the approval is the change-management layer. Mixing them tends to produce a confusing policy that is hard to debug.

How do I make the audit story work for compliance?

The Binary Authorization audit log captures every admission decision. The KMS audit log captures every signing operation. The Cloud Build audit log captures every build with its source commit. Joining these three logs gives the end-to-end trace that compliance frameworks ask for, with the image digest as the join key. A small Dataflow job that normalizes the three log streams and writes them to a unified table is the operational pattern that makes the audit query fast.
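The join itself is simple once the three streams are normalized to records keyed by image digest. A minimal sketch in Python, with made-up record shapes standing in for the fields extracted from the three logs:

```python
# Sketch: join admission, signing, and build records on image digest.
# Record shapes here are illustrative; real log entries need field
# extraction (digest, decision, principal, commit) before this step.
from collections import defaultdict

def join_on_digest(admissions, signings, builds):
    """Return one trace per digest: deploy decision + signer + source commit."""
    index = defaultdict(dict)
    for a in admissions:
        index[a["digest"]]["admission"] = a["decision"]
    for s in signings:
        index[s["digest"]]["signer"] = s["principal"]
    for b in builds:
        index[b["digest"]]["commit"] = b["source_commit"]
    return dict(index)

traces = join_on_digest(
    [{"digest": "sha256:abc", "decision": "ALLOW"}],
    [{"digest": "sha256:abc", "principal": "builder@prod.iam.gserviceaccount.com"}],
    [{"digest": "sha256:abc", "source_commit": "4f2a91c"}],
)
```

The same keyed-merge shape works whether the runner is a Dataflow pipeline or a scheduled BigQuery query; the digest key is what makes the trace auditable end to end.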

How Safeguard Helps

Safeguard verifies Binary Authorization attestations on every Artifact Registry image at scan time, regardless of whether the image is currently deployed, and produces a continuous posture signal that maps attested images to deployed workloads. The signal flags images that are deployed but not attested, attested by an unrecognised attestor, or attested by an attestor whose signing key has been rotated.

Griffin AI generates per-pipeline attestors, KMS signing keys, GKE cluster policies, and Cloud Run and Cloud Functions deployment configurations from an existing GCP estate, then opens merge requests against the customer's IaC repo to execute the rollout. The MR-driven workflow makes the rollout tractable in projects with dozens of clusters and hundreds of services.

Safeguard's SBOM module produces the SBOM attestation that pairs with the build attestation, the TPRM module tracks the trust state of every base image and dependency the build consumes, and the policy gate module integrates with Cloud Build approvals so a build approval can be wired to a Safeguard policy decision. The phase-progression dashboard is built into the platform, so the rollout's progress is data-driven rather than guesswork.

For multi-project GCP organizations, Safeguard rolls up Binary Authorization posture across all projects, with a single view of which workloads are out of policy, which deploy paths still bypass attestation, and which exception list entries are past their deadline. The reachability scoring on attested images cuts noise dramatically, freeing engineering time for vulnerabilities that actually affect deployed workloads.
