DevSecOps

Multi-Cloud Supply Chain Control Plane

A multi-cloud estate needs a single control plane for supply chain policy. This is what one looks like across AWS, Azure, and GCP in production in 2026.

Shadab Khan
Security Engineer
8 min read

The single-cloud supply chain story is well-trodden. AWS has CodePipeline, ECR, and Signer. Azure has DevOps, ACR, and Notation signing. GCP has Cloud Build, Artifact Registry, and Binary Authorization. Each cloud has its own primitives, its own audit log shape, and its own policy syntax. Single-cloud teams can pick a stack and run with it.

Multi-cloud teams cannot. A multi-cloud estate inherits all three primitive sets, all three audit shapes, and all three policy languages, and the security team has to produce a single supply chain story that works across all of them. The 2026 reality is that most large enterprises now have meaningful workloads in at least two clouds, and the control plane question is no longer optional.

What does "single control plane" actually mean here?

It means three things: consistent policy authoring, a consistent attestation format, and a consistent rollup of posture and findings. The policy authoring is a single language the security team writes in once, with cloud-specific compilers that emit the per-cloud policy artefacts. The attestation format is in-toto plus SLSA, which all three clouds support natively. The rollup is a single dashboard and a single API that lets the security team query posture across clouds without picking three different tools.

The shape of the policy artefacts differs per cloud. AWS uses CodePipeline conditions, IAM policies, and Signer configurations. Azure uses DevOps approval checks, Key Vault policies, and ACR trust policies. GCP uses Cloud Build approvals, KMS bindings, and Binary Authorization policies. The control plane has to compile a single policy intent into all three, and detect drift when a human edits the per-cloud artefacts directly.

The compilation step is the part most homegrown control planes underestimate. A "policy" in the abstract has to map cleanly onto each cloud's primitives, and the mapping is non-trivial. A policy that says "every production deploy must be signed by an authorised pipeline" compiles to different IAM bindings, different attestor configurations, and different admission controllers in each cloud. The control plane has to handle the differences without leaking them up to the policy author.
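A minimal sketch of that compilation step, in Python. The intent schema and the emitter functions are illustrative names, not any real cloud SDK; the point is the shape — one intent in, three cloud-specific artefact stubs out, with the differences hidden from the policy author:

```python
# Sketch: compile one abstract policy intent into per-cloud artefact stubs.
# All field names here are illustrative, not real AWS/Azure/GCP schemas.
from dataclasses import dataclass

@dataclass
class PolicyIntent:
    name: str
    require_signed_by: str  # logical identity of the authorised pipeline

def compile_aws(intent: PolicyIntent) -> dict:
    # Would become a Signer profile reference plus an IAM condition.
    return {"cloud": "aws",
            "signer_profile": intent.require_signed_by,
            "iam_condition": {"StringEquals":
                              {"signer:ProfileName": intent.require_signed_by}}}

def compile_gcp(intent: PolicyIntent) -> dict:
    # Would become a Binary Authorization attestor requirement.
    return {"cloud": "gcp", "binauthz_attestor": intent.require_signed_by}

def compile_azure(intent: PolicyIntent) -> dict:
    # Would become an ACR trust policy naming a Notation signing identity.
    return {"cloud": "azure", "acr_trust_identity": intent.require_signed_by}

def compile_all(intent: PolicyIntent) -> list[dict]:
    return [f(intent) for f in (compile_aws, compile_gcp, compile_azure)]
```

The compiled artefacts diverge in structure, but the author only ever touches the `PolicyIntent`.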

How do I get a consistent attestation format across clouds?

Use in-toto attestations as the wire format and SLSA as the framework. AWS supports in-toto via Signer attestations and the Sigstore tooling. Azure supports it via Notation extensibility and the standard Sigstore tooling. GCP supports it natively via Cloud Build's provenance generation. All three clouds can produce, consume, and verify the same in-toto attestation format, which means a single attestation can travel from a build in one cloud to a deploy target in another.
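For concreteness, here is what the shared wire format looks like: a minimal in-toto v1 Statement carrying a (heavily trimmed) SLSA provenance predicate. A real provenance predicate carries far more build metadata; the `builder_id` value below is a placeholder:

```python
import hashlib

def make_statement(artifact_name: str, artifact_bytes: bytes, builder_id: str) -> dict:
    """Build an in-toto v1 Statement with a trimmed SLSA provenance predicate.

    This is the unsigned payload; in practice it is wrapped in a DSSE
    envelope and signed by the pipeline's key.
    """
    return {
        "_type": "https://in-toto.io/Statement/v1",
        "subject": [{
            "name": artifact_name,
            # The subject digest binds the attestation to the exact artifact.
            "digest": {"sha256": hashlib.sha256(artifact_bytes).hexdigest()},
        }],
        "predicateType": "https://slsa.dev/provenance/v1",
        "predicate": {"runDetails": {"builder": {"id": builder_id}}},
    }
```

Because the statement is cloud-neutral JSON bound to the artifact digest, the same document verifies identically whether the admission check runs in EKS, AKS, or GKE.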

The cross-cloud attestation case is more common than people realize. A build in Cloud Build that publishes to Artifact Registry, with the artifact also mirrored to ECR, with the ECR artifact deployed to EKS, is a real pattern in many enterprises. The attestation that Cloud Build produces is a valid attestation for the EKS deploy, as long as the EKS admission controller trusts the GCP signing key. The control plane's job is to manage the trust relationships across clouds so the attestation flows freely.

The trust relationship management is where most homegrown control planes fail. The verification side has to know the public keys for every signing pipeline in every cloud, and the keys rotate. The control plane has to maintain a key directory, distribute keys to verification surfaces, and reconcile rotation events across clouds. This is a small but real piece of operational infrastructure.

What does the rollup layer need to handle?

It needs to ingest the audit logs from all three clouds, normalize the events into a common schema, and join them to produce a continuous posture signal that can answer "is this workload's supply chain healthy" regardless of where the workload runs. The normalization is the hard part; the per-cloud audit logs have different schemas, different latency, and different completeness.

CloudTrail on AWS is reasonably complete and reasonably timely, and Azure Activity Log and Google Cloud Audit Logs are comparable. The ingest pipelines have the same shape across clouds — a stream from each cloud's native logging service into a central data store, with a normalization step that produces a common event format. The work is in the normalization, not the ingest.

The common event format has to capture the four primitives the control plane cares about: artifact production, attestation generation, policy evaluation, and deploy admission. Every event in every cloud falls into one of those four buckets, and the rollup queries are written against the common format rather than the per-cloud schemas.

How do I handle policy drift across clouds?

Continuous reconciliation. The control plane reads the current policy state in each cloud, diffs it against the intended state, and either auto-corrects or alerts depending on the criticality. The auto-correction path is appropriate for low-risk drift; the alert-only path is appropriate for high-risk drift that should be investigated before being fixed.
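The reconciliation loop reduces to a diff with a risk split. A sketch, with flat dicts standing in for the real per-cloud artefact state:

```python
def reconcile(intended: dict, actual: dict, high_risk_keys: set):
    """Diff intended vs. actual policy state.

    Low-risk drift goes to the auto-correction queue; high-risk drift
    goes to the alert queue for investigation before any fix.
    """
    corrections, alerts = {}, {}
    for key, want in intended.items():
        have = actual.get(key)
        if have == want:
            continue  # no drift on this key
        if key in high_risk_keys:
            alerts[key] = (want, have)   # investigate first
        else:
            corrections[key] = want      # safe to auto-correct
    return corrections, alerts
```

The interesting engineering is in what counts as `actual` (read live from each cloud's APIs) and in classifying which keys are high-risk, not in the diff itself.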

The drift detection has to handle the case where a human has legitimately edited the per-cloud artefact for a good reason. The control plane should not blindly revert legitimate human changes. The pattern that works is to record the human change, surface it in the policy authoring tool, and require an explicit decision to either accept the change or revert it. This keeps the human in the loop and prevents the control plane from being a source of frustration.

The high-risk drift cases are the ones to alert on without auto-correcting. A loosened admission policy, a removed signature requirement, a weakened key binding — these are the changes that an attacker would make to enable a supply chain attack, and they are also the changes that a stressed engineer might make to unblock a deploy. The alert lets the security team decide which case it is.

What about cross-cloud secrets and credential federation?

Cross-cloud federation has matured to the point where it is the default pattern for any pipeline that touches more than one cloud. AWS to Azure, AWS to GCP, Azure to GCP, and all the reverse directions are well-supported via OIDC federation. The control plane should require federation rather than long-lived credentials for any cross-cloud action, with no exceptions.

The federation trust policies are the artefacts the control plane has to manage. Each direction is a small piece of configuration in the source cloud's identity provider and the destination cloud's federation configuration. The configurations are static enough to live in IaC, and the control plane should both author them and detect drift.
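One direction, as a concrete example: an AWS role trust policy that accepts a GCP service account via OIDC federation. The function below emits the policy document shape; the account and subject values in any real deployment come from the source cloud, and production policies typically pin the audience claim as well:

```python
def aws_trust_policy_for_gcp(oidc_provider_arn: str, gcp_sa_subject: str) -> dict:
    """Sketch an AWS role trust policy for a GCP service account via OIDC.

    oidc_provider_arn: ARN of the Google OIDC identity provider in IAM.
    gcp_sa_subject: the GCP service account's unique ID (the token's sub claim).
    """
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Federated": oidc_provider_arn},
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                # Pin the subject so only this one service account qualifies.
                "StringEquals": {"accounts.google.com:sub": gcp_sa_subject},
            },
        }],
    }
```

Because the document is plain JSON, it lives naturally in IaC, which is exactly where the control plane should author it and watch it for drift.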

How do I make the rollup useful for different audiences?

The security engineering team wants drill-down: which finding, which artefact, which pipeline, which deploy. The platform engineering team wants posture: which pipelines are out of policy, which clusters are out of policy, which exceptions are past their deadline. The compliance team wants evidence: which control was applied, when, by whom, and against what change. The control plane should serve all three audiences from the same data, with role-appropriate views.

The role-appropriate views are not just UI. They are also API surface. The security engineering team queries a different endpoint than the compliance team, with a different response shape, even though the underlying data is the same. The control plane that serves all three from a single API is the one that scales with the organization.
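The three views over one event store can be sketched as three projections of the same normalized events. The event fields below extend the common format with hypothetical `passed`, `pipeline`, and `policy` keys for policy-evaluation events:

```python
def security_view(events: list[dict]) -> list[dict]:
    # Drill-down: full event detail for every failed evaluation.
    return [e for e in events
            if e["bucket"] == "policy_evaluation" and not e["passed"]]

def platform_view(events: list[dict]) -> dict:
    # Posture: failure counts per pipeline, no per-event detail.
    counts: dict[str, int] = {}
    for e in events:
        if e["bucket"] == "policy_evaluation" and not e["passed"]:
            counts[e["pipeline"]] = counts.get(e["pipeline"], 0) + 1
    return counts

def compliance_view(events: list[dict]) -> list[dict]:
    # Evidence: which control ran, when, and by whom -- pass or fail.
    return [{"control": e["policy"], "when": e["timestamp"], "actor": e["actor"]}
            for e in events if e["bucket"] == "policy_evaluation"]
```

Same rows, three response shapes — which is why the single-API design scales: adding an audience is adding a projection, not a pipeline.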

How Safeguard Helps

Safeguard is the rollup layer for multi-cloud supply chain control. It ingests audit logs, attestation state, and policy configuration from AWS, Azure, and GCP, normalizes the events into a common schema, and produces a single posture signal across clouds. The posture signal is the answer to "is the supply chain healthy" regardless of which cloud the workload runs in.

Griffin AI compiles a single policy intent into the per-cloud artefacts each cloud requires — IAM policies, Binary Authorization configurations, ACR trust policies, CodePipeline conditions, federated identity bindings — and opens pull requests against the customer's IaC repo to roll the artefacts out. The PR-driven workflow keeps a human in the loop for every change.

Safeguard's drift detection runs continuously across clouds, with auto-correction for low-risk drift and alerting for high-risk drift. The drift signal is the leading indicator of a supply chain attack in progress and of a misconfiguration that has weakened the control plane.

The TPRM module tracks the trust state of every external dependency consumed by every pipeline in every cloud, and the SBOM module attaches in-toto attestations that travel cleanly across clouds. The container self-healing module rewrites image references in deployed workloads automatically when a patched image is published anywhere in the multi-cloud estate, regardless of which cloud the build ran in or which cloud the deploy lives in.

For enterprise security teams, Safeguard is the single pane of glass that the multi-cloud control plane question requires, with a consistent policy authoring surface, a consistent attestation format, and a consistent rollup that does not leak the per-cloud differences up to the security team.
