Cloud Security

Multi-Cloud Software Supply Chain Abstractions

Running supply chain controls across AWS, Azure, and GCP means picking the right abstractions. Here are the ones that hold up, and the ones you will regret.

Shadab Khan
Security Engineer
7 min read

Multi-cloud is a word that regulators love and that platform engineers live with reluctantly. Whatever your team's stated posture is, the reality in 2026 is that most organizations of meaningful size have workloads in at least two of AWS, Azure, and GCP, usually because of an acquisition or a regulated-workload carve-out. The question is not whether to be multi-cloud. The question is where to put the supply chain abstractions so that you can change the implementation on each cloud without rewriting the policy.

This is the design guide I wish existed when I was first tasked with making supply chain controls coherent across three clouds. It covers the abstractions that hold up, the ones that leak, and the places where pretending the clouds are the same will eventually bite you.

Why do most multi-cloud supply chain abstractions leak?

Because the primitives are genuinely different under the hood, and the abstractions pretend they are not. Identity federation is the clearest example. AWS IAM, Azure Entra, and GCP IAM all support OIDC federation, but the trust models, token claim names, and authorization semantics are different in ways that affect security. An abstraction layer that hides these differences makes the happy path easy and the failure mode invisible.

The second reason is that cloud providers add features at different paces. AWS shipped ECR pull-through caches before Azure had a comparable ACR feature, and GCP's Artifact Analysis was more usable than AWS Inspector until recently. An abstraction that standardizes on the least common denominator loses features; one that targets the most capable provider is painful to implement everywhere else. There is no universally correct answer, and the wrong answer is to pretend the tension does not exist.

The third reason is tooling. A pipeline that works identically on all three clouds usually means you have rewritten it in something like Bazel or Earthly with cloud-specific adapters, which is an engineering investment most orgs cannot sustain. The honest version is that the build pipeline is cloud-native, and the abstraction is above it, at the artifact and policy layer.

Which abstractions actually hold up across clouds?

Three: SBOM format, signing format, and policy-as-code. The rest — identity, networking, scanner output, registry semantics — leak in ways that will eventually require per-cloud handling.

SBOMs work. CycloneDX and SPDX are both widely supported, the tooling is mature, and the abstractions are meaningful. A CycloneDX SBOM generated from a container image has the same fields and the same semantics regardless of where the image was built. Normalize on one format (I prefer CycloneDX for ease of use, SPDX for compliance-heavy environments) and insist on it across all clouds.
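
To make that concrete, here is a minimal sketch of a cloud-agnostic SBOM gate in Python, assuming a CycloneDX JSON document (as produced by tools like syft with `-o cyclonedx-json`). The required-field list is an illustrative policy choice for this sketch, not a CycloneDX conformance rule:

```python
import json
import sys

# Illustrative gate: the required fields are a policy choice for this
# sketch, not a CycloneDX conformance rule.
REQUIRED_TOP_LEVEL = ("bomFormat", "specVersion", "components")

def check_cyclonedx(path: str) -> list[str]:
    """Return a list of problems; an empty list means the SBOM passes."""
    with open(path) as f:
        sbom = json.load(f)
    problems = [k for k in REQUIRED_TOP_LEVEL if k not in sbom]
    if sbom.get("bomFormat") != "CycloneDX":
        problems.append(f"unexpected bomFormat: {sbom.get('bomFormat')!r}")
    # Require a purl on every component so downstream matching works the
    # same way regardless of which cloud built the image.
    for c in sbom.get("components", []):
        if "purl" not in c:
            problems.append(f"component without purl: {c.get('name', '<unnamed>')}")
    return problems

if __name__ == "__main__":
    issues = check_cyclonedx(sys.argv[1])
    for issue in issues:
        print("FAIL:", issue)
    sys.exit(1 if issues else 0)
```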

Signing formats are the next abstraction that holds. Notation and Cosign are both cross-cloud in the sense that the signature artifact is portable between registries. The keys and signing identities are per-cloud, but the signature format, the trust policy schema, and the verification semantics are consistent. You can sign an image in Azure and verify it in AWS, as long as the trust stores are kept in sync.
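
Here is a sketch of the sign-once, verify-everywhere flow using the Cosign CLI from Python; the registry hosts, image names, and key paths are placeholders:

```python
import subprocess

# Placeholders: registry hosts, image names, and key paths are illustrative.
SIGNED_IN_AZURE = "myregistry.azurecr.io/payments/app:1.4.2"
MIRRORED_IN_AWS = "123456789012.dkr.ecr.us-east-1.amazonaws.com/payments/app:1.4.2"
PUBLIC_KEY = "cosign.pub"  # the same trust material distributed to every cloud

def cosign_verify(image: str) -> bool:
    """Verify a Cosign signature; the signature travels with the image."""
    result = subprocess.run(
        ["cosign", "verify", "--key", PUBLIC_KEY, image],
        capture_output=True, text=True,
    )
    return result.returncode == 0

# The image was signed once, at build time in Azure:
#   cosign sign --key cosign.key myregistry.azurecr.io/payments/app:1.4.2
# After replication to ECR, the same key verifies the same signature:
if __name__ == "__main__":
    for image in (SIGNED_IN_AZURE, MIRRORED_IN_AWS):
        print(image, "->", "verified" if cosign_verify(image) else "FAILED")
```

One operational detail: naive replication can drop the signature, because Cosign stores it as a separate OCI artifact alongside the image. `cosign copy` moves an image together with its signatures, which is what you want between the canonical registry and each cloud.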

Policy-as-code is the third. OPA, Kyverno, and Conftest all work across cloud providers, because the policy is about the shape of resources and artifacts, not about the cloud semantics behind them. A Kyverno policy that requires every image to have a signature and an SBOM works the same on EKS, AKS, and GKE. The cloud-specific bits — how to fetch the SBOM, where the signature lives — are adapters, and the policy itself is portable.
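
The split is easy to see in code. A minimal sketch with illustrative names: the policy function is portable, and only the per-cloud fact fetchers differ:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ArtifactFacts:
    """Cloud-neutral facts the policy reasons about."""
    image: str
    signature_verified: bool
    sbom_format: str | None  # e.g. "CycloneDX", or None if missing

def policy(facts: ArtifactFacts) -> list[str]:
    """The portable part: the same rules on EKS, AKS, and GKE."""
    violations = []
    if not facts.signature_verified:
        violations.append(f"{facts.image}: missing or invalid signature")
    if facts.sbom_format != "CycloneDX":
        violations.append(f"{facts.image}: no CycloneDX SBOM attached")
    return violations

# The adapters are the cloud-specific part: each knows where its registry
# keeps signatures and SBOMs. Stubs here; each would call its cloud's API.
def ecr_facts(image: str) -> ArtifactFacts: ...
def acr_facts(image: str) -> ArtifactFacts: ...
def gar_facts(image: str) -> ArtifactFacts: ...

ADAPTERS: dict[str, Callable[[str], ArtifactFacts]] = {
    "aws": ecr_facts, "azure": acr_facts, "gcp": gar_facts,
}
```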

Which abstractions should I not try to unify?

Identity federation, network egress control, and secrets management. Attempting to abstract these typically results in an in-house tool that is worse than any of the cloud-native options it replaces.

Identity federation is worth doing in each cloud natively. AWS uses IAM Roles Anywhere and OIDC federation against STS. Azure uses workload identity federation against Entra. GCP uses Workload Identity Federation with attribute conditions. The ergonomics are similar at a high level, but the trust policies, the claim naming, and the failure modes are different enough that a unified abstraction either exposes those differences (and is not really an abstraction) or hides them (and introduces a new trust model that is not the one your auditors will ask about).
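
To see why, compare how the same CI identity, say a GitHub Actions workflow in an acme/payments repository, is expressed in each cloud's trust configuration. Field names follow each provider's schema; the values are illustrative:

```python
# The same CI identity (a GitHub Actions workflow in acme/payments),
# expressed in each cloud's native trust configuration.
GITHUB_SUB = "repo:acme/payments:ref:refs/heads/main"

AWS_TRUST_CONDITION = {  # condition block in an IAM role trust policy
    "StringEquals": {"token.actions.githubusercontent.com:aud": "sts.amazonaws.com"},
    "StringLike": {"token.actions.githubusercontent.com:sub": GITHUB_SUB},
}

AZURE_FEDERATED_CREDENTIAL = {  # Entra federated identity credential
    "issuer": "https://token.actions.githubusercontent.com",
    "subject": GITHUB_SUB,
    "audiences": ["api://AzureADTokenExchange"],
}

GCP_WIF_PROVIDER = {  # Workload Identity Federation OIDC provider
    "issuer_uri": "https://token.actions.githubusercontent.com",
    "attribute_mapping": {"google.subject": "assertion.sub"},
    "attribute_condition": 'assertion.repository == "acme/payments"',
}
# Three schemas, three claim vocabularies, three failure modes: a unified
# wrapper either surfaces all of this or hides it behind a new trust model.
```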

Network egress control is also cloud-native. AWS has Transit Gateway and Network Firewall. Azure has Virtual WAN and Azure Firewall. GCP has Cloud NGFW and Cloud Service Mesh. Each has a different mental model. Forcing them into a common abstraction loses both functionality and visibility. What you can standardize is the egress policy — a list of domains and CIDRs that are allowed for each build class — and apply it in each cloud using the native tool.
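
A sketch of that split, with hypothetical build classes: the allowlist is the canonical artifact, and each renderer targets the native firewall:

```python
# The canonical egress policy is the portable artifact. Build classes and
# destinations are illustrative.
EGRESS_POLICY = {
    "build-untrusted": {
        "domains": ["proxy.internal.example.com"],
        "cidrs": [],
    },
    "build-trusted": {
        "domains": ["registry.npmjs.org", "pypi.org", "files.pythonhosted.org"],
        "cidrs": ["10.20.0.0/16"],  # internal artifact mirror
    },
}

# Each renderer targets the native tool; the allowlist itself never forks.
def render_aws_network_firewall(policy: dict) -> dict: ...
def render_azure_firewall(policy: dict) -> dict: ...
def render_gcp_ngfw(policy: dict) -> dict: ...
```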

Secrets management is the third. AWS Secrets Manager, Azure Key Vault, and GCP Secret Manager have roughly equivalent features, but the access patterns are different, and the integrations with the rest of each cloud's services are much better when you use the native option. Do not put a meta-secrets-manager in front of them unless you have a truly compelling reason.
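
For illustration, here is what the three native access patterns look like with the standard SDKs; the secret names, vault URL, and project ID are placeholders:

```python
# Three native access patterns, deliberately not unified behind a
# meta-secrets-manager. Names, vault URL, and project ID are placeholders.

def aws_secret(name: str) -> str:
    import boto3
    client = boto3.client("secretsmanager")
    return client.get_secret_value(SecretId=name)["SecretString"]

def azure_secret(name: str) -> str:
    from azure.identity import DefaultAzureCredential
    from azure.keyvault.secrets import SecretClient
    client = SecretClient("https://build-vault.vault.azure.net",
                          DefaultAzureCredential())
    return client.get_secret(name).value

def gcp_secret(name: str) -> str:
    from google.cloud import secretmanager
    client = secretmanager.SecretManagerServiceClient()
    path = f"projects/acme-builds/secrets/{name}/versions/latest"
    return client.access_secret_version(name=path).payload.data.decode()
```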

How do I design the policy layer so it survives cloud change?

Put the policy in one place, keep it in Git, and render cloud-specific resources from it using a code generator. The cloud-specific resources — IAM policies, image registry configurations, admission controller deployments — are build artifacts, not the source of truth.

The pattern that works is:

  1. A policy repository that contains the organizational supply chain rules in a single canonical form (I prefer a Rego module with tests, but a structured YAML is fine too).
  2. Per-cloud renderers that translate the canonical policy into cloud-specific configuration.
  3. CI that validates the rendered artifacts before applying them.
  4. Drift detection that reads back the deployed configuration and compares it to the rendered form.

This lets you make a policy change in one place and have it propagate to all three clouds. The alternative — editing each cloud's configuration independently — leads to divergence within a quarter, and divergence is where supply chain controls break down.
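
A minimal sketch of the render-and-drift loop, with the canonical policy format, renderers, and read-back functions all left as illustrative stubs:

```python
import json

def load_canonical_policy(path: str) -> dict:
    """The single source of truth, versioned in Git."""
    with open(path) as f:
        return json.load(f)

def render(policy: dict, cloud: str) -> dict:
    """Translate canonical rules into cloud-specific configuration."""
    renderers = {"aws": render_aws, "azure": render_azure, "gcp": render_gcp}
    return renderers[cloud](policy)

def detect_drift(cloud: str, rendered: dict) -> list[str]:
    """Read back deployed config and diff it against the rendered form."""
    deployed = read_deployed_config(cloud)  # hypothetical per-cloud reader
    return [key for key in rendered if deployed.get(key) != rendered[key]]

# Stubs: each would be a real renderer or API reader in practice.
def render_aws(policy: dict) -> dict: ...
def render_azure(policy: dict) -> dict: ...
def render_gcp(policy: dict) -> dict: ...
def read_deployed_config(cloud: str) -> dict: ...
```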

What about cross-cloud artifact promotion?

Use a vendor-neutral intermediate registry for artifacts that need to move between clouds: sign in the source, verify in the destination, and never re-sign in the destination. A common pattern is a Harbor instance or a public OCI registry that acts as the canonical source; each cloud's native registry replicates from it, and verification runs in the destination cloud before deploy.

The temptation to re-sign during replication is strong and wrong. Re-signing creates an asymmetry where the destination's signer vouches for content the destination did not actually verify; it is a trust laundering pattern. The correct pattern is a chain of signatures: the original signer signs at build time, optional replication-attester signatures are added if needed for regional compliance, and the verification at deploy time checks the full chain.
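
A sketch of the deploy-time chain check using the Cosign CLI, where the key paths are illustrative and the replication attestation is required only where a regional rule adds it:

```python
import subprocess

# The build-time signature is mandatory everywhere; the replication
# attester's signature is checked only where a regional rule adds it.
# Key paths are illustrative.
BUILD_KEY = "keys/build-signer.pub"      # original signer, applied at build
REPLICATION_KEY = "keys/replicator.pub"  # optional attester, never a re-sign

def verify(image: str, key: str) -> bool:
    return subprocess.run(
        ["cosign", "verify", "--key", key, image],
        capture_output=True,
    ).returncode == 0

def verify_chain(image: str, require_replication_attestation: bool) -> bool:
    if not verify(image, BUILD_KEY):
        return False  # the destination never vouches for unverified content
    if require_replication_attestation and not verify(image, REPLICATION_KEY):
        return False
    return True
```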

How Safeguard.sh Helps

Safeguard.sh applies reachability analysis across all three clouds in a single graph, so findings from AWS Inspector, Azure Defender, and GCP Artifact Analysis are normalized and filtered by whether the vulnerable code is actually called; in practice, reachability cuts 60 to 80 percent of the multi-cloud noise that would otherwise triple your triage queue. Griffin AI generates cloud-specific renderers from a single canonical supply chain policy and detects drift when per-cloud configuration diverges from the Git source. Safeguard's SBOM module normalizes SBOMs across CycloneDX and SPDX formats down to 100 levels of dependency depth, and its TPRM module tracks every external vendor touching workloads in any of the three clouds. Container self-healing rewrites image references across EKS, AKS, and GKE simultaneously when a patched image is published and verified against the unified trust policy.
