Multi-arch images — a single image tag that resolves to different binary contents depending on whether the puller is on x86 or ARM — are increasingly normal in 2026. The shift to Graviton on AWS, to Ampere on the public cloud generally, and to Apple Silicon for developer laptops means most platform teams ship multi-arch or are planning to.
Every supply chain control built for single-arch images has subtle failure modes in multi-arch. Signing can sign the wrong thing. SBOMs can describe a different binary than is actually running. Attestations can fail to verify on one architecture and succeed on another. This is the set of pitfalls to plan around if your pipeline is going multi-arch or is already there.
What is actually inside a multi-arch image manifest?
An image index (the OCI term; Docker calls it a manifest list) is a top-level JSON document that lists several platform-specific image manifests by digest. When you docker pull the top-level tag, the client picks the right platform manifest for its architecture and pulls that. Each platform manifest has its own layers, its own config, and its own digest.
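The resolution step is small enough to sketch directly. The index below is illustrative (digests are fake placeholders), but its structure matches the OCI image index media type:

```python
import json

# Illustrative OCI image index; digests are fake placeholders.
INDEX_JSON = """
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.oci.image.index.v1+json",
  "manifests": [
    {"mediaType": "application/vnd.oci.image.manifest.v1+json",
     "digest": "sha256:aaaa", "size": 1234,
     "platform": {"os": "linux", "architecture": "amd64"}},
    {"mediaType": "application/vnd.oci.image.manifest.v1+json",
     "digest": "sha256:bbbb", "size": 1234,
     "platform": {"os": "linux", "architecture": "arm64"}}
  ]
}
"""

def select_manifest(index: dict, os_name: str, arch: str) -> str:
    """Return the digest of the platform manifest a client would pull."""
    for m in index["manifests"]:
        p = m.get("platform", {})
        if p.get("os") == os_name and p.get("architecture") == arch:
            return m["digest"]
    raise LookupError(f"no manifest for {os_name}/{arch}")

index = json.loads(INDEX_JSON)
print(select_manifest(index, "linux", "arm64"))  # sha256:bbbb
```

This is exactly what the client does on pull: match its own os/architecture pair against the index entries and fetch the winning digest.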
The immediate implication: a multi-arch "image" is really multiple images with a common index. If you sign "the image" without being careful, you might be signing the index, the platform manifests, or both, and your verification logic needs to match.
Cosign handles this correctly now — cosign sign on a multi-arch tag signs the index by default, and cosign verify checks signatures on the index. But older tooling, older docs, and community blog posts often describe a workflow that signs individual platform manifests, which breaks in various ways when downstream tools look at the index. If you are copying a pipeline from a 2022 blog post, check what is actually being signed.
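One way to audit an inherited pipeline is to compare the digest a signature actually refers to against the index digest and the per-platform digests. A minimal sketch, assuming you have already extracted those digests (the values here are fake; in practice they come from something like docker buildx imagetools inspect and from the signature payload):

```python
def classify_signed_subject(subject_digest: str, index_digest: str,
                            platform_digests: dict[str, str]) -> str:
    """Report whether a signature's subject is the index, a platform manifest, or neither.

    platform_digests maps a platform manifest digest to its platform string,
    e.g. {"sha256:aaaa": "linux/amd64"}.
    """
    if subject_digest == index_digest:
        return "index"
    if subject_digest in platform_digests:
        return "platform:" + platform_digests[subject_digest]
    return "unknown"

# Fake digests standing in for real inspect output:
digests = {"sha256:aaaa": "linux/amd64", "sha256:bbbb": "linux/arm64"}
print(classify_signed_subject("sha256:aaaa", "sha256:idx0", digests))  # platform:linux/amd64
```

If this reports "platform:…" for the only signature you have, you are in the 2022-blog-post workflow and index-level verification will fail.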
The other implication: SBOMs need to be attached at the right level. An SBOM of the ARM variant is different from an SBOM of the x86 variant, because the layer contents differ. A single SBOM attached to the index is incorrect — it can only accurately describe one architecture at best. The right pattern is an SBOM per platform manifest, with the index tying them together.
Where does buildx go wrong?
buildx with the --platform linux/amd64,linux/arm64 flag produces a correct multi-arch image. It does not reliably produce correct attestations by default: the behavior has shifted across buildx releases, and not every version gets it right.
Versions of buildx from the 0.11 series produced provenance attestations that were attached at the index level without per-platform differentiation. Verifying a platform-specific attestation required the verifier to do extra work. Later versions fixed this and produced per-platform attestations with a sane shape. If your CI is pinned to an older buildx, you may have attestations that look valid but do not match what most verifiers expect.
The related issue is SBOM generation with buildx's --attest=type=sbom flag. The generated SBOMs are sometimes for the build cache rather than the final image, depending on the builder backend. They are usually correct but occasionally off, and the error surface is confusing because everything succeeds.
The pattern I trust more than relying on buildx attestations: build the multi-arch image with buildx, then run syft against each platform manifest separately, and attach the resulting SBOMs as signed in-toto attestations with explicit platform subject references. This is more steps but the output is unambiguous.
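That pattern can be expressed as a small command plan. The image name and digests below are hypothetical; the syft and cosign invocations use flags that exist in current releases (syft digest references and -o format=path output, cosign attest with --type and --predicate), but pin and verify your own tool versions:

```python
def sbom_plan(image: str, platform_digests: dict[str, str]) -> list[str]:
    """Emit one syft + cosign command pair per platform manifest.

    Commands are returned as strings, not executed, so the plan can be
    reviewed and logged before anything touches the registry.
    """
    cmds = []
    for platform, digest in platform_digests.items():
        sbom = "sbom-" + platform.replace("/", "-") + ".spdx.json"
        # SBOM the final per-platform image, not the build environment.
        cmds.append(f"syft registry:{image}@{digest} -o spdx-json={sbom}")
        # Attach it as a signed in-toto attestation on that platform digest.
        cmds.append(f"cosign attest --type spdxjson --predicate {sbom} {image}@{digest}")
    return cmds

plan = sbom_plan("registry.example.com/app:1.2.3",
                 {"linux/amd64": "sha256:aaaa", "linux/arm64": "sha256:bbbb"})
for cmd in plan:
    print(cmd)
```

Attesting against image@digest rather than the tag is the point: each SBOM is pinned to exactly one platform manifest, and the index ties the set together.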
How does cross-compilation break SBOM accuracy?
Cross-compiling an ARM binary on an x86 build machine produces correct binaries and confusing SBOMs. Many SBOM generators introspect the build environment rather than the output artifact, and the build environment is the x86 machine, not the ARM target. The result is an SBOM for the ARM image that lists x86 packages.
The scanners that do this — Syft is careful to avoid it, some others are not — produce SBOMs that pass validation and are deeply wrong. An auditor comparing the SBOM to the actual image contents will find the mismatch. A CVE scanner running against the SBOM will miss ARM-specific vulnerabilities and report x86-specific ones that do not apply.
The way to catch this is to run the SBOM generator against the final image artifact, not against the build environment. Syft against a pulled image manifest extracts the actual packages in that image. Syft against a running build container extracts whatever packages are in the build container, which is usually not what you want.
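A cheap sanity check on top of that, assuming your SBOM carries a package-level architecture field (both SPDX and CycloneDX can record one; the field names here are illustrative): flag packages whose recorded architecture contradicts the image's claimed platform.

```python
def arch_mismatches(claimed_arch: str, packages: list[dict]) -> list[str]:
    """Return names of packages whose recorded arch contradicts the target.

    Architecture-neutral packages (scripts, data) are ignored; only concrete
    binary architectures can contradict the claimed platform.
    """
    neutral = {"noarch", "all", ""}
    return [
        p["name"] for p in packages
        if p.get("arch", "") not in neutral and p["arch"] != claimed_arch
    ]

# An arm64 image whose SBOM was taken from the x86 build environment:
pkgs = [
    {"name": "openssl", "arch": "x86_64"},
    {"name": "ca-certificates", "arch": "noarch"},
    {"name": "libfoo", "arch": "aarch64"},
]
print(arch_mismatches("aarch64", pkgs))  # ['openssl']
```

A non-empty result for a supposedly-ARM SBOM is the cross-compilation smell described above: the generator introspected the build host, not the artifact.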
A related issue is QEMU-emulated builds on buildx. When you ask buildx on an x86 host to build for ARM, it uses QEMU user-mode emulation. The resulting binaries are ARM, but the build process runs ARM tools under emulation on an x86 host. Package metadata can get mangled in ways that depend on the tool — Python wheels and Go binaries usually survive, native C extensions sometimes do not. Check the resulting images on actual ARM hardware before you commit to the pipeline.
What about attestation verification on a puller?
The puller verifies the attestation against the digest it actually pulled, which is a platform-specific manifest, not the index. Either the attestation is attached to that specific platform manifest, or the verification logic has to walk up to the index and check the index-level attestation.
Cosign supports both modes and most admission controllers use the walk-up mode. In practice, the path that works is: sign and attest at the index level, and have the verifier check the index signature whenever it pulls any child manifest. This is what Kyverno and the Sigstore Policy Controller do by default as of late 2025.
The failure mode is registries or verifiers that do not implement the index-walk correctly. They verify the platform manifest and find no signature attached (because the signature is on the index) and reject. If you hit this, it is almost always a verifier version issue. Upgrade. If you cannot upgrade, attach platform-specific signatures as well as index-level signatures. It is wasteful but it works.
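The walk-up logic a verifier needs is small. This sketch stubs out signature checking as a set of signed digests and assumes a lookup from platform digest to parent index digest; in a real verifier both come from the registry:

```python
def verify_pulled_digest(pulled: str, signed_digests: set[str],
                         parent_index: dict[str, str]) -> bool:
    """Accept a pulled manifest if it, or the index it belongs to, carries a valid signature.

    signed_digests stands in for real signature verification; parent_index maps
    a platform manifest digest to the index digest it appears in.
    """
    if pulled in signed_digests:           # platform-level signature present
        return True
    index = parent_index.get(pulled)       # walk up to the index
    return index is not None and index in signed_digests

signed = {"sha256:idx0"}                   # only the index was signed
parents = {"sha256:aaaa": "sha256:idx0"}
print(verify_pulled_digest("sha256:aaaa", signed, parents))  # True
```

A verifier missing the walk-up step stops after the first check and rejects, which is exactly the "no signature attached" failure described above.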
The other failure mode is Docker Scout, Grype, and other scanners that fetch attestations. They sometimes fetch the index-level attestation and interpret it as applicable to the platform being scanned, which can produce false negatives if the attestation is specifically for a different platform. Verify the tool you use does the right thing in multi-arch mode before you trust its output.
How do you handle a partial compromise across architectures?
Treat it as a compromise of all architectures unless you have specific evidence otherwise, because most supply chain attacks that compromise the build pipeline compromise all of its outputs regardless of platform.
The scenario: your build runs a compromised base image that was only built for amd64, and the arm64 build pulls a different (clean) base image. The amd64 image is compromised. The arm64 image is not. Do you yank only the amd64 variant from the registry, or rebuild everything?
Rebuild everything. The effort of confirming that the arm64 path is actually clean and that the compromise did not affect the shared CI machinery is usually greater than the effort of rebuilding both variants. Supply chain compromises are rarely surgical — they tend to affect shared infrastructure in ways that are not obvious at disclosure time. Rebuild, re-sign, re-attest, re-deploy.
The operational prerequisite is that your pipeline can rebuild both architectures from source on demand. If one of your architectures has accumulated a Dockerfile.arm64 that hand-copies artifacts from an S3 bucket populated by a manual build someone did in 2023, you cannot rebuild. Fix that before you need to.
What about images that really should be single-arch?
Not everything needs to be multi-arch. Build tools that only run on x86 because they depend on x86-specific instruction sets do not need an ARM variant. System utilities with no ARM use case in your environment do not need one. Multi-arch is not free — builds are slower, attestation is more complex, and the registry storage footprint is larger.
Decide per-workload. Application images that will run on heterogeneous clusters: multi-arch. Build-time utilities that run only in your CI (which you control): single-arch. Developer tools that run on Apple Silicon laptops: multi-arch (amd64 plus arm64 at minimum).
How Safeguard.sh Helps
Safeguard.sh generates and verifies per-platform SBOMs for multi-arch images, so reachability analysis operates on the actual binaries running on each architecture rather than a blurred composite — this is where the 60 to 80 percent alert-noise reduction comes from on multi-arch pipelines. Griffin AI reviews your buildx and attestation configuration and highlights the common pitfalls (wrong attestation level, QEMU-emulated SBOM drift, missing per-platform signatures) using SBOMs generated with 100-level dependency depth. TPRM tracks third-party image suppliers across the architectures you deploy on, catching cases where an arm64 variant is maintained by a different signer than the amd64 variant, and container self-healing rebuilds all architectures together when upstream patches land, keeping your multi-arch surface coherent rather than eventually-consistent.