Confidential computing moved from platform curiosity to viable supply-chain control between 2022 and 2024. Intel TDX shipped with Sapphire Rapids (4th Gen Xeon) in Q1 2023 and is generally available on Azure DCesv5 and Google Cloud C3 TDX instances. AMD SEV-SNP shipped with Milan and Genoa CPUs and is available on Azure DCasv5/ECasv5 and AWS m6a/m7a variants (where the hypervisor supports it). AWS Nitro Enclaves, conceptually adjacent but architecturally distinct, provide a VM-level enclave isolated by the Nitro hypervisor. For supply-chain use, the interesting question is not "can we boot a confidential VM" but "can we prove that a given artifact was produced by a specific hardware-attested workload," and what that buys you. This post walks through attestation flows, practical integration points with Tekton, GitHub Actions, and Sigstore, and where confidential computing genuinely changes the threat model versus being theater.
What attestation does each platform actually produce?
Each emits a signed quote or report binding measurements to the CPU's key hierarchy. Intel TDX produces a TDREPORT containing MRTD (the TD measurement register), RTMRs (runtime measurement registers 0-3), and the TD configuration; the Quoting Enclave converts this into a TD quote signed with an attestation key certified through the Intel PCS (Provisioning Certification Service) chain, rooted at an Intel-owned root CA. AMD SEV-SNP produces an attestation report from the PSP (Platform Security Processor) containing MEASUREMENT, HOST_DATA, and REPORT_DATA, signed by the VCEK under a chain rooted at the AMD ARK (AMD Root Key). AWS Nitro Enclaves produce an attestation document signed by Nitro's hardware-rooted key chain, with PCR0 through PCR8 measurements covering the enclave image, kernel, and ramdisk. In all cases the verifier checks the signature chain against a manufacturer-rooted trust store and evaluates the measurements against an expected allowlist. The CNCF Confidential Containers project (CoCo) 0.10 standardizes this across TDX and SNP via the attestation-agent and the KBS (Key Broker Service).
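Whatever the platform, the verifier's job reduces to two checks: a valid manufacturer-rooted chain and an allowlisted measurement. A minimal sketch, assuming the report has already been parsed and its certificate chain validated out of band; the field names here are illustrative simplifications, not the real binary layouts:

```python
import hmac
from dataclasses import dataclass

@dataclass
class AttestationReport:
    platform: str       # "tdx", "snp", or "nitro" (illustrative tag)
    measurement: bytes  # MRTD / MEASUREMENT / PCR0, depending on platform
    chain_ok: bool      # result of X.509 chain verification against the
                        # Intel PCS, AMD ARK, or Nitro root (elided here)

def verify_report(report: AttestationReport,
                  allowlist: dict[str, set[bytes]]) -> bool:
    """Accept only reports whose chain verified and whose measurement
    appears on the per-platform allowlist."""
    if not report.chain_ok:
        return False
    expected = allowlist.get(report.platform, set())
    # Constant-time comparison against each allowlisted measurement.
    return any(hmac.compare_digest(report.measurement, m) for m in expected)
```

The allowlist is the policy surface: rotating to a new builder image means adding its measurement, and an overly broad allowlist quietly converts "attested" back into "anything goes".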
How does this plug into a build pipeline?
By making the builder itself an attested workload and binding the signing key to the attestation. A concrete flow: Tekton 0.62 runs a TaskRun inside a TDX guest on Azure DCesv5. The Tekton Chains controller produces an in-toto 1.0 provenance statement for the built artifact. The signing operation uses an HSM or cloud KMS whose release policy requires that the caller attest as a TDX workload with a specific MRTD. Azure Managed HSM's secure key release (SKR) policies and AWS KMS's key-policy attestation conditions for Nitro (via kms:RecipientAttestation:ImageSha384) both support this. The artifact's signature is then pushed to Sigstore's Rekor transparency log (Rekor 1.4+), and the SLSA provenance references the hash of the attestation quote. Downstream verifiers can then demand a provenance chain whose signing step occurred inside an attested TEE.
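For the Nitro variant of this pattern, the gate lives in the KMS key policy itself. A sketch of such a policy, built as a Python dict for readability, with a placeholder PCR0 digest and an illustrative account/role ARN (the TDX/Azure equivalent is expressed as a Managed HSM secure key release policy instead):

```python
import json

# Placeholder SHA-384 hex of the enclave image; in practice this is the
# PCR0 value printed by `nitro-cli build-enclave` for the built EIF.
ENCLAVE_PCR0 = "00" * 48

key_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DecryptOnlyFromAttestedEnclave",
        "Effect": "Allow",
        # Illustrative principal: the parent instance's build role.
        "Principal": {"AWS": "arn:aws:iam::123456789012:role/build-runner"},
        "Action": "kms:Decrypt",
        "Resource": "*",
        "Condition": {
            # KMS validates the attestation document sent with the request
            # and releases plaintext only to the measured enclave image.
            "StringEqualsIgnoreCase": {
                "kms:RecipientAttestation:ImageSha384": ENCLAVE_PCR0
            }
        }
    }]
}

print(json.dumps(key_policy, indent=2))
```

With this in place, even a caller holding the role's credentials gets a KMS access-denied error unless the request carries an attestation document whose PCR0 matches.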
What specific supply-chain threats does this actually mitigate?
Runtime compromise of the build host, signing credentials stolen from memory, and insider access to build infrastructure. A classic runner compromise like the January 2023 CircleCI incident (where malware on an engineer's laptop stole a valid session token that was then used to access customer secrets) looks different when the signing key is non-exportable and bound to a TEE: even with full root on the builder host, the signing key cannot be moved to a new machine, because the KMS refuses to release it to an unattested workload. TDX and SEV-SNP specifically remove the hypervisor and VMM from the TCB for guest memory, which is the one threat cloud tenants cannot otherwise defend against. They do not defend against bugs in the guest OS, compromised guest code, or supply-chain attacks on the container image itself, which is why attestation should be combined with reproducible builds and strong input provenance.
Where does it not help, and what are the operational costs?
It does not help when the workload inside the TEE is itself buggy, when input artifacts are untrusted, or when the attestation policy is too permissive. Cold-start costs are real: a TDX VM on Azure DCesv5 takes roughly 25 to 60 seconds longer to boot than an equivalent non-confidential instance because of memory acceptance (lazy or eager) and the attestation handshake. Memory throughput is slightly lower due to memory encryption (AES-128-XTS for SEV-SNP, AES-128-XTS with integrity in TDX); benchmarked overhead ranged from 2 to 7 percent on workloads we measured. TDX and SEV-SNP have had their share of CVEs: CVE-2023-20592 (CacheWarp, an INVD-based attack affecting AMD SEV-SNP), CVE-2024-45332 (TDX module pre-1.5 attestation issue), and the 2022 ÆPIC Leak affecting SGX. Treat TEEs as one layer, not a substitute for defense in depth.
How do you integrate this with Sigstore today?
Sigstore's Fulcio 1.6 can issue short-lived code-signing certificates to workload identities, and cosign 2.4 can include attestation evidence as an in-toto predicate. A practical pattern: the build step obtains an OIDC token from the cluster's identity provider (for example, GitHub Actions OIDC or a Kubernetes projected service account token), exchanges it with Fulcio for a short-lived certificate, signs the artifact, and publishes both the signature and the TEE quote as a separate attestation under the predicate type https://example.com/tee-attestation/v1. Verification policies written with cosign verify-attestation --policy policy.rego can then require both a valid Fulcio signature and a matching TEE quote. Kyverno 1.12 policies at admission can enforce this before the artifact runs in production.
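The admission-side check reduces to a predicate over the verified attestations. A sketch, assuming cosign has already verified the Fulcio signatures and handed back the decoded in-toto statements; the quote field layout under the hypothetical predicate type is illustrative:

```python
# Hypothetical predicate type, matching the example URI used above.
TEE_PREDICATE_TYPE = "https://example.com/tee-attestation/v1"

def admit(statements: list[dict], expected_mrtd: str) -> bool:
    """Admit an artifact only if some signature-verified in-toto
    statement carries a TEE quote whose MRTD matches the expected
    builder measurement."""
    for stmt in statements:
        if stmt.get("predicateType") != TEE_PREDICATE_TYPE:
            continue
        quote = stmt.get("predicate", {}).get("quote", {})
        if quote.get("mrtd") == expected_mrtd:
            return True
    return False
```

In production this logic would live in the Rego policy or a Kyverno rule rather than application code, but the shape is the same: signature validity and measurement match are two independent conditions, and both must hold.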
How Safeguard Helps
Safeguard treats TEE attestation evidence as first-class provenance: when a build step publishes an in-toto attestation referencing a TDX or SEV-SNP quote, Safeguard verifies the quote chain against the manufacturer's PCS or ARK anchors and records the measurements in the artifact's provenance record. Policy gates can require TEE-backed provenance for releases targeting regulated environments, using predicates from SLSA v1.0 and CSAF VEX for exception handling. The audit trail captures which measurement hashes were considered valid at decision time, so rotations and exceptions are reviewable. This keeps confidential computing tied to artifact decisions, not just infrastructure claims.