Confidential computing on Azure crossed an inflection point in the last eighteen months. The technology — AMD SEV-SNP and Intel TDX in production, with broader hardware availability through 2025 and into 2026 — is no longer a research demo. Microsoft's confidential VM offerings now span general-purpose, memory-optimized, and GPU-accelerated SKUs, and the Microsoft Azure Attestation service has matured into a usable building block rather than a niche curiosity. The catch is that the value of confidential VMs is not "we ran the workload in a black box." The value is in what you do with the attestation report the hardware produces. A confidential VM running unattested code is a marketing slide. A confidential VM whose attestation is verified by a relying party before any secret is released or any code is permitted to run is a defensible control. For supply chain pipelines, that distinction is the entire point.
What does the attestation report actually contain?
An attestation report from a confidential VM is a signed blob produced by the platform security processor on the host. On AMD SEV-SNP, it includes the guest's measurement (a SHA-384 hash of the initial memory contents and boot configuration), a launch policy, the version of the firmware and microcode, a report data field (64 bytes the guest can use to bind the report to a session or key), and a signature chain that terminates in an AMD-issued root certificate. On Intel TDX, the analogous structure is the TDX quote, signed by an Intel Quoting Enclave whose certificates chain to Intel's Provisioning Certification Service (PCS). The report does not say "this is a trusted workload." It says "this is the measurement of what booted on this hardware, signed by hardware whose authenticity you can verify against the vendor's root of trust." A relying party then decides whether that measurement matches a known-good value. The supply chain question is: how do you know what the known-good value is?
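To make the report's structure concrete, here is a minimal sketch of pulling the relying-party-relevant fields out of a raw SEV-SNP attestation report. The byte offsets and sizes are assumptions taken from the AMD SEV-SNP ABI specification's ATTESTATION_REPORT layout; verify them against the ABI revision your firmware actually reports before relying on this.

```python
import struct

# Offsets per the AMD SEV-SNP ABI specification's ATTESTATION_REPORT layout
# (assumed here; confirm against the ABI revision your firmware reports).
REPORT_SIZE = 1184          # total ATTESTATION_REPORT size in bytes
REPORT_DATA_OFFSET = 0x50   # 64-byte guest-supplied report data
MEASUREMENT_OFFSET = 0x90   # 48-byte launch measurement (SHA-384)
SIGNATURE_OFFSET = 0x2A0    # signature covers everything before this offset

def parse_snp_report(raw: bytes) -> dict:
    """Extract the fields a relying party cares about from a raw report."""
    if len(raw) != REPORT_SIZE:
        raise ValueError(f"expected {REPORT_SIZE}-byte report, got {len(raw)}")
    version, = struct.unpack_from("<I", raw, 0x00)
    policy,  = struct.unpack_from("<Q", raw, 0x08)
    return {
        "version": version,
        "policy": policy,
        "report_data": raw[REPORT_DATA_OFFSET:REPORT_DATA_OFFSET + 64],
        "measurement": raw[MEASUREMENT_OFFSET:MEASUREMENT_OFFSET + 48].hex(),
        "signed_region": raw[:SIGNATURE_OFFSET],  # input to signature check
    }
```

Signature verification against the AMD certificate chain is deliberately elided; in practice that step happens before any of these fields are trusted.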
Where does the known-good measurement come from?
The clean answer is reproducible builds. The image that boots inside the confidential VM is built from source by a verifiable pipeline that produces both the bootable artifact and the expected attestation measurement. The measurement is published — ideally signed and stored in a transparency log — and the relying party fetches it before validating any attestation report. The dirtier reality is that most workloads boot from images produced by build systems that do not produce deterministic measurements. The pragmatic compromise is to compute the measurement once at release time on a known-good builder, sign that measurement value, and store it as an attestation predicate alongside the image. Anyone verifying an attestation report against a deployed instance checks both that the report is genuine and that the measurement in the report matches the signed expected value for the image they expected to be running.
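The relying party's check against the signed expected value can be sketched as below. The predicate's field names are illustrative, not a standard format, and verifying the predicate's own signature with the team's release public key (e.g. via cosign or an Ed25519 verify) is assumed to have already happened.

```python
import hmac

# Hypothetical shape of the signed-measurement predicate published at
# release time alongside the image. Field names are illustrative; the
# predicate's signature is assumed to have been verified before this check.
def measurement_matches(predicate: dict, report_measurement_hex: str) -> bool:
    """Compare the launch measurement from a live attestation report
    against the signed expected value recorded for the image at release."""
    expected = predicate["expected_measurement"].lower()
    actual = report_measurement_hex.lower()
    # compare_digest gives a constant-time comparison; these are public
    # values, but it costs nothing to avoid timing questions entirely.
    return hmac.compare_digest(expected, actual)
```

Both checks must pass: the report is genuine (signature chain to the vendor root) and the measurement matches. Either one alone proves nothing useful.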
How does this connect to secret release?
The most common production pattern for confidential VMs is sealed secret release: a Key Vault is configured with a release policy that requires a valid attestation report meeting specific criteria (measurement value, security version, debugging disabled) before it will release a wrapped key. The confidential VM boots, generates an attestation report, and presents it to the Microsoft Azure Attestation service, which validates it against the hardware vendor's root and issues an attestation token; the VM then presents that token to Key Vault. Key Vault checks the token against its release policy and, if it matches, returns the secret. The secret never exists in plaintext outside the confidential VM. For a supply chain pipeline, this means the keys that sign artifacts can be released only to a verified, measured signing environment — eliminating the risk that a compromised build agent reads a signing key.
{
  "version": "1.0.0",
  "anyOf": [
    {
      "authority": "https://sharedeus.eus.attest.azure.net",
      "allOf": [
        { "claim": "x-ms-isolation-tee.x-ms-attestation-type", "equals": "sevsnpvm" },
        { "claim": "x-ms-isolation-tee.x-ms-compliance-status", "equals": "azure-compliant-cvm" },
        { "claim": "x-ms-isolation-tee.x-ms-sevsnpvm-launchmeasurement", "equals": "EXPECTED_MEASUREMENT_HEX" }
      ]
    }
  ]
}
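The release call itself is a single Key Vault REST operation that carries the attestation token. This sketch builds that request without sending it; the URL shape and api-version follow the Key Vault "release key" REST operation but are assumptions here, so check them against the Key Vault REST reference for the API version you target. The vault name, key name, and tokens are placeholders.

```python
import json
import urllib.request

# Sketch of the Key Vault secure key release call. URL shape and
# api-version are assumptions modeled on the Key Vault "release" REST
# operation; confirm against the REST reference for your API version.
def build_release_request(vault_url: str, key_name: str,
                          maa_token: str, bearer_token: str,
                          api_version: str = "7.4") -> urllib.request.Request:
    url = f"{vault_url}/keys/{key_name}/release?api-version={api_version}"
    # The MAA token goes in the "target" field; Key Vault evaluates it
    # against the key's release policy before returning the wrapped key.
    body = json.dumps({"target": maa_token}).encode()
    req = urllib.request.Request(url, data=body, method="POST")
    req.add_header("Authorization", f"Bearer {bearer_token}")
    req.add_header("Content-Type", "application/json")
    return req
```

In production you would use the Azure SDK rather than raw REST, but the shape is the same: no valid attestation token, no key material.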
What attacks does this not stop?
A confidential VM with valid attestation prevents the host operator (including Microsoft) from reading the workload's memory, and the attestation report prevents a relying party from accepting a workload running unexpected code. It does not prevent vulnerabilities in the workload itself. A confidential signing environment that runs a Node.js process with a known prototype pollution vulnerability is still vulnerable to attacks against that process, just hidden from the host. Nor does it stop side-channel attacks outside the hardware's defensive scope: AMD SEV-SNP and Intel TDX each document what their protections do and do not cover, and attacks beyond that coverage (some power-analysis attacks, some microarchitectural attacks under specific conditions) remain the workload's responsibility to mitigate. The defender's mental model should be: confidential VMs shrink the trusted computing base by removing the host from it, and attestation gives you proof of what is in the remaining TCB. Everything else still requires the normal hardening playbook.
How should the pipeline actually be wired?
A working pipeline pattern in 2026 looks like this. The build job runs in a non-confidential environment, produces the bootable image, computes the expected measurement, signs the measurement with the team's release key, and publishes both the image and the signed measurement attestation to the registry. The deploy job provisions a confidential VM from the image and waits for it to boot. A separate attestation-verification job retrieves the attestation report from the running VM, validates it against the hardware vendor's root and the Microsoft Azure Attestation service, compares the measurement claim to the signed expected value, and emits a "verified" status to the deployment system. Production traffic is routed to the instance only after the verified status is recorded. Secret release policies on Key Vault require the same attestation token, so the workload cannot reach signing keys or production secrets without passing verification first. Any failure in verification triggers automatic instance replacement and an alert.
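The attestation-verification job's core logic can be sketched as below. The claim names mirror those in the release policy shown earlier but should be treated as assumptions and confirmed against a real token from your attestation provider; verifying the JWT's signature against the MAA signing keys (its JWKS endpoint) is assumed to have happened before these claim checks run.

```python
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Decode the payload segment of a JWT. Signature verification against
    the MAA signing keys must happen before these claims are trusted."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Claim names are assumptions modeled on MAA SEV-SNP tokens; confirm
# against a token issued by your attestation provider.
def verify_instance(payload: dict, expected_measurement: str) -> list:
    """Return a list of failures; an empty list means the instance passes
    and the deployment system may record 'verified' status."""
    failures = []
    tee = payload.get("x-ms-isolation-tee", {})
    if tee.get("x-ms-attestation-type") != "sevsnpvm":
        failures.append("not a SEV-SNP confidential VM")
    if tee.get("x-ms-sevsnpvm-is-debuggable", True):
        failures.append("debug mode enabled")
    if tee.get("x-ms-sevsnpvm-launchmeasurement") != expected_measurement:
        failures.append("launch measurement mismatch")
    return failures
```

Note the defaults are fail-closed: a missing debug claim counts as debuggable, and a missing measurement is a mismatch. Any non-empty failure list should trigger instance replacement and an alert, never a retry loop that eventually gives up and routes traffic anyway.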
What are the most common rollout mistakes?
Three recurring patterns. The first is treating confidential VM provisioning as the security boundary and skipping attestation entirely. This leaves you with an expensive VM SKU and no actual control benefit; the protections against host read are real but useless if no relying party ever checks the report. The second is hardcoding a single expected measurement and never updating it through patch cycles. The measurement changes when firmware, kernel, or boot configuration changes, so a static expected value forces you to either skip patches (bad) or break verification (worse). Build the measurement update into the release pipeline as a first-class artifact. The third is using debug mode in production. Confidential VMs in debug mode produce attestation reports that flag the debug bit; release policies must reject debug-mode reports, and your testing should confirm that they do.
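The debug-mode check can also be made at the raw-report level, and a test suite for your release policy should exercise it. In SEV-SNP the signal is the DEBUG bit in the guest policy field; the bit position here is an assumption taken from the AMD SEV-SNP ABI specification, so confirm it against the ABI revision you target.

```python
# DEBUG bit in the SEV-SNP guest policy field (bit 19, per the AMD SEV-SNP
# ABI specification; assumed here, confirm against your ABI revision).
SNP_POLICY_DEBUG_BIT = 1 << 19

def reject_if_debuggable(policy: int) -> None:
    """Fail closed when the guest policy permits debug access. A secret
    release path should call this (or enforce the equivalent claim check)
    before any key material moves."""
    if policy & SNP_POLICY_DEBUG_BIT:
        raise ValueError("attestation report allows debug; refusing release")
```

A production-readiness test then boots a debug-mode instance on purpose and asserts that both this check and the Key Vault release policy reject it.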
How Safeguard Helps
Safeguard tracks every confidential VM deployment across your Azure tenants, the expected measurements registered for each image, and the actual attestation reports observed at runtime — surfacing drift, debug-mode instances, and workloads running without verified attestation. Policy gates block infrastructure-as-code changes that disable attestation requirements, weaken Key Vault release policies, or deploy non-confidential SKUs into compartments that require attestation-gated signing. Griffin AI binds attestation evidence to the SBOMs and signatures of the workloads running inside, so when an auditor or incident responder asks "what code is actually executing in our confidential signing environment," the answer is a verified attestation report linked to the source repository, the build commit, the SBOM, and every artifact ever signed by that environment.