Best Practices

On-Prem to Cloud Supply Chain Continuity

A year inside a financial services cloud migration, and how to keep your software supply chain intact when everything else about the environment changes.

Shadab Khan
Senior Security Engineer
7 min read

The first conversation I had with the client, in February 2024, was about a yum repository. They had been running a private yum mirror in their primary data center since 2014. It was the source of truth for every RPM installed on every Linux server in their infrastructure. It had been air-gapped for the last six years for compliance reasons. Packages got in through a signed, reviewed, manual process. Packages got out onto servers through an equally reviewed, equally manual process. The chain of custody was unbroken, documented, and audited once a quarter by an external firm.

They were about to move to AWS. The question on the table was simple to ask and hard to answer: what does "the same supply chain" mean when the substrate underneath it changes from your own physical hardware to someone else's virtualized infrastructure? This post is about what we learned over the next eight months answering that question.

The Illusion of "Lift and Shift"

The client had been told by multiple consultants that the migration could be a "lift and shift." They meant it in the marketing sense -- move your VMs to EC2 with minimal changes. From a supply chain perspective, a true lift and shift is impossible. The moment you stop installing packages from a machine in a rack you own and start installing them from a machine in a rack someone else owns, you have made a supply chain decision. It might be a good one. It is not a non-decision.

Consider what changes in a lift and shift. Your package repository might still be "yours," but it now runs on EC2 storage that Amazon can theoretically access. Your image build process still runs your scripts, but the VM images they produce are now AMIs sitting in an AWS-managed registry. Your network boundary is now a VPC with rules maintained by someone from your ops team in a web console instead of a firewall team with a change board. Every one of these differences is a potential gap in the supply chain story you tell auditors.

We spent the first six weeks mapping the existing supply chain in painful detail. We documented every place a package, a binary, or a configuration entered the environment. For each entry point, we wrote down how trust was established today -- who signed it, who reviewed it, who approved it -- and how trust would be established in the cloud-native equivalent. The document was 63 pages. About 20 of those pages described controls that had no direct cloud equivalent and would have to be rebuilt from primitives.

Signing Keys and the Continuity Problem

The hardest single problem in the migration was signing keys. The on-prem environment used a hardware security module in the primary data center to sign every internal package. The private key had never left the HSM. It had signed approximately 180,000 packages over a decade. Every server in the environment trusted this key and no other.

When you move to the cloud, you have options for where the signing key lives. You can use AWS KMS with a customer-managed key. You can use AWS CloudHSM, which gives you a dedicated HSM. You can leave the key in your on-prem HSM and access it from the cloud over a VPN. Each of these changes the trust model in ways that matter to regulators.

We ended up using CloudHSM and migrated the signing key in an eight-hour ceremony, with an auditor on a video call the entire time. The old HSM was kept online and read-only for a year afterward so that old signatures could still be verified. The new HSM was configured with a different key identifier but the same trust chain. We wrote tooling to re-sign the 200 most critical packages with both keys during the transition, so that systems could verify either signature during the cutover window.
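The cutover policy itself is simple to express. Here is a minimal sketch of the "trust either key during the window" rule; the fingerprint values and the helper function are placeholders for illustration, not the client's actual verification tooling:

```python
# Placeholder fingerprints standing in for the real key identifiers.
OLD_KEY_FPR = "FPR-ONPREM-HSM-2014"   # legacy on-prem HSM signing key
NEW_KEY_FPR = "FPR-CLOUDHSM-2024"     # new CloudHSM signing key

# During the cutover window, both keys are trusted. After the window
# closes, this set shrinks to just the CloudHSM key.
TRUSTED_DURING_CUTOVER = frozenset({OLD_KEY_FPR, NEW_KEY_FPR})

def signature_accepted(signer_fingerprints, trusted=TRUSTED_DURING_CUTOVER):
    """Accept a package if any attached signature comes from a trusted key."""
    return any(fpr in trusted for fpr in signer_fingerprints)
```

The point of encoding the window as a set rather than an if/else is that shrinking trust after the cutover is a one-line change, and the change itself is diffable and reviewable.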

This is the kind of detail that nobody in the cloud vendor marketing deck prepares you for. It is also the kind of detail that determines whether your migration is actually compliant with the regulations you were compliant with last year.

SBOM History Is a Compliance Artifact

The client had been generating SBOMs for their application releases since 2021, stored on a NAS in the data center. The SBOM archive represented three years of provenance for production deployments. During an audit, the auditors could request the SBOM for any historical release going back to the policy's inception.

When you move to the cloud, you have to decide what happens to this archive. Option one is to copy it. Option two is to leave it where it is and keep the NAS online for reference. Option three, which I have seen organizations do and then regret, is to declare that the old archive is "legacy" and focus on the cloud-generated SBOMs going forward. Option three is what fails audits.

We did option one, but it was not a simple copy. The SBOMs had been generated with a tool that emitted a custom XML format. Moving to the cloud was the moment to normalize everything to CycloneDX. We wrote a converter, ran it across the three-year archive, verified the output against the originals on a sample of 500 releases, and put both versions into an S3 bucket with object lock enabled for the regulatory retention period. The cost of the S3 storage for 11 million SBOM files was about $180 per month. That is a rounding error against the cost of failing a SOX audit.
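The conversion step looks roughly like the sketch below. The `<sbom>`/`<component>` element names are an assumption about what a legacy custom XML format might look like, not the client's real schema; the output is a minimal CycloneDX 1.4 document:

```python
import json
import xml.etree.ElementTree as ET

# Hypothetical legacy SBOM format, standing in for the client's custom XML.
LEGACY_XML = """\
<sbom release="payments-2.3.1">
  <component name="openssl" version="1.1.1k"/>
  <component name="log4j-core" version="2.17.1"/>
</sbom>"""

def legacy_to_cyclonedx(xml_text):
    """Normalize a legacy custom-XML SBOM into minimal CycloneDX JSON."""
    root = ET.fromstring(xml_text)
    return {
        "bomFormat": "CycloneDX",
        "specVersion": "1.4",
        "metadata": {
            "component": {"name": root.get("release"), "type": "application"}
        },
        "components": [
            {"type": "library", "name": c.get("name"), "version": c.get("version")}
            for c in root.findall("component")
        ],
    }

bom = legacy_to_cyclonedx(LEGACY_XML)
print(json.dumps(bom, indent=2))
```

The real converter had to handle a decade of format drift, which is why we verified a 500-release sample against the originals rather than trusting the happy path. Writing both the original XML and the converted JSON to the locked S3 bucket meant auditors could always fall back to the source artifact.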

Network-Sourced Dependencies Become More Dangerous

On-prem, the application builds pulled packages from exactly two sources: the internal yum mirror and the internal Maven proxy. Both were air-gapped. The blast radius of a compromised upstream was limited by the manual review process for getting anything into the internal mirror.

In the cloud, there is a temptation to let build pipelines pull from public registries directly. It is faster. It is simpler. It is also a dramatic expansion of the attack surface. We spent a lot of time convincing the platform team that the internal mirror concept needed to move with them, not be abandoned.

We ended up with a cloud-native equivalent: Artifactory deployed in the VPC, configured with the same approval workflows as the on-prem mirror, but with a more modern interface and better integration with the rest of the stack. Every build could reach only Artifactory, enforced by VPC egress rules. A build that tried to reach pypi.org directly failed at the network layer. This was a deliberate choice to preserve the air-gap property -- even at the cost of making first-time package additions slower than most teams wanted.
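Alongside the network-layer enforcement, build images carried client-side configuration pointing every resolver at the internal mirror. A sketch for the pip case, with a placeholder Artifactory hostname and repository path (the real values were specific to the client's environment):

```python
# Placeholder URL for an internal Artifactory PyPI remote with an
# approval workflow in front of it. Not a real host.
ARTIFACTORY_PYPI = (
    "https://artifactory.internal.example/artifactory/api/pypi/pypi-approved/simple"
)

def render_pip_conf(index_url=ARTIFACTORY_PYPI):
    """Render a pip.conf that points all installs at the internal mirror."""
    return f"[global]\nindex-url = {index_url}\n"

conf = render_pip_conf()
print(conf)
```

The client-side config is belt-and-braces: even if an egress rule were misconfigured, builds would still resolve against the approved mirror by default, and the network rule would still catch anything that overrode the config.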

Provenance Chains That Survive the Move

The final big concern was provenance continuity for running workloads. A server in the data center had a clear provenance chain: it was built from a gold image, the gold image was built from base packages, the base packages came from the internal mirror, the internal mirror packages were signed by a known key. Every step was recorded in a CMDB.

The cloud equivalent had to preserve this chain. We used AMI tags and AWS Systems Manager inventory to record the same data, with automation that pulled the SBOM, signing attestation, and build pipeline reference for every AMI launch. The CMDB kept its role as the source of truth, updated from cloud events via EventBridge.
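To make that concrete, here is an illustrative sketch of the provenance tags attached to each AMI. The tag keys, metadata fields, and S3 paths are assumptions about one reasonable implementation, not the client's exact automation:

```python
def build_provenance_tags(meta):
    """Turn build-pipeline metadata into EC2 tag dicts for an AMI.

    Tag keys and metadata fields are illustrative placeholders.
    """
    return [
        {"Key": "provenance:sbom-uri", "Value": meta["sbom_uri"]},
        {"Key": "provenance:signing-attestation", "Value": meta["attestation_uri"]},
        {"Key": "provenance:pipeline-run", "Value": meta["pipeline_run"]},
    ]

tags = build_provenance_tags({
    "sbom_uri": "s3://sbom-archive/amis/ami-0abc.json",
    "attestation_uri": "s3://attestations/amis/ami-0abc.sig",
    "pipeline_run": "jenkins/build-gold-image/1482",
})

# In the real pipeline these would be applied at bake time, e.g.:
#   boto3.client("ec2").create_tags(Resources=[ami_id], Tags=tags)
# and Systems Manager inventory plus EventBridge would propagate the
# same identifiers into the CMDB on every launch.
```

Keeping the provenance references as tags on the AMI itself, rather than only in an external database, means the chain survives even if a launch event is missed: you can always walk from a running instance back to its image and from the image tags to the SBOM and attestation.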

It was more moving parts. It was also functionally equivalent to what had existed before. That equivalence was what let the external auditors sign off on the migration without requiring a full re-certification of the compliance posture.

How Safeguard Helps

Safeguard is designed for organizations whose supply chain spans both on-premises and cloud environments during a multi-year migration. The platform ingests SBOMs from legacy build systems and cloud-native pipelines into a single inventory, so auditors and security teams can reason about the entire estate without switching tools. Historical SBOM archives can be normalized to CycloneDX and retained with cryptographic integrity, preserving the audit trail across the migration boundary. Policy gates enforce consistent signing, provenance, and mirror-origin requirements regardless of whether a workload runs in a datacenter or an AWS account. For regulated industries facing multi-year cloud transitions, Safeguard is the continuity layer that keeps the supply chain story coherent from one environment to the next.
