Best Practices

Monolith to Microservices: Supply Chain Changes

What really happens to your software supply chain when you decompose a monolith into services, and how to avoid trading one risk for forty new ones.

Nayan Dey
Senior Security Engineer
7 min read

In March 2024 I joined a working group at a mid-sized insurance company that had been running a single Ruby on Rails monolith for eleven years. The application had 480 Gemfile entries. It had 2.1 million lines of Ruby. It deployed twice a week, on Tuesdays and Thursdays, from a single main branch, to a single fleet of EC2 instances. It also had, according to the SBOM I generated on my first day, 3,812 transitive dependencies.
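
Generating that first number is the easy part. Here is a minimal sketch of the count, assuming a CycloneDX JSON SBOM produced by a tool like syft (the file name is illustrative):

    require "json"

    # Count every component recorded in a CycloneDX SBOM, e.g. one written
    # out by `syft dir:. -o cyclonedx-json`. The path is illustrative.
    sbom = JSON.parse(File.read("monolith.cdx.json"))
    puts sbom.fetch("components", []).length # => 3812 for this monolith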

The company had committed to a microservices migration the previous October, with a target of 40 services by the end of 2025. The engineering leadership team had read the usual articles about team autonomy, scaling bottlenecks, and deployment frequency. They had not spent as much time thinking about what happens to the supply chain when you go from one codebase to forty. That was the conversation I had been brought in to have.

The Counterintuitive Starting Observation

When you look at a monolith's dependency graph, you see a single tree with a clear root. There is one Gemfile.lock. There is one deployment. There is one set of credentials. When you scan for vulnerabilities, you get one report with a clear set of affected components.

When you look at a microservices dependency graph, you do not see forty small trees. You see something closer to a forest, except some trees share roots, many trees have copies of the same branches, and a surprising number of trees are secretly the same tree wearing different names. The dependency count does not divide by forty. It multiplies.

In this particular engagement, we built a proof of concept where we carved out the billing service first. The new billing service had 147 dependencies, which seemed reasonable. But those dependencies were not a clean subset of the monolith's 480. There were 32 gems the new service pulled in that the monolith had never used, because the extraction was also a rewrite and the new service used a different authentication library, a different database migration tool, and a different JSON serializer. Every new service was going to bring its own dependencies.
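
That drift is easy to measure once you have both lockfiles. A sketch using Bundler's lockfile parser (the repository paths are illustrative):

    require "bundler"

    # Which gems does the extracted service pull in that the monolith
    # never used? Compare the two lockfiles by gem name.
    def gem_names(lockfile_path)
      Bundler::LockfileParser.new(File.read(lockfile_path)).specs.map(&:name)
    end

    novel = gem_names("billing/Gemfile.lock") - gem_names("monolith/Gemfile.lock")
    puts novel.length # => 32 for our billing extraction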

Across the full 40-service target, we projected a total unique dependency count of around 6,400. That is roughly two-thirds more than the monolith's 3,812, for the same business functionality. This is not a bug. It is a straightforward consequence of duplicating platform concerns across services. Nobody had told the VP of Engineering this number. When I did, the migration timeline stretched by a quarter.
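
The projection itself was back-of-envelope arithmetic. This sketch reconstructs its shape with illustrative numbers; the per-service figure is an assumption chosen to show the mechanics, not the working group's actual model:

    # Back-of-envelope projection with illustrative numbers. Assume the 32
    # novel direct gems observed in the billing extraction expand to roughly
    # 65 novel entries once their own dependencies are counted.
    monolith_baseline = 3_812 # the monolith's transitive dependencies
    novel_per_service = 65    # assumed novel transitive deps per extraction
    services          = 40
    puts monolith_baseline + services * novel_per_service # => 6412, "around 6,400"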

The Duplication Tax

The single biggest supply chain change from a monolith to microservices is duplication. The monolith used one JSON library. The microservices used three, because Team A standardized on oj, Team B preferred Yajl for historical reasons, and Team C owned a newer service written in Go that used the standard library. When a vulnerability hit one of these, only about a third of the services were affected -- which sounds good, until you realize the other two thirds had their own unpatched CVEs in their own JSON libraries.

The monolith had a central HTTP client. The microservices had five different HTTP clients across the fleet: two different Ruby gems, the Go standard library, Python's requests, and, in one service, a hand-rolled Rust client that should never have been written.

Every duplication creates three problems. You have more things to patch. You have more things to monitor. And you have more ways the same vulnerability can hide -- because an advisory for "Faraday" does not match against "HTTParty" even if both are affected by the same underlying protocol flaw.
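
One mitigation is to track libraries by the concern they implement, not only by name, so that an advisory against one library prompts a review of its siblings. A toy sketch of the idea (the concern map is illustrative, and a real implementation would key on ecosystem and CVE data rather than bare gem names):

    # Toy concern map: when an advisory names one library, fan the review
    # out to every other library that implements the same concern.
    CONCERN_OF = {
      "faraday"   => :http_client,
      "httparty"  => :http_client,
      "oj"        => :json,
      "yajl-ruby" => :json,
    }

    def also_review(advisory_gem)
      concern = CONCERN_OF[advisory_gem]
      CONCERN_OF.select { |_, c| c == concern }.keys - [advisory_gem]
    end

    p also_review("faraday") # => ["httparty"]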

The way to contain this is through platform engineering. We established a rule that every service had to pick its libraries from an approved list of about fifty, roughly one per concern. The list was short enough to keep patched and broad enough that teams could actually build. Services that wanted to introduce a new library had to file an exception, which required a security review and a commitment to maintain it. This sounds bureaucratic. It is. It is also the only way to prevent the dependency count from exploding.
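
Enforcement is what keeps a rule like that honest. One way to gate it in CI, sketched with Bundler's lockfile parser (the file names are illustrative, not our actual pipeline):

    require "bundler"

    # CI gate sketch: fail the build if the lockfile contains any gem
    # that is not on the approved platform list.
    approved = File.readlines("approved-gems.txt", chomp: true)
    in_use   = Bundler::LockfileParser.new(File.read("Gemfile.lock"))
                                      .specs.map(&:name)

    unapproved = in_use - approved
    unless unapproved.empty?
      warn "Unapproved gems (file an exception): #{unapproved.join(', ')}"
      exit 1
    end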

Transit Security and the New Attack Surface

Monoliths talk to themselves through method calls. Microservices talk to each other over the network. Every network call is a new piece of attack surface. Every service-to-service authentication decision is a new thing to get wrong.

The billing service we carved out first needed to talk to the customer service, the policy service, the payments service, and the notifications service. That is four authentication flows, four authorization models, four sets of mutual TLS certificates, and four sets of timeouts to configure. Multiply by 40 services with an average of 6 peers and you have 240 inter-service connections, each of which is its own supply chain in a sense -- because each depends on the availability and trustworthiness of the peer.

The working group decided to adopt a service mesh on day one, specifically to avoid having every service team solve this problem from scratch. We chose mutual TLS by default, handled by the mesh, with identity issued from a central SPIFFE-compatible identity provider. Authorization went into a centralized OPA policy layer. This meant teams did not get to "just call the other service." They had to declare the intent, and the mesh enforced it.
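
The "declare the intent" part can stay very simple. A minimal sketch of the allow/deny logic in plain Ruby (the names and data are hypothetical; in the real setup the declarations fed the mesh and the OPA layer rather than application code):

    # Hypothetical intent declarations: each service lists the peers it
    # is allowed to call. Anything undeclared is denied by default.
    INTENTS = {
      "billing" => %w[customer policy payments notifications],
    }

    def call_allowed?(from, to)
      INTENTS.fetch(from, []).include?(to)
    end

    p call_allowed?("billing", "payments") # => true
    p call_allowed?("billing", "claims")   # => false: undeclared, denied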

This sounds like a lot of platform investment before writing a line of service code. It is. It is also non-negotiable. The teams that skip this step end up with hard-coded API keys in environment variables, mutual trust based on "this is my network," and no way to audit who is actually talking to whom.

SBOM Strategy Has to Change

The monolith's SBOM was a single document. It got generated weekly, reviewed, and filed. The microservices environment needs SBOMs per service, per version, stitched together into a product-level SBOM that represents the deployed system as a whole.

We ended up producing three levels of SBOM. The first was per-service, generated at build time, stored alongside the container image. The second was per-environment, assembled nightly from the actual running service versions in production. The third was per-customer-facing-product, which represents what a regulator or customer would actually want to see when they ask "what is in the thing you are selling me."

The per-environment SBOM is the one that ended up mattering most. It is the only document that answers the question "right now, in production, what versions of what are running." In a monolith this question is trivial. In a microservices world, with 40 services deploying independently, it is genuinely hard -- and it is exactly the question you need to answer when a new CVE drops and someone asks whether you are affected.
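
Conceptually, the nightly assembly is a merge keyed on what is actually running. A minimal sketch, assuming each build publishes a CycloneDX JSON SBOM at a predictable path and that the version map comes from the cluster (paths and versions are illustrative):

    require "json"

    # Nightly per-environment rollup: merge the per-service SBOMs for
    # whatever versions are running right now.
    running = { "billing" => "2.4.1", "policy" => "1.9.0" } # from the cluster

    components = running.flat_map do |service, version|
      JSON.parse(File.read("sboms/#{service}-#{version}.cdx.json"))
          .fetch("components", [])
    end

    # Dedupe on purl so a shared library appears once in the environment view.
    env_sbom = {
      "bomFormat"   => "CycloneDX",
      "specVersion" => "1.5",
      "components"  => components.uniq { |c| c["purl"] },
    }
    File.write("sboms/production.cdx.json", JSON.pretty_generate(env_sbom))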

How Safeguard Helps

Safeguard gives teams migrating from monolith to microservices a unified supply chain view across dozens or hundreds of service repositories. Per-service SBOMs roll up into environment-level and product-level SBOMs automatically, so you can answer "what is running in production" without manually stitching together forty artifacts. Duplicate dependency detection flags where teams have chosen different libraries for the same concern, making the platform engineering conversation data-driven rather than opinion-driven. Cross-service vulnerability impact analysis shows which services are affected by an advisory even when they use different library names for the same underlying component. For organizations decomposing legacy systems, Safeguard is the supply chain backbone that keeps the migration from becoming forty uncorrelated risk surfaces.
