Reachability inside a single service is a tractable problem. Reachability across a monorepo with 40 packages or a microservices fleet with 200 services is a different engineering problem. Each has specific failure modes that defeat naive analysis. Each is solvable with patterns that have crystallised across customer deployments. The architectural choice between monorepo and microservices does not eliminate the reachability problem — it changes its shape, and platforms that handle one shape sometimes don't handle the other.
Monorepo-specific reachability
Three properties of monorepos that affect reachability:
- Shared internal packages. A vulnerability in internal/auth reaches every service that imports it. Reachability has to compute per-service exposure, not per-package.
- Build-graph variance. Different services build with different targets, tags, and dependency sets, so each service has a different graph. Reachability per service requires constructing each per-service graph correctly.
- Test-vs-production code separation. Vulnerabilities reachable only from test code should be classified differently from those reachable in production.
A monorepo-aware reachability platform handles all three. A naive one treats the monorepo as a single graph and produces over-broad findings.
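To make the per-service requirement concrete, here is a minimal sketch in Go of computing per-service exposure from a package import graph. The package names and the import table are hypothetical; a real implementation would derive the graph from the build system rather than hard-coding it.

```go
// Minimal sketch: given a package import graph, compute which services
// in a monorepo can reach a vulnerable internal package. The graph
// below is a hypothetical example, not a real build output.
package main

import "fmt"

// imports maps each package to the packages it imports.
var imports = map[string][]string{
	"services/billing": {"internal/auth", "internal/db"},
	"services/catalog": {"internal/db"},
	"internal/auth":    {"internal/db"},
	"internal/db":      {},
}

// reaches reports whether target is reachable from root via imports.
func reaches(root, target string, seen map[string]bool) bool {
	if root == target {
		return true
	}
	if seen[root] {
		return false
	}
	seen[root] = true
	for _, dep := range imports[root] {
		if reaches(dep, target, seen) {
			return true
		}
	}
	return false
}

func main() {
	vulnerable := "internal/auth"
	for _, svc := range []string{"services/billing", "services/catalog"} {
		exposed := reaches(svc, vulnerable, map[string]bool{})
		fmt.Printf("%s exposed to %s: %v\n", svc, vulnerable, exposed)
	}
}
```

Run against this example, billing is exposed and catalog is not; a naive per-package view would flag both, which is exactly the over-broad behaviour described above.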
Microservices-specific reachability
Three properties of microservices that affect reachability:
- Cross-service call graph. Service A calls Service B over HTTP; the call graph spans network boundaries.
- Asynchronous flows. Message queues, event streams, scheduled jobs — taint can flow without a direct call.
- Per-service deployment cadence. Services deploy independently; reachability per service must be current per service.
Cross-service reachability requires either: (a) treating each service in isolation and accepting incomplete coverage, or (b) modelling inter-service flows explicitly. Modern platforms increasingly do (b).
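A sketch of what option (b) means mechanically, assuming the platform has already extracted in-process edges statically and network edges from client/handler pairs: taint propagation simply runs over the combined graph. Every node and edge name below is hypothetical.

```go
// Minimal sketch of option (b): extend the call graph with explicit
// network edges so taint can cross service boundaries. Node names,
// endpoints, and edges are hypothetical.
package main

import "fmt"

// edge links a caller node to a callee node; some edges are
// in-process calls, others are network hops.
type edge struct{ from, to string }

var edges = []edge{
	{"svc-a.handler", "svc-a.httpClient"},       // in-process
	{"svc-a.httpClient", "svc-b./orders"},       // HTTP hop
	{"svc-b./orders", "svc-b.vulnerableParser"}, // in-process
}

// tainted walks the combined graph from a source to a fixed point,
// collecting every node reachable through either kind of edge.
func tainted(source string) map[string]bool {
	reached := map[string]bool{source: true}
	for changed := true; changed; {
		changed = false
		for _, e := range edges {
			if reached[e.from] && !reached[e.to] {
				reached[e.to] = true
				changed = true
			}
		}
	}
	return reached
}

func main() {
	for node := range tainted("svc-a.handler") {
		fmt.Println("reachable:", node)
	}
}
```

With the HTTP hop removed (option (a), per-service isolation), svc-b.vulnerableParser never appears in the result: that is the coverage gap the two options trade off.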
What inter-service modelling looks like
Three levels of fidelity:
1. Service-registry awareness. The platform knows which services exist and how they relate. Cross-service findings are at least flagged.
2. Call-graph linking via HTTP/gRPC schemas. OpenAPI specs, gRPC protobufs, and similar contracts are parsed to link calls across service boundaries.
3. Runtime observation augmentation. Traces from production tell the analyser which calls actually happen, which closes the gap that static analysis can't.
Most production deployments use level 2 with optional level 3.
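As an illustration of level 2, the sketch below links an outbound HTTP call observed in one service to a handler declared in another service's API surface. The route table stands in for a parsed OpenAPI document, and the matching rules are deliberately simplified assumptions.

```go
// Minimal sketch of level 2 linking: resolve a client's outbound call
// against another service's declared routes (as would be parsed from
// an OpenAPI spec) to produce a cross-service call-graph edge.
package main

import (
	"fmt"
	"strings"
)

// declared maps "METHOD /path" templates (hypothetically taken from
// svc-b's OpenAPI spec) to the handler that implements them.
var declared = map[string]string{
	"GET /orders/{id}": "svc-b.GetOrder",
	"POST /orders":     "svc-b.CreateOrder",
}

// match resolves an observed outbound call to a declared handler,
// treating {param} segments as wildcards.
func match(method, path string) (string, bool) {
	for tmpl, handler := range declared {
		parts := strings.SplitN(tmpl, " ", 2)
		if parts[0] != method {
			continue
		}
		if segmentsMatch(strings.Split(parts[1], "/"), strings.Split(path, "/")) {
			return handler, true
		}
	}
	return "", false
}

func segmentsMatch(tmpl, actual []string) bool {
	if len(tmpl) != len(actual) {
		return false
	}
	for i := range tmpl {
		if strings.HasPrefix(tmpl[i], "{") {
			continue // path parameter matches any segment
		}
		if tmpl[i] != actual[i] {
			return false
		}
	}
	return true
}

func main() {
	// svc-a's static analysis found an outbound GET; link it to svc-b.
	if handler, ok := match("GET", "/orders/42"); ok {
		fmt.Println("edge: svc-a.fetchOrder ->", handler)
	}
}
```

Level 3 then prunes or confirms these statically inferred edges using production traces, which is why it is an augmentation rather than a replacement.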
Common failure modes
Three that show up in customer environments:
- Incomplete cross-package edges in monorepos. A platform that doesn't understand the monorepo's build conventions misses internal-package edges. Findings come back as "package not used" when the package is heavily used by a different service.
- Missing cross-service edges in microservices. A platform without service-registry integration doesn't see HTTP-bridged taint flows. A finding in Service A that propagates to Service B is invisible.
- Test-code conflation. Reachability that doesn't distinguish test from production code over-counts findings that only matter in CI.
Each failure mode produces measurable noise. Each is fixable with the right architectural fit.
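To show what fixing the third failure mode might look like mechanically, here is a sketch that uses Go's file-naming conventions as the test/production classifier. Real platforms would use build-system metadata instead, but the downgrade logic has the same shape; the rules and function names here are illustrative.

```go
// Minimal sketch of test/production separation: a finding is
// downgraded when every call path into the vulnerable symbol starts
// in test-only code. Classification rules are simplified assumptions.
package main

import (
	"fmt"
	"strings"
)

// isTestOnly reports whether a source file belongs to test code,
// using Go convention: _test.go files and anything under testdata/.
func isTestOnly(path string) bool {
	return strings.HasSuffix(path, "_test.go") ||
		strings.Contains(path, "/testdata/")
}

// classify downgrades a finding when all of its entry points are
// test-only files.
func classify(entryFiles []string) string {
	for _, f := range entryFiles {
		if !isTestOnly(f) {
			return "production-reachable"
		}
	}
	return "test-only"
}

func main() {
	fmt.Println(classify([]string{"pkg/auth/token_test.go"}))           // test-only
	fmt.Println(classify([]string{"cmd/api/main.go", "pkg/x_test.go"})) // production-reachable
}
```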
How to evaluate platform fit
Three concrete checks:
- Show the platform a real monorepo with 20+ packages. Does it produce per-service reachability or only per-package?
- Show it a 5-service microservices fleet with HTTP and message-queue links. Does it detect cross-service taint?
- Inject a test-only vulnerability. Is it correctly classified as test-only?
The answers separate platforms that handle modern architectures from platforms that pretend to.
How Safeguard Helps
Safeguard's reachability handles monorepos with build-aware per-service graph construction, and microservices with service-registry integration plus optional runtime augmentation. Test/production code separation is configurable per tenant. For organisations running anything more complex than a single service, the analysis matches the architecture's topology rather than fighting it.