A monolith has one SBOM. A microservices architecture has dozens or hundreds, each changing at its own pace, maintained by different teams, deployed on different schedules. The SBOM problem in microservices is not generation -- it's aggregation, correlation, and making the data actionable when a CVE drops at 2 AM.
The Scale Problem
Consider a mid-size microservices deployment: 60 services, each with its own container image, its own dependency tree, its own release cadence. That's 60 SBOMs to generate, store, and monitor. When a service deploys three times a week, you're producing a dozen or more SBOM versions per month for that one service alone.
Now multiply across 60 services with different deployment frequencies. You're looking at hundreds of SBOM versions per month. The question "do we use log4j?" becomes "do any of our 60 services, across any of their currently-deployed versions, include log4j?"
That question needs to be answerable in minutes, not hours.
Per-Service SBOM Generation
Each microservice in your architecture should generate an SBOM as part of its CI/CD pipeline. The pattern is straightforward:
```yaml
# Generic CI step for any microservice
generate-sbom:
  stage: build
  script:
    - docker build -t $SERVICE_NAME:$VERSION .
    - syft $SERVICE_NAME:$VERSION -o cyclonedx-json > sbom.json
    - |
      curl -X POST https://sbom-api.internal/upload \
        -H "Authorization: Bearer $TOKEN" \
        -F "service=$SERVICE_NAME" \
        -F "version=$VERSION" \
        -F "environment=$DEPLOY_ENV" \
        -F "file=@sbom.json"
```
Key decisions at the service level:
- When to generate: at build time, after the container image is finalized. Not from source code alone, which misses OS packages in the base image.
- What format: pick one and standardize. CycloneDX JSON is the most widely supported for tooling integration.
- What to include: both application dependencies and OS packages from the base image. A Spring Boot service on eclipse-temurin:17-jre-alpine includes both Java JARs and Alpine APK packages.
Shared Base Images
Most microservice architectures standardize on a few base images. If your organization uses a custom base image, generate an SBOM for the base image itself and track it separately:
```bash
# Base image SBOM
syft myorg/base-java:17-v3 -o cyclonedx-json > base-java-17-v3-sbom.json
```
When the base image updates, every service built on it inherits new components. Your SBOM system should propagate this: "base image base-java:17-v4 updated OpenSSL from 3.0.8 to 3.0.10 -- the following 25 services will pick this up on their next build."
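That propagation step can be sketched in a few lines. This is a minimal illustration, assuming the platform team maintains a `service_base_images` mapping (e.g. parsed from each service's Dockerfile); the names here are hypothetical:

```python
# Sketch: when a base image updates, list the services that will inherit
# the change on their next build. `service_base_images` is an assumed
# mapping maintained by the platform team.
def services_affected_by(base_image, service_base_images):
    return sorted(svc for svc, base in service_base_images.items()
                  if base == base_image)

service_base_images = {
    "payments": "base-java:17-v3",
    "ledger": "base-java:17-v3",
    "frontend": "base-node:20-v1",
}
print(services_affected_by("base-java:17-v3", service_base_images))
# → ['ledger', 'payments']
```

Pair this with a diff of the old and new base-image SBOMs and you can emit exactly the "OpenSSL 3.0.8 → 3.0.10, 25 services affected" message described above.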
Kubernetes-Native SBOM Tracking
If you're running on Kubernetes, you can correlate SBOMs with what's actually deployed in each cluster.
Tracking Deployed Versions
```bash
# What's actually running?
kubectl get pods -A -o jsonpath='{range .items[*]}{.metadata.namespace}/{.metadata.name}: {range .spec.containers[*]}{.image}{" "}{end}{"\n"}{end}'
```
Map each running image to its SBOM. This gives you a real-time view of the aggregate dependency surface of your cluster.
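A rough sketch of that mapping, assuming the `kubectl` output above and an `sbom_index` dict keyed by image reference in your SBOM store (both names are illustrative). Any image running without a stored SBOM is a coverage gap:

```python
# Parse the jsonpath output above into the set of running images,
# then subtract the images we have SBOMs for.
def deployed_images(kubectl_output):
    images = set()
    for line in kubectl_output.strip().splitlines():
        _, _, image_list = line.partition(": ")
        images.update(image_list.split())
    return images

def images_without_sboms(kubectl_output, sbom_index):
    return sorted(deployed_images(kubectl_output) - sbom_index.keys())

output = "payments/payments-7d4f: myorg/payments:1.4.2 \nledger/ledger-9c2a: myorg/ledger:2.0.1 "
print(images_without_sboms(output, {"myorg/payments:1.4.2": "sbom-123"}))
# → ['myorg/ledger:2.0.1']
```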
Admission Controllers
Use Kubernetes admission controllers to enforce SBOM policies:
```yaml
# Example: reject deployments without SBOMs
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: sbom-required
webhooks:
  - name: sbom.security.internal
    admissionReviewVersions: ["v1"]
    sideEffects: None
    rules:
      - apiGroups: ["apps"]
        apiVersions: ["v1"]
        resources: ["deployments"]
        operations: ["CREATE", "UPDATE"]
    clientConfig:
      service:
        name: sbom-admission-webhook
        namespace: security
        path: /validate
```
The webhook checks whether an SBOM exists for the image being deployed. No SBOM, no deployment.
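The decision logic inside that webhook is simple. A minimal sketch, assuming a parsed AdmissionReview body and an assumed `sbom_exists` lookup against your SBOM store (the real handler would serve this over TLS behind the service referenced above):

```python
# Allow a Deployment only if every container image has a stored SBOM.
# `sbom_exists` is an assumed callable that queries the SBOM store.
def review(admission_review, sbom_exists):
    request = admission_review["request"]
    pod_spec = request["object"]["spec"]["template"]["spec"]
    missing = [c["image"] for c in pod_spec.get("containers", [])
               if not sbom_exists(c["image"])]
    response = {"uid": request["uid"], "allowed": not missing}
    if missing:
        response["status"] = {"message": "no SBOM on file for: " + ", ".join(missing)}
    return {"apiVersion": "admission.k8s.io/v1",
            "kind": "AdmissionReview",
            "response": response}
```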
Runtime Verification with KubeClarity
KubeClarity (from Cisco/OpenClarity) scans running Kubernetes clusters and generates SBOMs for deployed workloads:
```bash
# Install KubeClarity
helm repo add kubeclarity https://openclarity.github.io/kubeclarity
helm install kubeclarity kubeclarity/kubeclarity -n kubeclarity --create-namespace

# It scans running pods and generates SBOMs automatically
```
This is useful as a verification layer: compare the SBOMs you generated at build time with what KubeClarity finds at runtime.
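That comparison can be sketched as a diff keyed on package URLs (purls). Components the runtime scan finds that the build-time SBOM lacks are worth investigating: they were not in the image you think you shipped.

```python
# Sketch: diff a build-time SBOM against a runtime scan, by purl.
# Both arguments are parsed CycloneDX JSON documents.
def sbom_diff(build_sbom, runtime_sbom):
    build = {c["purl"] for c in build_sbom["components"] if c.get("purl")}
    runtime = {c["purl"] for c in runtime_sbom["components"] if c.get("purl")}
    return {"runtime_only": sorted(runtime - build),
            "build_only": sorted(build - runtime)}
```

A non-empty `runtime_only` list can mean drift (something modified the container after build) or simply a scanner cataloging packages your build-time tool skipped; either way it deserves a look.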
Aggregated Vulnerability View
Individual service SBOMs answer "what's in service X?" The aggregated view answers:
- "Which services are affected by CVE-2024-XXXX?" -- search all current SBOMs for the affected component
- "What's our most common vulnerable dependency?" -- aggregate across all services
- "Which team owns the most unpatched CVEs?" -- map services to teams, SBOMs to CVEs
- "What's the total unique dependency count across our platform?" -- deduplicate components across services
Building this aggregation layer is where most teams either invest in a platform or build custom tooling.
DIY Aggregation
A minimal approach using a database:
```python
# Ingest SBOMs into a queryable store (sqlite3 sketch;
# is_currently_deployed is assumed to check your deployment records)
import sqlite3

db = sqlite3.connect("sboms.db")
db.execute("""CREATE TABLE IF NOT EXISTS components (
    service TEXT, service_version TEXT,
    component_name TEXT, component_version TEXT,
    purl TEXT, deployed INTEGER)""")

def ingest(sbom):
    service = sbom["metadata"]["service_name"]
    version = sbom["metadata"]["version"]
    for component in sbom["components"]:
        db.execute(
            "INSERT INTO components VALUES (?, ?, ?, ?, ?, ?)",
            (service, version, component["name"], component["version"],
             component.get("purl"),
             is_currently_deployed(service, version)))
    db.commit()

# Query: which deployed services use log4j-core < 2.17.1?
def version_key(v):
    # naive numeric comparison; real code needs per-ecosystem version rules
    return tuple(int(p) for p in v.split(".") if p.isdigit())

rows = db.execute(
    "SELECT DISTINCT service, component_version FROM components "
    "WHERE component_name = 'log4j-core' AND deployed = 1").fetchall()
affected = [svc for svc, ver in rows if version_key(ver) < version_key("2.17.1")]
```
This works for 20 services. At 200 services with continuous deployments, you need purpose-built tooling.
Multi-Team Workflows
In a microservices architecture, different teams own different services. SBOM responsibilities need to follow ownership:
Service teams are responsible for:
- Generating SBOMs in their CI/CD pipelines
- Remedying vulnerabilities in their dependencies
- Updating base images when new versions are released
Platform team is responsible for:
- Providing base images with SBOMs
- Operating the SBOM storage and aggregation infrastructure
- Building shared CI/CD templates that include SBOM generation
Security team is responsible for:
- Monitoring aggregated SBOMs against vulnerability feeds
- Setting policies (e.g., "no critical CVEs older than 30 days in production")
- Escalating unresolved vulnerabilities to service teams
Governance Without Bottlenecks
The goal is distributed generation with centralized visibility. Each team pushes SBOMs as part of their normal workflow. The security team monitors the aggregate. Nobody blocks on a manual review.
```
Service A CI/CD ──┐
Service B CI/CD ──┼──> SBOM Store ──> Aggregation ──> Dashboard
Service C CI/CD ──┘                        │
                                    Vuln Matching ──> Alerts / Tickets
```
Service Mesh Considerations
If you're running a service mesh (Istio, Linkerd), the mesh components themselves have dependencies that need tracking. The Envoy proxy sidecar in every pod, the control plane components, the CRDs -- all of these are software with their own vulnerability surface.
Generate SBOMs for mesh components and treat them like shared infrastructure:
```bash
# SBOM for Istio sidecar proxy
syft istio/proxyv2:1.20.0 -o cyclonedx-json > istio-proxy-sbom.json
```
A vulnerability in the Envoy proxy affects every service in your mesh. That's a blast radius worth tracking.
How Safeguard.sh Helps
Safeguard is built for the multi-service, multi-team reality of microservices. Each service pushes its SBOM through CI/CD integration, and Safeguard provides the aggregated view across your entire architecture. Search for a vulnerable component across all services instantly. Track remediation progress by team. Set policies that flag deployments with unresolved critical CVEs. The platform handles the storage, deduplication, vulnerability matching, and alerting that would otherwise require a dedicated internal tooling effort.