CMMC Level 3 Software Supply Chain Checklist 2026

A senior engineer's CMMC Level 3 checklist focused on software supply chain: SBOM, SC-SR controls, SSP evidence, and the operational gaps most defense contractors still have.

Shadab Khan
Security Engineer
8 min read

CMMC 2.0 Level 3 is the most stringent tier of the Cybersecurity Maturity Model Certification program, required for defense contractors handling Controlled Unclassified Information (CUI) at the highest sensitivity tiers. The software supply chain piece of Level 3 — pulling from NIST SP 800-171 with the SP 800-172 enhancements and the newer supply chain control set — is where most contractors still have the largest gap. I've walked through dozens of pre-assessment reviews, and the failure modes are consistent: strong paper policies, weak operational evidence, and no continuous monitoring of the third-party code actually running on CUI systems.

This checklist focuses on the operational side. If you already have an SSP, a POA&M process, and a SPRS score, you know the compliance framework. What you probably need is a concrete set of technical controls and evidence patterns that survive a C3PAO assessment.

Why is Level 3 harder than Level 2 on the software supply chain?

Level 2 requires you to implement NIST SP 800-171 controls. Level 3 adds the enhanced set from SP 800-172 plus a supply chain risk management (SR) family drawn from SP 800-161, and those controls specifically target software you don't write yourself. The questions become: how do you know what's in your software, who wrote it, how it was built, and what would happen if any of those answers changed without your knowledge?

Answering those questions in an enterprise defense environment is structurally harder because your stack likely includes commercial software, open source dependencies, contractor-built applications, and government-furnished software — each with different supply chain visibility. A well-implemented SBOM program can answer those questions for your own code and your open source dependencies, but the commercial and GFE pieces require vendor-provided SBOMs plus contractual mechanisms for when those SBOMs are incomplete or missing.

The second source of difficulty is continuous monitoring. Level 3 assessors want evidence that your supply chain posture is monitored continuously, not snapshotted at the start of an assessment. A quarterly dependency scan doesn't meet the continuous monitoring bar; daily or per-build SBOM generation with drift detection does.
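Per-build drift detection can be surprisingly small. A minimal sketch, assuming CycloneDX JSON SBOMs (the `components` array is part of the CycloneDX JSON format; the function names here are illustrative):

```python
# Minimal sketch of per-build SBOM drift detection between two builds,
# assuming CycloneDX JSON SBOMs. Function names are illustrative.
import json


def components(sbom_path: str) -> set[tuple[str, str]]:
    """Extract (name, version) pairs from a CycloneDX JSON SBOM."""
    with open(sbom_path) as f:
        doc = json.load(f)
    return {(c["name"], c.get("version", "")) for c in doc.get("components", [])}


def drift(previous_sbom: str, current_sbom: str) -> dict[str, set]:
    """Report components added or removed since the previous build."""
    prev, curr = components(previous_sbom), components(current_sbom)
    return {"added": curr - prev, "removed": prev - curr}
```

Run this on every build and alert on any non-empty diff that wasn't introduced by a reviewed dependency change; the archived diffs double as continuous-monitoring evidence.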

What SR controls apply specifically to software?

From the SR family, the controls most relevant to software are SR-3 (supply chain controls and processes), SR-4 (provenance), SR-5 (acquisition strategies and tools), SR-6 (supplier assessments and reviews), SR-10 (inspection of systems or components), and SR-11 (component authenticity). Each of these has a software-specific implementation that most contractors handle weakly.

SR-4 provenance is the one that catches teams off guard. It asks you to maintain provenance information for systems, components, and services — including open source. Most organizations can identify that they use openssl but can't produce evidence of which upstream source was used, which commit, what patches were applied, who built it, and what integrity checks were performed. A mature SBOM with build provenance (in-toto attestations or SLSA-format provenance records) closes this gap; a dependency manifest alone doesn't.
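A provenance gate doesn't need to be elaborate to produce SR-4 evidence. A hedged sketch that checks a SLSA v0.2-style in-toto provenance statement for the facts the control asks about — builder identity, source materials, and digests (field names follow the SLSA v0.2 predicate; adjust for whatever provenance version your build system actually emits):

```python
# Hedged sketch: verify a SLSA v0.2-style provenance statement records who
# built the artifact, from what sources, and with what integrity digests.
# Field names follow the SLSA v0.2 predicate layout; adapt as needed.
def check_provenance(stmt: dict) -> list[str]:
    """Return a list of SR-4 gaps; an empty list means the record is complete."""
    gaps = []
    pred = stmt.get("predicate", {})
    if not pred.get("builder", {}).get("id"):
        gaps.append("no builder identity")
    materials = pred.get("materials", [])
    if not materials:
        gaps.append("no source materials recorded")
    for m in materials:
        if not m.get("digest"):
            gaps.append(f"material {m.get('uri', '?')} has no digest")
    return gaps
```

Failing the build when this returns gaps turns provenance from a document you write into evidence you generate.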

SR-11 component authenticity asks for anti-counterfeit processes. In software terms that's signature verification on every component, hash pinning in lockfiles, and denial of unsigned or unverifiable artifacts at the build gate. Level 3 assessors will ask to see the policy and the enforcement — not just the intent.
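The hash-pinning half of that enforcement is a few lines. A sketch of a deny-by-default artifact gate that compares a downloaded component against the digest pinned in your lockfile (the function name is illustrative):

```python
# Sketch of an SR-11 build-gate check: an artifact is admitted only if its
# SHA-256 digest matches the pin recorded in the lockfile. Deny by default.
import hashlib


def verify_pin(artifact_path: str, pinned_sha256: str) -> bool:
    """True only when the artifact's digest matches its lockfile pin exactly."""
    h = hashlib.sha256()
    with open(artifact_path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest() == pinned_sha256.lower().removeprefix("sha256:")
```

The evidence an assessor samples is the gate's decision log: every allow and every deny, with digest, timestamp, and build ID.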

What SBOM evidence does a C3PAO actually want to see?

A C3PAO wants to see three things: SBOMs that are current, SBOMs that are accurate, and SBOMs that are operationalized. Currency means timestamped generation on every build, with archive storage that permits reconstruction of any shipped artifact's bill of materials. Accuracy means the SBOM reflects what's in the binary, not just what the build system declared — verified through secondary analysis (static binary scanning, runtime attestation, or third-party validation). Operationalized means the SBOM is wired into your vulnerability management, incident response, and supplier review processes, with logs showing it's actually being used.

The format expectations have settled on SPDX or CycloneDX, with either being acceptable as long as you're consistent. Provide JSON output, include NTIA minimum elements (supplier, component, version, dependency relationships, author, timestamp, hash), and tag each SBOM with the commit SHA and build ID it corresponds to.
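An NTIA minimum-elements check can run as a CI gate on every generated SBOM. A sketch against CycloneDX JSON field names (`metadata.timestamp`, `metadata.tools`/`authors`, per-component `supplier`, `version`, `hashes`); adjust the field paths if you standardize on SPDX instead:

```python
# Sketch of an NTIA minimum-elements gate for a CycloneDX JSON SBOM.
# Field names follow the CycloneDX JSON layout; SPDX paths differ.
def ntia_gaps(sbom: dict) -> list[str]:
    """Return missing NTIA minimum elements; empty list means the SBOM passes."""
    gaps = []
    meta = sbom.get("metadata", {})
    if not meta.get("timestamp"):
        gaps.append("document: missing timestamp")
    if not (meta.get("authors") or meta.get("tools")):
        gaps.append("document: missing author/tool")
    if "dependencies" not in sbom:
        gaps.append("document: missing dependency relationships")
    for c in sbom.get("components", []):
        ident = c.get("name", "?")
        for field in ("supplier", "name", "version", "hashes"):
            if not c.get(field):
                gaps.append(f"{ident}: missing {field}")
    return gaps
```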

For CUI-handling systems specifically, the SBOM itself is sometimes subject to handling restrictions depending on what the components reveal about your architecture. Store SBOMs in the same CUI-boundary storage you use for the underlying code, not in a lower-tier SaaS that doesn't have FedRAMP High or equivalent controls.

How do I handle vendor and third-party code I don't control?

Level 3 assessors expect you to have supplier risk processes that cover both direct suppliers (your commercial software vendors) and the broader upstream ecosystem (open source maintainers, distribution mirrors, package registries). Direct suppliers get contractual treatment — DFARS 252.204-7012 flow-downs, CMMC flow-downs where applicable, and explicit SBOM/provenance delivery requirements written into contracts.

For open source, the expectation shifts to operational controls. You need an internal mirror (or a trusted proxy) for the package registries you depend on, signature verification at the pull boundary, and a documented process for what happens when a critical dependency is compromised or abandoned. Assessors will ask how you'd respond to an xz-utils-style compromise, and they want an answer with specific named tools and owners, not a statement of intent.

TPRM integration is where the paper and the operations meet. Your third-party risk management program should be fed by your SBOM data so that a new CVE in a vendor product automatically triggers the TPRM workflow for that vendor. A TPRM program that operates on a separate data plane from your technical controls will produce inconsistencies your assessor will notice.
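Wiring the two planes together can start as a simple join between new CVE events, your SBOM inventory, and your supplier-of-record registry. A sketch with illustrative data shapes (the event structure and registry are assumptions, not a real TPRM API):

```python
# Sketch: feed SBOM data into TPRM so a new CVE in a shipped component
# automatically opens a review against the supplier on record.
# Data shapes here are illustrative, not a real TPRM API.
def tprm_events(new_cves: list[tuple[str, str]],
                sbom_components: list[dict],
                supplier_registry: dict[str, str]) -> list[dict]:
    """For each (cve_id, component_name) that matches a shipped component,
    emit a TPRM review event against that component's supplier of record."""
    events = []
    for cve_id, affected_name in new_cves:
        for comp in sbom_components:
            if comp["name"] == affected_name:
                events.append({
                    "cve": cve_id,
                    "component": comp["name"],
                    "supplier": supplier_registry.get(comp["name"], "UNKNOWN-SUPPLIER"),
                    "action": "open-tprm-review",
                })
    return events
```

An `UNKNOWN-SUPPLIER` event is itself a finding: a component in production with no supplier on record is exactly the inconsistency an assessor will flag.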

What does continuous monitoring look like for a supply chain?

Continuous monitoring for the software supply chain means collecting four signals continuously: new vulnerabilities in components you use, new versions or license changes for components you use, maintainer or registry events (ownership changes, yanked packages, typosquats targeting your packages), and runtime deviations (a production host pulling code from a source your SBOM doesn't list).

For each signal, have a defined response: who's notified, what the SLA is, what evidence is logged. Assessors will sample these signals and ask to see the end-to-end trail for specific events. "We got an alert, we investigated, here's the ticket, here's the remediation, here's the closeout" is the evidence pattern. An alert with no traceable response is worse than no alert at all.
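That signal-to-response mapping can live as code, which keeps the owner and SLA assignments versioned and auditable. A sketch with illustrative team names and SLAs (the routing table values are assumptions you'd replace with your own):

```python
# Sketch of a signal-to-response routing table kept as code. Owners and
# SLAs here are illustrative placeholders for your own assignments.
from datetime import datetime, timezone

ROUTES = {
    "new-cve":           {"owner": "vuln-mgmt",    "sla_hours": 72},
    "maintainer-change": {"owner": "supply-chain", "sla_hours": 24},
    "registry-yank":     {"owner": "supply-chain", "sla_hours": 24},
    "runtime-deviation": {"owner": "ir-oncall",    "sla_hours": 4},
}


def evidence_record(signal: str, detail: str) -> dict:
    """Open a timestamped, append-only evidence record for a monitoring signal.

    Closeout later links the ticket and remediation, completing the
    alert -> investigation -> remediation trail assessors sample.
    """
    route = ROUTES[signal]
    return {
        "signal": signal,
        "detail": detail,
        "owner": route["owner"],
        "sla_hours": route["sla_hours"],
        "detected_at": datetime.now(timezone.utc).isoformat(),
        "status": "open",
    }
```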

Continuous monitoring also needs to cover your CI/CD pipelines themselves. A pipeline that builds CUI-handling software must protect its own supply chain: signed commits, attested builds, hardened runners, protected artifact storage, and clear separation between the pipeline's identity and the artifacts it produces. SLSA Level 3 build integrity is the practical target here, and the gap analysis between your current pipeline and SLSA 3 is usually the fastest path to showing improvement.

What evidence patterns survive an assessment?

Evidence that survives an assessment is timestamped, immutable, and linked to the control it's meant to demonstrate. The pattern I recommend is a control-to-evidence matrix maintained as code, with each cell pointing to a generated artifact (a log query, a signed SBOM, a policy-as-code decision record) rather than a narrative document.

For each control in the SR and SA families, document: what the control requires, what system implements it, what evidence that system produces, and where that evidence is stored. An assessor asking to see compliance with SR-4 should get a URL or a CLI command, not an email thread. The SSP narrative stays high-level; the evidence is discoverable through the matrix.
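The matrix itself can be a small, versioned data structure. A sketch with hypothetical evidence commands and storage paths (none of the CLI names below are real tools; they stand in for whatever produces your actual evidence):

```python
# Sketch of a control-to-evidence matrix maintained as code. Every command
# and path below is a hypothetical placeholder for your real tooling.
MATRIX = {
    "SR-4": {
        "requires": "provenance for systems, components, and services",
        "implemented_by": "CI provenance attestation step",
        "evidence": "attest-fetch --build-id {build_id}",  # hypothetical CLI
    },
    "SR-11": {
        "requires": "component authenticity / anti-counterfeit checks",
        "implemented_by": "signature and hash-pin gate in the build pipeline",
        "evidence": "gate-log --build-id {build_id} --decisions",  # hypothetical CLI
    },
}


def evidence_pointer(control: str, **params) -> str:
    """Return the command an assessor runs to pull live evidence for a control."""
    return MATRIX[control]["evidence"].format(**params)
```

When an assessor asks about SR-4, `evidence_pointer("SR-4", build_id=...)` resolves to a runnable query instead of a narrative, which is the evidence pattern that survives sampling.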

Don't forget the human side of evidence. Training records, incident response exercises, tabletop outcomes, and supplier review meeting notes are all part of the package. Level 3 assessors look at whether the technical controls are backed by organizational practice, and pure technology evidence without operational records will raise questions.

How Safeguard.sh Helps

Safeguard.sh produces SPDX and CycloneDX SBOMs with in-toto provenance on every build and stores them inside your CUI boundary with immutable audit trails your C3PAO can sample directly. Griffin AI applies reachability analysis at 100-level depth so your SR control narratives can distinguish "this component is present" from "this component is executed on CUI data paths," a distinction that materially changes remediation priority under SP 800-172. Eagle monitors package registries, maintainer events, and CVE feeds continuously, pushing events into your TPRM workflow and providing the continuous monitoring evidence Level 3 demands. Container self-healing rebuilds and attests affected images when a supply chain event requires remediation, preserving SLSA-aligned provenance through the rotation. The result is a Level 3-ready supply chain posture built on operational controls, not paper commitments, with assessment evidence that maps one-to-one to the SR and SA control families.
