I have spent the last year running the same conversation with CISOs at banks, health systems, SaaS companies, and defense contractors. The questions rhyme. The concerns are consistent. This FAQ collects the ones I hear most often, answered in the way I would answer them across a conference table — direct, opinionated, grounded in what is actually working in programs that ship.
What exactly is "software supply chain security" and how is it different from AppSec?
Software supply chain security is the discipline of securing everything your code depends on, is built with, and is delivered through — not just the code your team writes. AppSec asks "is our code safe"; supply chain security asks "is everything we inherit, compose, and ship also safe, and can we prove it."
The practical difference shows up in scope. AppSec teams historically covered first-party code: SAST on the repo, DAST on the running app, a penetration test before launch. Supply chain security extends that perimeter to third-party libraries, container base images, build pipelines, CI/CD credentials, package registries, signing keys, SBOM generation, vendor software ingestion, AI-generated code, and the provenance attestations that tie it all together. In a modern codebase, 90 to 97 percent of the bytes shipped to production are not written by your engineers. A program that only scans first-party code is securing the remaining 3 to 10 percent.
How should I size a supply chain security program?
Budget roughly 5 to 10 percent of your total security spend on supply chain, rising to 15 percent if you are a regulated software vendor selling to the federal government or healthcare. Headcount is usually one to two dedicated engineers per 500 developers, plus tooling budget.
The trap I see repeatedly is treating supply chain security as a line item inside AppSec rather than a distinct function with its own roadmap. If the same person is responsible for static analysis coverage and for SBOM governance, the SBOM work will never happen, because SAST findings are tangible and SBOM governance is not. Give it a named owner and a quarterly deliverable. Even a part-time owner with a focused mandate outperforms five people who work on it "when there is time."
Who should the function report to?
In most mid-size and large organizations, supply chain security reports to the CISO through the head of AppSec or product security. In highly regulated environments — defense, critical infrastructure, healthcare EHR vendors — I have seen it elevated to report directly to the CISO or to a dedicated VP of Secure Engineering.
The worst reporting line I encounter is placing supply chain security inside a GRC organization that has no engineering influence. SBOMs and SLSA attestations get generated, filed, and never enforced. Place the function where it has the authority to fail a build or block a release, or do not bother creating it.
What are the three controls that deliver the most risk reduction per dollar?
In my experience, ranked by impact:
- A reachability-aware dependency scanner wired into pull requests, not just scheduled scans. The shift from "we found 12,000 CVEs" to "43 of these are reachable from your entry points" collapses the remediation backlog by one to two orders of magnitude.
- SBOM generation at build time with ingestion into a central inventory — not PDF exports emailed to compliance. You cannot respond to a Log4Shell-class incident in hours without a queryable inventory, and you cannot build a queryable inventory without making SBOM generation a build-pipeline obligation.
- Provenance attestations (SLSA Level 2 or 3) for the artifacts you ship. This is the control that future-proofs you against the next XZ Utils-style backdoor, because it forces build integrity into the trust model rather than leaving it implicit.
Everything else — signing, policy-as-code, package proxy, typosquat detection — is additive, but the three above are the load-bearing ones.
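To make the reachability distinction concrete, here is a minimal Python sketch of the triage step. The `Finding` shape and the set of reachable symbols are illustrative assumptions, not any particular scanner's API — real tools derive the reachable set from a call-graph analysis of your entry points.

```python
# Hypothetical sketch of reachability triage. Names are illustrative.
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    package: str
    vulnerable_symbol: str  # the function/class the CVE actually applies to

def reachable_findings(findings, reachable_symbols):
    """Keep only findings whose vulnerable symbol is reachable from
    the application's entry points."""
    return [f for f in findings if f.vulnerable_symbol in reachable_symbols]

findings = [
    Finding("CVE-2021-44228", "log4j-core", "JndiLookup.lookup"),
    Finding("CVE-2020-0001", "unused-lib", "Unused.helper"),
]
reachable = {"JndiLookup.lookup"}  # assumed output of a call-graph analysis
print(len(reachable_findings(findings, reachable)))  # 1 of 2 survives triage
```

The same filter that turns 12,000 findings into 43 is just a set intersection once the call graph exists; the hard work is producing `reachable_symbols`, which is what reachability-aware scanners sell.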
Do I actually need SBOMs if my customers are not asking for them?
Yes, but not for the reason you think. SBOMs are an incident-response accelerant before they are a compliance artifact. When the next zero-day drops in a widely used library at 6 p.m. on a Friday, the teams that can answer "do we ship this, and where" in ten minutes have SBOMs. The teams that take two weeks do not.
The 2021 Log4Shell response was the last time most teams got away with spreadsheet archaeology. The 2024 XZ backdoor made the case sharper. The 2025 OpenSSL chain-of-trust events removed the excuse entirely. If your director of incident response has not asked for SBOMs yet, it is because they have not had to run the playbook under pressure recently.
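The Friday-night question reduces to a single query once the inventory exists. A sketch using an in-memory SQLite table as a stand-in for the real store — the schema and data are illustrative:

```python
# Illustrative "do we ship this, and where" query against a minimal
# SBOM inventory. Schema is a hypothetical example, not a product schema.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE sbom_components (service TEXT, package TEXT, version TEXT)")
db.executemany(
    "INSERT INTO sbom_components VALUES (?, ?, ?)",
    [
        ("payments-api", "log4j-core", "2.14.1"),
        ("billing-web", "log4j-core", "2.17.0"),
        ("search-svc", "lucene", "9.1.0"),
    ],
)

# The zero-day drops in log4j-core: which services ship it, at what version?
rows = db.execute(
    "SELECT service, version FROM sbom_components WHERE package = ?",
    ("log4j-core",),
).fetchall()
for service, version in rows:
    print(service, version)
```

The query takes seconds. Spreadsheet archaeology across fifty repos takes the two weeks.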
How do I handle AI-generated code and AI coding assistants in the supply chain model?
Treat AI-generated code as a new category of dependency with a different risk profile, not as first-party code. The developer did not write it, did not fully understand it, and often cannot explain why the assistant chose a specific library. That is structurally closer to an npm install than to a code review.
The controls that work are narrow and boring: require human authorship attestation on every commit, run reachability-aware SCA on every PR including AI-authored ones, and block the assistant from introducing new top-level dependencies without a justification. A sample policy snippet we ship:
```yaml
policy: ai_authored_changes
require:
  - human_reviewer: true
  - sca_scan: true
  - new_dependency_justification: required
block_on:
  - package_age_days: "< 14"
  - maintainer_count: "< 2"
  - typosquat_score: "> 0.6"
```
This does not solve the problem. It slows it down to the point that the rest of your pipeline can catch the bad outcomes.
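As an illustration of how the `block_on` rules might be enforced in CI, here is a hypothetical Python evaluator. The field names mirror the policy snippet, but the function itself is a sketch, not shipped code:

```python
# Hypothetical evaluator for the block_on rules. Thresholds match the
# sample policy; the data shape is an assumption for illustration.
def should_block(dep):
    """Return the first rule a new dependency trips, or None if it passes."""
    if dep["package_age_days"] < 14:
        return "package_age_days"
    if dep["maintainer_count"] < 2:
        return "maintainer_count"
    if dep["typosquat_score"] > 0.6:
        return "typosquat_score"
    return None

# A three-day-old package trips the age rule regardless of other signals.
fresh_pkg = {"package_age_days": 3, "maintainer_count": 5, "typosquat_score": 0.1}
print(should_block(fresh_pkg))  # package_age_days
```

Ordering matters only for which reason gets reported; any tripped rule blocks the merge.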
What should a board-level supply chain security metric look like?
One number, updated monthly, that correlates to risk: the median age of exploitable, reachable vulnerabilities in production. Not total CVE count, not scanner coverage, not "policies defined." Those metrics reward activity rather than outcomes.
Median age of reachable exploitable findings answers the only question a board cares about: how long is real risk sitting in our environment before we fix it. If that number is trending down, the program is working. If it is flat or rising, something upstream is broken regardless of how many findings the tool emitted.
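Computed naively, the metric is a few lines. A sketch assuming each finding records when it was first seen plus reachability and exploitability flags — the data shape is illustrative:

```python
# Sketch of the board metric: median age in days of findings that are
# both reachable and exploitable. Finding shape is a hypothetical example.
from datetime import date
from statistics import median

findings = [
    {"first_seen": date(2025, 1, 10), "reachable": True, "exploitable": True},
    {"first_seen": date(2025, 3, 1), "reachable": True, "exploitable": False},
    {"first_seen": date(2025, 2, 20), "reachable": True, "exploitable": True},
]

def median_age_days(findings, today):
    """Median open age of reachable, exploitable findings; 0 if none."""
    ages = [
        (today - f["first_seen"]).days
        for f in findings
        if f["reachable"] and f["exploitable"]
    ]
    return median(ages) if ages else 0

print(median_age_days(findings, date(2025, 4, 1)))  # 60.5
```

Note that the unreachable finding is excluded before the median is taken — that exclusion is exactly what keeps the number correlated to risk rather than to scanner volume.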
Where do I start if my program is at zero today?
Start with an inventory. Not a perfect one — a working one. Generate SBOMs for your top ten revenue-generating services, ingest them into a single queryable store (even a Postgres table is fine for week one), and answer three questions: what do we ship, what is in it, and when was it last built. Everything else in the program is downstream of that inventory.
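A week-one ingestion step can be as small as flattening each CycloneDX document into rows for that store. A sketch using a minimal hand-written SBOM; the row shape is an assumption, chosen to answer the three questions directly:

```python
# Flattening a minimal CycloneDX SBOM into inventory rows:
# (service, package, version, built_at). The SBOM below is hand-written
# for illustration; real SBOMs come from the build pipeline.
import json

sbom_json = """
{
  "bomFormat": "CycloneDX",
  "metadata": {
    "component": {"name": "payments-api"},
    "timestamp": "2025-06-01T12:00:00Z"
  },
  "components": [
    {"name": "log4j-core", "version": "2.17.0"},
    {"name": "jackson-databind", "version": "2.15.2"}
  ]
}
"""

sbom = json.loads(sbom_json)
service = sbom["metadata"]["component"]["name"]   # what do we ship
built_at = sbom["metadata"]["timestamp"]          # when was it built
rows = [
    (service, c["name"], c["version"], built_at)  # what is in it
    for c in sbom["components"]
]
for row in rows:
    print(row)
```

Ten services through a loop like this, inserted into one table, and the three questions become one query each.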
From there, add reachability analysis so you can triage, add provenance attestations so you can prove build integrity, and add policy-as-code so the controls do not live in a Confluence page. A program that does these four things in sequence — inventory, triage, provenance, policy — outperforms a program that tries to do all of them at once and finishes none of them.
How Safeguard.sh Helps
Safeguard.sh was built for exactly the program shape described above. The platform generates CycloneDX and SPDX SBOMs at build time, ingests them into a reachability-aware inventory, attaches SLSA provenance automatically, and exposes policy-as-code gates your CI can enforce on every pull request. CISOs use the Safeguard dashboards for the one board-level metric that matters — median age of reachable exploitable findings — and drill into per-service remediation status from there. If you are starting at zero, Safeguard's 30-day onboarding runbook gets the first ten services into the inventory and the policy gates live in the same month, without adding headcount to the AppSec team.