Most supply chain security programs fail not because the tooling is wrong, but because the org chart is. When a CVE drops at 11pm on a Thursday, three people think it's someone else's job: the AppSec engineer assumes platform owns the base image, the platform team assumes the product owner will file the Jira, and the CISO finds out from a customer email on Saturday. We have hit this exact failure mode four times across two companies, and each time the fix was the same: write the topology down, assign named owners, and put the cadences on the calendar. This piece describes the topology we now use, including the weekly rituals, the RACI for dependency disclosures, and the dollar thresholds that trigger CISO escalation. It assumes a mid-size engineering org of 200 to 800 developers; the shape holds for larger orgs, but the headcount scales roughly linearly with the number of business units.
Who owns what in a supply chain program?
Ownership splits cleanly across four teams, and each team has a single accountable director. The AppSec team owns findings in first-party code and build-time dependencies, staffed at roughly one engineer per 80 developers. The Platform Security team owns runtime infrastructure, base images, registry policies, and the build system itself, at one engineer per 150 developers. Third-Party Risk Management (TPRM) owns vendors that process data or hold production credentials, typically 2-3 analysts handling 200-400 vendors. Detection & Response owns the 24/7 on-call rotation and the incident commander role during supply chain compromises.
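As a quick sketch of how those ratios translate into headcount, the function below rounds each team up from the stated ratios (the ceiling-based rounding and function shape are our assumptions, not a formal sizing model; TPRM and Detection & Response are sized separately):

```python
import math

def security_staffing(developers: int) -> dict:
    """Headcount from the ratios above: AppSec at 1 engineer per 80
    developers, Platform Security at 1 per 150. Rounds up per team."""
    return {
        "appsec": math.ceil(developers / 80),
        "platform_security": math.ceil(developers / 150),
    }

print(security_staffing(400))  # {'appsec': 5, 'platform_security': 3}
```

At the top of the stated range, an 800-developer org lands at ten AppSec engineers and six Platform Security engineers before TPRM and on-call staffing.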
The seam that breaks most often is between AppSec and Platform Security when a vulnerable package appears in a base image. Our rule: whoever published the artifact owns the fix. If it is in the base image, Platform owns it. If it is added in the application Dockerfile, AppSec partners with the product team. Write this down in a one-page charter and review it quarterly; ambiguity is the attacker's friend.
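The "whoever published the artifact owns the fix" rule is mechanical enough to encode directly in triage tooling. A minimal sketch (the layer labels and team names are illustrative, not a real schema):

```python
def fix_owner(published_layer: str) -> str:
    """Route a vulnerable package to the team that published the layer
    it appears in, per the one-page charter rule above."""
    routes = {
        "base-image": "platform-security",
        "app-dockerfile": "appsec-with-product-team",
    }
    if published_layer not in routes:
        # Unmapped layers are exactly the ambiguity the charter exists
        # to eliminate; fail loudly rather than guess.
        raise ValueError(f"unmapped layer {published_layer!r}: escalate per charter")
    return routes[published_layer]
```

Failing loudly on an unmapped layer is deliberate: a silent default owner recreates the "someone else's job" problem the charter is meant to kill.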
What cadences keep the program running?
Five recurring meetings form the heartbeat of the program:

- Monday 9:30am, Supply Chain Standup (weekly, 25 minutes): AppSec leads, Platform Security lead, TPRM lead, and the Detection & Response rep walk the top 20 findings by business impact.
- Tuesday 2pm, Vendor Review (biweekly): TPRM presents any new vendors or recertifications.
- Wednesday 11am, Policy Gate Review (weekly): covers any merges blocked by policy for more than 48 hours.
- Thursday 3pm, Executive Dashboard Sync (monthly): the CISO and VPE run through Mean Time to Remediate (MTTR), reachable critical count, and SBOM coverage percentage.
- Friday 1pm, Retrospective (biweekly, alternating with Vendor Review).
Each meeting has a fixed agenda template, a scribe assigned from a rotation, and a maximum time-box. If a meeting runs short three weeks in a row, we cut it; if it runs long, we split it. This is boring, and that is the point.
How should the RACI look for a dependency disclosure?
A fresh high-severity CVE should move through a documented RACI in under four hours for criticals and under 24 hours for highs. The AppSec engineer on triage duty is Responsible for initial assessment, including reachability analysis; the AppSec director is Accountable for the remediation plan; Platform Security and affected product engineering leads are Consulted; and Detection & Response, Legal, and Communications are Informed once severity is confirmed.
For a CVSS 9.0+ finding with a confirmed reachable exploitation path, the incident commander hands off within 60 minutes to a named product VP; the SLA is 48 hours to a patch rolled out in production, or 7 days with a compensating control. For a CVSS 7.0-8.9 reachable finding, the SLA is 14 days. Unreachable highs land in a monthly backlog groomed by AppSec leads. These thresholds should be documented in a single Google doc, versioned, and signed annually by the CISO and the CTO.
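The SLA table above is small enough to encode as a single routing function, which is also how we wire it into policy gates. A sketch under the stated thresholds (returning None for the monthly backlog is our convention, not part of the doc'd policy):

```python
from datetime import timedelta
from typing import Optional

def remediation_sla(cvss: float, reachable: bool) -> Optional[timedelta]:
    """SLA clock per the thresholds above; None means the finding goes
    to the monthly backlog groomed by AppSec leads."""
    if not reachable:
        return None                   # unreachable highs: monthly grooming
    if cvss >= 9.0:
        return timedelta(hours=48)    # patch in prod (or 7d w/ compensating control)
    if cvss >= 7.0:
        return timedelta(days=14)
    return None                       # below high severity: backlog
```

Keeping reachability as an explicit input, rather than folding it into an adjusted score, preserves the audit trail when the CISO signs the thresholds each year.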
When does the CISO get pulled in directly?
Three triggers escalate to the CISO within 30 minutes regardless of time of day. First, any confirmed compromise of a production signing key, build system, or artifact registry. Second, any finding that, if exploited, would result in a customer data disclosure of more than 10,000 records or a regulatory reporting obligation under SEC, GDPR, or sectoral rules like HIPAA. Third, any vendor incident that affects a Tier-1 supplier listed in the TPRM master spreadsheet, which we define as vendors whose outage would cost more than $50,000 per hour or whose breach would require customer notification.
Below these thresholds, the AppSec director handles escalation and provides a written summary to the CISO in the Monday Executive Dashboard. Keeping the escalation threshold explicit is what prevents alert fatigue at the top and missed criticals at the bottom.
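The three triggers reduce to a predicate that paging tooling can evaluate directly. A minimal sketch (the event field names are illustrative, not a real alert schema):

```python
def should_page_ciso(event: dict) -> bool:
    """True if any of the three 30-minute CISO escalation triggers
    above fires; field names are hypothetical."""
    return bool(
        # 1. Compromise of signing key, build system, or artifact registry
        event.get("build_or_signing_compromise", False)
        # 2. Large data disclosure or a regulatory reporting obligation
        or event.get("records_at_risk", 0) > 10_000
        or event.get("regulatory_obligation", False)
        # 3. Incident at a Tier-1 vendor from the TPRM master spreadsheet
        or event.get("vendor_tier") == 1
    )
```

Encoding the triggers this way also makes them testable, which matters more than it sounds: an escalation rule that only lives in a runbook drifts from the rule the on-call actually applies at 11pm.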
How do you staff TPRM without it becoming a paperwork factory?
TPRM gets dragged into SOC 2 questionnaires and never comes back unless you ruthlessly segment vendors by blast radius. We run three tiers: Tier-1 vendors get an annual deep review plus quarterly check-ins and cost roughly $8,000-$15,000 of analyst time each per year; Tier-2 vendors get an annual light review at $2,000-$4,000; Tier-3 vendors get an attestation-based self-service flow costing under $500 in analyst time, backed by automated SOC 2 fetch and SBOM diffing.
Staff TPRM as one senior analyst per 40 Tier-1 vendors, one mid-level per 80 Tier-2, and rely on automation for Tier-3. The biggest unlock is refusing to onboard a vendor without a named business owner in engineering or product; that owner is the one who chases remediation when TPRM finds issues, not the TPRM analyst.
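The staffing ratios translate to a small sizing calculation; as with the AppSec numbers earlier, the ceiling-based rounding here is our assumption:

```python
import math

def tprm_staffing(tier1_vendors: int, tier2_vendors: int) -> dict:
    """Analyst headcount from the ratios above: one senior analyst per
    40 Tier-1 vendors, one mid-level per 80 Tier-2. Tier-3 vendors run
    through the automated self-service flow and add no analyst headcount."""
    return {
        "senior_analysts": math.ceil(tier1_vendors / 40),
        "mid_level_analysts": math.ceil(tier2_vendors / 80),
    }

print(tprm_staffing(60, 200))  # {'senior_analysts': 2, 'mid_level_analysts': 3}
```

Note the asymmetry: Tier-3 scales with automation spend, not headcount, which is what keeps a 200-400 vendor portfolio inside a 2-3 analyst team.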
What does a healthy security champions layer look like?
Champions are the connective tissue between security and product, not a second-tier security team. Aim for one champion per 15-20 developers, recruited voluntarily and recognized with a small quarterly stipend: $500 of training budget plus one conference ticket per year. Their responsibilities are bounded: triage their team's findings weekly, attend the monthly champions guild, and represent their team in the quarterly threat model refresh.
Champions should not be doing remediation work alone; their leverage is context, not capacity. When a champion consistently runs at more than 6 hours per week on security, that is a signal the team needs a real AppSec partner, not a more motivated champion.
How Safeguard Helps
Safeguard operationalizes this topology by making ownership data a first-class property of findings. Every vulnerability inherits a product, a team, and an engineering owner pulled from SCM metadata, so AppSec and Platform never argue about whose queue a ticket goes into. Reachability scoring from Griffin AI cuts the backlog by focusing triage on the 8-12% of findings that are actually exploitable, which is what lets a one-per-80 AppSec ratio work. The TPRM module centralizes vendor SBOMs, SOC 2 evidence, and attestations so your analysts spend time on Tier-1 deep reviews instead of chasing PDFs. Policy gates wired into CI enforce the RACI thresholds automatically, and the executive dashboards export straight into the monthly CISO sync with MTTR, reachable critical counts, and SBOM coverage already formatted.