Every quarter I get the same call from someone who has just been told "you are now responsible for application security, figure it out." Sometimes the caller is a senior engineer who inherited the mandate. Sometimes it is a first-time security leader at a Series B startup. Sometimes it is a CISO at a mid-market company that has operated on pentesting-plus-hope for a decade. The questions are the same, and the answers have barely changed in two years. This FAQ is what I tell them.
What does "AppSec program" mean in 2026?
An AppSec program is the operational discipline of finding, triaging, fixing, and preventing vulnerabilities in the software your organization ships — including the third-party code you depend on and the AI-authored code your engineers increasingly commit. It is not a single tool, it is not a pentest calendar, and it is not a compliance artifact.
The scope has expanded in the last five years. In 2020, a competent AppSec program covered SAST, DAST, SCA, and a light pentesting cadence. In 2026, the baseline adds SBOM generation and governance, reachability-aware vulnerability triage, AI-authored code controls, container and IaC security, and at least nominal secrets hygiene. The surface is larger and the attacker economics are worse. A program built on 2020 assumptions will fail.
What is the first 90-day plan?
Days 1-30: inventory. You cannot secure what you cannot enumerate. Generate SBOMs for your top revenue-generating services, get a list of every repository your organization produces code in, identify the production deployment surface, and build a one-page map of what actually ships. This is unglamorous and the most important work you will do all year.
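To make the inventory step concrete: a minimal sketch of collapsing per-service SBOMs into a one-page map, assuming one CycloneDX JSON file per service (syft, for example, can emit this format). The sboms/ directory layout and the summary fields are illustrative, not prescriptive.

```python
import json
from pathlib import Path

SBOM_DIR = Path("sboms")  # hypothetical layout: one <service>.json per service

def summarize(sbom_path: Path) -> dict:
    doc = json.loads(sbom_path.read_text())
    components = doc.get("components", [])
    # Package URLs look like "pkg:npm/lodash@4.17.21"; the segment after
    # "pkg:" and before "/" identifies the ecosystem.
    ecosystems = {c["purl"].split(":")[1].split("/")[0]
                  for c in components if "purl" in c}
    return {"service": sbom_path.stem,
            "components": len(components),
            "ecosystems": sorted(ecosystems)}

if __name__ == "__main__":
    rows = sorted((summarize(p) for p in SBOM_DIR.glob("*.json")),
                  key=lambda r: r["components"], reverse=True)
    for row in rows:
        print(f"{row['service']:<32} {row['components']:>5} components  "
              f"({', '.join(row['ecosystems'])})")
```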
Days 31-60: wire in one scanner well. Pick one — SCA is usually the highest-leverage — and integrate it into CI with pull-request blocking on new critical findings. Not scheduled scans, not weekly email reports: inline PR checks that prevent new bad code from landing. This is the single highest-ROI control in any AppSec program.
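What the gate looks like mechanically: a sketch of a CI check that fails only on new critical findings, by diffing the scanner's JSON output against a baseline from main. The field names here are hypothetical; adapt them to whatever your SCA tool emits. Diffing against a baseline matters because it blocks new bad code from landing without punishing the team for the day-one backlog.

```python
import json
import sys

def critical_ids(path: str) -> set[str]:
    # Hypothetical findings format: a JSON list of {"id": ..., "severity": ...}.
    with open(path) as fh:
        findings = json.load(fh)
    return {f["id"] for f in findings if f["severity"] == "critical"}

if __name__ == "__main__":
    baseline = critical_ids("baseline-findings.json")  # scan of main
    current = critical_ids("current-findings.json")    # scan of this PR
    new = sorted(current - baseline)
    if new:
        print(f"Blocking merge: {len(new)} new critical finding(s): {new}")
        sys.exit(1)  # non-zero exit fails the PR check
    print("No new critical findings; merge allowed.")
```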
Days 61-90: define the triage workflow. Where do findings go, who owns them, what is the SLA, and what is the escape valve when engineering pushes back. Write this down. Get it signed off by engineering leadership. Publish the SLAs. Most programs skip this step and spend year two wondering why nothing gets fixed.
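One way to keep the written-down workflow honest is to encode it as data that tooling can enforce. A sketch with illustrative SLA numbers and a hypothetical service-to-owner map; the appsec-triage fallback is the escape valve for unowned services.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

SLA_DAYS = {"critical": 7, "high": 30, "medium": 90}  # illustrative SLAs

SERVICE_OWNERS = {  # hypothetical service -> team routing map
    "payments-api": "team-payments",
    "web-frontend": "team-web",
}

@dataclass
class Finding:
    id: str
    severity: str
    service: str
    detected_at: datetime  # must be timezone-aware

def triage(finding: Finding) -> dict:
    # Unowned services fall back to the AppSec queue: the escape valve.
    owner = SERVICE_OWNERS.get(finding.service, "appsec-triage")
    due = finding.detected_at + timedelta(days=SLA_DAYS[finding.severity])
    return {
        "finding": finding.id,
        "owner": owner,
        "due": due.date().isoformat(),
        "overdue": datetime.now(timezone.utc) > due,
    }

f = Finding("CVE-2026-0001", "critical", "payments-api",
            datetime(2026, 1, 5, tzinfo=timezone.utc))
print(triage(f))
```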
How much headcount do I need?
Rule of thumb: 1 AppSec engineer per 100 developers in the first year, moving to 1 per 200 developers as automation matures. A greenfield program at a 500-engineer company should plan for 3-5 AppSec hires over 18 months, not 1 hire and a tool.
The failure mode I see constantly is a single AppSec hire given a budget, a mandate, and no peers. That person ships a SAST integration, generates findings, cannot process the triage load, burns out in 9-14 months, and leaves. The program resets. You do not want this. Budget for a small team, even if you stagger the hires.
If you cannot get the headcount, reduce scope aggressively. A program that covers five services well is more useful than a program that covers all fifty services badly.
What tools should I buy first?
One reachability-aware SCA, one SAST, and one SBOM generator. In that priority order.
SCA first because supply chain is where 90+ percent of your exploitable exposure lives. Reachability-aware because without it your triage queue will be 10,000 findings deep and you will not fix any of them. SAST second because first-party code still matters and the tooling is mature. SBOM generator third because once you have an inventory, every future response — incident, audit, customer request — gets faster.
A sample tooling sequence for a 500-engineer company with a $300K-$500K annual tool budget:
Q1: Reachability-aware SCA, CI-integrated, PR-blocking on criticals
Q2: SAST, integrated into code review, focus on high-blast-radius repos
Q3: SBOM generation at build time, centralized inventory, VEX workflow
Q4: Secrets scanning, IaC security, container image scanning
Defer DAST, API security, and RASP to year two unless you have a specific trigger (external-facing API surface, PCI obligations, existing breach).
Should I build or buy?
Buy detection, build workflow. Every team that tries to build their own SAST or SCA engine ends up maintaining a worse version of what a vendor sells. Every team that tries to buy their triage workflow off the shelf ends up with a brittle system that does not match their engineering culture.
The AppSec engineering team's most valuable output is glue: the PR-comment bot, the Slack escalation, the Jira ticket enrichment, the dashboard that ties findings to service ownership. This is where the program's character lives and where the differentiation is real.
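As a concrete example of that glue, a PR-comment bot is a few dozen lines. This sketch posts a reachability annotation through GitHub's create-comment endpoint (PR comments go through the issues API); the finding fields and the message format are assumptions, not any particular scanner's schema.

```python
import os
import requests

def annotate_pr(repo: str, pr_number: int, finding: dict) -> None:
    # e.g. repo="acme/payments-api"; GITHUB_TOKEN needs repo scope.
    body = (f"{finding['id']} ({finding['severity']}): reachable from "
            f"{finding['entrypoint']}. This one actually matters.")
    resp = requests.post(
        f"https://api.github.com/repos/{repo}/issues/{pr_number}/comments",
        headers={
            "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
            "Accept": "application/vnd.github+json",
        },
        json={"body": body},
        timeout=10,
    )
    resp.raise_for_status()
```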
What metrics do I report?
Three metrics, reported monthly, on one page:
- Median age of exploitable, reachable findings in production. This is the one that matters.
- Mean time to remediate a critical finding, from detection to merged fix.
- PR-blocking coverage: percent of production services with SCA + SAST gating in CI.
Avoid reporting "total findings" as a primary metric. It rewards scanner verbosity, not risk reduction. A program that shows a 40 percent reduction in median age of reachable findings is winning; a program that shows a 40 percent reduction in total findings is probably just suppressing more.
How do I get engineering to actually fix things?
Make the fix cheaper than the delay. That is the entire principle. Every friction point between detection and merge is where the program loses. Auto-generated pull requests, pre-filtered triage queues, reachability annotations that tell engineers "this one actually matters" — these are the differences between a 12-month remediation cycle and a 3-week one.
The anti-pattern is creating a Jira ticket and hoping. Engineers have 40 tickets already. Yours will be the 41st. If you want the finding fixed, land a pull request with the fix attached, annotated with "reachable from your HTTP handler at line 42, please merge."
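A sketch of that last mile, assuming the fix itself (say, a dependency bump) has already been applied to the working tree by your tooling, and that the GitHub CLI, gh, is installed and authenticated. Branch naming and message format are illustrative.

```python
import subprocess

def sh(*cmd: str) -> None:
    subprocess.run(cmd, check=True)

def open_fix_pr(finding_id: str, entrypoint: str, summary: str) -> None:
    branch = f"appsec/fix-{finding_id.lower()}"
    sh("git", "checkout", "-b", branch)
    sh("git", "commit", "-am", f"fix: {summary} ({finding_id})")
    sh("git", "push", "-u", "origin", branch)
    # The PR body carries the reachability annotation, so the reviewer
    # sees why this finding matters without leaving the page.
    sh("gh", "pr", "create",
       "--title", f"[AppSec] {summary}",
       "--body", f"{finding_id} is reachable from {entrypoint}. "
                 "Fix attached, please merge.")
```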
What are the common traps I should avoid?
Three that kill programs in year one. First, buying too many tools too fast. Each new scanner adds a new triage queue and the team drowns. Second, optimizing for coverage before optimizing for flow. It does not matter that you scan all 300 repos if you cannot remediate findings in the ten that matter. Third, confusing policy documents with controls. A Confluence page that says "critical findings must be fixed in 7 days" is not a control; a CI job that blocks merge on new critical findings is.
A fourth trap worth naming: treating AppSec as separate from supply chain security. In 2026 they are the same function with two labels. Organize them together or you will pay the coordination tax forever.
How do I handle AI-authored code in the program from day one?
Pretend it is third-party code and apply the same controls. Require human reviewer attestation, run SCA on every PR including AI-authored ones, and block the assistant from introducing new top-level dependencies without justification. Do not try to build AI-specific tooling in year one; you will not have time, and the generic controls cover most of the risk.
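The new-dependency gate is worth sketching because it reuses the baseline-diff pattern from the CI gate above. Shown for package.json; the manifest path is an assumption, and the same diff works for whatever manifests your repos use.

```python
import json
import subprocess
import sys

def top_level_deps(ref: str) -> set[str]:
    # Read package.json as it exists at the given git ref.
    raw = subprocess.run(["git", "show", f"{ref}:package.json"],
                         check=True, capture_output=True, text=True).stdout
    manifest = json.loads(raw)
    return (set(manifest.get("dependencies", {}))
            | set(manifest.get("devDependencies", {})))

if __name__ == "__main__":
    added = top_level_deps("HEAD") - top_level_deps("origin/main")
    if added:
        print(f"New top-level dependencies require justification: {sorted(added)}")
        sys.exit(1)  # block until a human approves the addition
    print("No new top-level dependencies.")
```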
The program's position should be that AI-authored code is welcome and gated, not banned and unregulated. Banning drives it underground. Gating keeps it visible.
How Safeguard.sh Helps
Safeguard.sh is built for the greenfield AppSec program. The platform combines reachability-aware SCA, SAST, SBOM generation, and policy-as-code into a single CI-integrated workflow, so the first 90 days of program building collapse into a single onboarding. PR-blocking checks, auto-generated fix pull requests, and triage queues pre-filtered by reachability mean a small AppSec team can operate across hundreds of services. The dashboard reports the three metrics that matter — median age of reachable findings, MTTR, and PR-blocking coverage — out of the box, so the first board report is one query rather than a quarter of spreadsheet assembly. If you are standing up a program from zero, Safeguard's onboarding runbook gets the highest-leverage controls live in the first month, without hiring a team of five.