There is a finite amount of attention a developer can spend on security in a day. Spend it well and you build a culture where developers catch problems early, file accurate findings, and treat the security team as a partner. Spend it poorly and you build a culture where developers automate around your tooling and your dashboards lie to you about coverage.
Most security teams have not formalized this trade-off. They add tools as new risks emerge, layer policies as new compliance frameworks land, and assume the cumulative cost on developer attention is someone else's problem. By the time the cost shows up — in attrition, in shadow IT, in CI workflows commented out for "speed" — it is too late to walk back.
This post is a framework for budgeting developer friction across supply chain tooling. It draws on patterns we use with Safeguard customers and on the broader experience of teams that have learned the hard way what overspending looks like.
What a friction budget is
A friction budget is a deliberate, finite allocation of developer attention to security activities. Like a financial budget, it has categories, limits, and a review cycle. Unlike a financial budget, the unit of measure is fuzzy — but the discipline of forcing a number gets you most of the way there.
The simplest formulation is time-based. How many minutes per developer per day are you willing to spend on supply-chain-related interruptions? Reasonable numbers we see across mature programs land between 5 and 15 minutes per developer per day. Less than 5 and you are probably not catching anything. More than 15 and you are likely paying a hidden tax in productivity and morale.
The interesting decision is how to spend that budget. A team that spends all 15 minutes on PR-time gates will block more merges and deliver more security signal at the merge boundary, but it will also trigger more last-mile fixes and more frustration. A team that spends 12 minutes on IDE-time hints and 3 minutes on PR-time gates will catch problems earlier but might miss the merge-time edge cases. Neither allocation is wrong. The choice depends on the maturity of the team and the kind of risk they are managing.
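The two allocations above can be sketched as a small validation check. This is an illustrative sketch only; the surface names and the 15-minute cap come from the guideline in this section, not from any real Safeguard API.

```python
# Minimal sketch of a per-developer daily friction budget split across
# surfaces. Names and numbers are illustrative, not a product schema.

BUDGET_MINUTES = 15  # upper bound from the 5-15 minutes/day guideline

def validate_allocation(allocation: dict[str, int], budget: int = BUDGET_MINUTES) -> bool:
    """An allocation is valid if every surface gets a non-negative share
    and the total stays within the daily budget."""
    return all(m >= 0 for m in allocation.values()) and sum(allocation.values()) <= budget

# The two allocations discussed above: merge-heavy vs. IDE-heavy.
merge_heavy = {"ide": 0, "cli": 0, "pr": 15}
ide_heavy = {"ide": 12, "cli": 0, "pr": 3}

assert validate_allocation(merge_heavy)
assert validate_allocation(ide_heavy)
```

Forcing the allocation into a structure like this makes the trade-off explicit: adding minutes to one surface means taking them from another.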
The categories of friction
Friction shows up in five places, and a budget needs to cover all of them.
The interruption tax is the cost of being pulled out of flow. Every IDE diagnostic, every PR comment, every Slack alert from the security bot is an interruption. Even if the developer dismisses it in 30 seconds, the context-switch cost is closer to 5 minutes. Budget interruptions like a household budgets eating out — necessary, valuable, but worth tracking.
The blocking tax is the cost of being unable to proceed. A failing CI check, a refused merge, a denied deploy. This is more expensive than an interruption because the developer cannot route around it. Budget blocking like a household budgets emergency repairs — rare, justified, and only for high-confidence cases.
The decision tax is the cost of being asked to make a security judgment. Should this dependency be allowed? Is this license compatible? Is this transitive risk acceptable? Each decision pulls the developer out of their domain and into yours. Budget decisions like a manager budgets meetings — necessary, but try to batch them.
The administrative tax is the cost of filing waivers, requesting exceptions, and updating tickets. Even a clean override path costs developer time. Budget admin like a household budgets paperwork — minimize it, automate it, but accept it exists.
The learning tax is the cost of understanding the tooling itself. New CLIs, new dashboards, new policies. This is the one most teams forget to budget for, and it compounds with every tool added. Budget learning like a household budgets education — front-load it, then expect it to taper off.
Where the budget gets blown
Most over-budget situations come from two patterns.
The first is redundant gates. The IDE flags a finding. The CLI flags the same finding. CI flags it again. The PR bot comments on it. Slack pings the security channel. Each of these makes sense in isolation, but together they spend the developer's attention on the same finding five times. The fix is to designate one surface as the authoritative voice for each class of finding and let the others stay quiet.
In the Safeguard configuration we recommend, IDE-time owns "what should I add" decisions, CLI owns "let me check before I push" decisions, and PR-time owns "is this safe to merge" decisions. Each surface only fires for the decisions it owns. A finding that fired in the IDE does not fire again in CI — the policy engine remembers it has been seen and acknowledged.
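The ownership-plus-memory pattern can be sketched in a few lines. The decision-class names, surface names, and `should_fire` function here are hypothetical illustrations of the idea, not the actual Safeguard policy engine.

```python
# Hypothetical sketch of "one authoritative surface per finding class":
# a finding fires only on the surface that owns its decision class, and
# a finding acknowledged once is never surfaced again.

SURFACE_OWNS = {
    "what-to-add": "ide",     # dependency selection hints
    "pre-push-check": "cli",  # local verification before pushing
    "safe-to-merge": "pr",    # merge-time gate
}

seen: set[str] = set()  # finding IDs already surfaced and acknowledged

def should_fire(finding_id: str, decision_class: str, surface: str) -> bool:
    if finding_id in seen:
        return False  # already seen elsewhere; stay quiet
    if SURFACE_OWNS.get(decision_class) != surface:
        return False  # not this surface's decision to make
    seen.add(finding_id)
    return True
```

The key design choice is the shared `seen` state: without it, each surface independently decides to fire and the developer pays for the same finding five times.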
The second pattern is policy maximalism. Security teams write policies for every conceivable risk because each risk feels important in isolation. A 200-rule policy fires constantly, generates a flood of findings, and trains developers to treat every finding as noise. The fix is to be ruthless about which rules block and which rules merely inform. Most policies should advise. Only a small core should block.
Safeguard's policy templates ship with a recommended split: roughly 10 percent of rules are blocking, 30 percent are advisory, and 60 percent are passive — they record the finding for analytics but do not surface it to the developer. Teams that adopt this split see waiver rates drop and fix rates rise within a quarter.
The 80/20 of the budget
For most teams, 80 percent of friction reduction comes from four interventions.
Cache aggressively. The same scan should not run on the same lockfile twice. Most CI pipelines re-run SCA on every push, even when nothing in the dependency graph changed. Caching the result and only re-scanning on lockfile change saves minutes per push and removes the temptation to disable scanning for "speed."
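The caching idea is straightforward to sketch: key the cached result on a content hash of the lockfile, so a push that does not change the dependency graph gets a cache hit. The cache location and `run_scan` callable here are stand-ins, not a real scanner interface.

```python
import hashlib
import json
import pathlib

# Sketch of caching a scan result keyed on the lockfile's content hash,
# so an unchanged dependency graph never triggers a re-scan. run_scan is
# a stand-in for the real (expensive) SCA invocation.

CACHE = pathlib.Path(".scan-cache")

def lockfile_key(lockfile: pathlib.Path) -> str:
    return hashlib.sha256(lockfile.read_bytes()).hexdigest()

def scan_with_cache(lockfile: pathlib.Path, run_scan) -> dict:
    CACHE.mkdir(exist_ok=True)
    entry = CACHE / lockfile_key(lockfile)
    if entry.exists():
        return json.loads(entry.read_text())  # cache hit: skip the scan
    result = run_scan(lockfile)               # cache miss: do the work
    entry.write_text(json.dumps(result))
    return result
```

Because the key is the lockfile's content, not its timestamp or the commit SHA, a rebase or an unrelated code change still hits the cache.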
Group findings. A PR that introduces 12 advisory findings against the same package should produce one comment, not 12. Group at the package level, not the CVE level, because that is how developers think about their dependencies. The Safeguard PR bot defaults to package-level grouping for this reason.
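Package-level grouping is a small transformation. The finding fields below (`package`, `id`) are hypothetical names chosen for illustration; the point is the shape of the collapse, not the schema.

```python
from collections import defaultdict

# Sketch of package-level grouping: many findings against one package
# collapse into a single comment. Field names are illustrative.

def group_by_package(findings: list[dict]) -> list[str]:
    by_pkg: dict[str, list[dict]] = defaultdict(list)
    for f in findings:
        by_pkg[f["package"]].append(f)
    return [
        f"{pkg}: {len(fs)} advisory finding(s) ({', '.join(f['id'] for f in fs)})"
        for pkg, fs in sorted(by_pkg.items())
    ]

# Twelve findings against one package produce one comment, not twelve.
findings = [{"package": "lodash", "id": f"ADV-{n}"} for n in range(12)]
assert len(group_by_package(findings)) == 1
```

Grouping at the CVE level instead would produce twelve entries here, which is exactly the interruption-tax overspend the budget is meant to prevent.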
Pre-resolve fixes. If a finding has a known safe upgrade path, present the fix in the same surface as the finding. The Safeguard CLI exposes safeguard fix to apply suggested upgrades, and the PR bot includes a one-click "apply suggested fixes" button that opens a follow-up PR. The developer never has to research the fix themselves.
Time-bound waivers. A waiver that lives forever becomes a part of the codebase that nobody owns. Default waivers to 30 days. Auto-expire them. Send a notification to the original requester before expiration. The discipline of revisiting waivers periodically catches the cases where the underlying issue has been fixed and the waiver is no longer needed.
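The waiver lifecycle above can be sketched as three small functions. The 3-day reminder window and the field names are illustrative assumptions, not a documented Safeguard behavior.

```python
from datetime import datetime, timedelta, timezone

# Sketch of 30-day default waivers with auto-expiry and a pre-expiry
# reminder window. The reminder window length is an assumption.

DEFAULT_TTL = timedelta(days=30)
REMIND_BEFORE = timedelta(days=3)

def waiver(requester: str, created: datetime, ttl: timedelta = DEFAULT_TTL) -> dict:
    return {"requester": requester, "expires": created + ttl}

def is_active(w: dict, now: datetime) -> bool:
    """A waiver auto-expires: past its expiry it simply stops applying."""
    return now < w["expires"]

def needs_reminder(w: dict, now: datetime) -> bool:
    """Notify the original requester shortly before expiry."""
    return is_active(w, now) and now >= w["expires"] - REMIND_BEFORE
```

Note that expiry requires no cleanup job: an expired waiver is not deleted, it just stops matching, which keeps the audit trail intact.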
Measuring friction
A budget you cannot measure is wishful thinking. Friction is fuzzy but it is not unmeasurable. Three signals are robust enough to track over time.
The first is time-from-finding-to-fix, broken down by surface. If IDE-time findings close in minutes and PR-time findings close in days, your IDE integration is doing the work. Watch the trend, not the absolute number.
The second is waiver rate, broken down by team and repository. A repository where 30 percent of findings are waived is telling you the policy does not match reality. A repository where 0 percent are waived might be ignoring the tool entirely.
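The waiver-rate signal and its two warning thresholds can be sketched directly from the text. The function names and the exact threshold values are illustrative.

```python
# Sketch of the waiver-rate signal per repository: the share of
# findings that were waived rather than fixed. Thresholds mirror the
# 30-percent and 0-percent cases described above.

def waiver_rate(findings: list[dict]) -> float:
    if not findings:
        return 0.0
    return sum(f["waived"] for f in findings) / len(findings)

def interpret(rate: float) -> str:
    if rate >= 0.30:
        return "policy does not match reality"
    if rate == 0.0:
        return "tool may be ignored entirely"
    return "ok"
```

Both failure modes are visible from the same number, which is why waiver rate is worth tracking per repository rather than as a single program-wide average.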
The third is developer sentiment, captured through a short quarterly survey. Five questions, multiple choice, anonymous. Ask whether the security tooling helps them ship safely, whether it slows them down, and whether they would recommend it to a teammate. Track the trend.
Safeguard exposes the first two automatically in the program analytics dashboard. The third is on the security team to run, but the data it produces is the most useful single signal for whether your friction budget is realistic.
The annual review
Like any budget, a friction budget needs a review cycle. Once a year is enough for most organizations.
The review covers three questions. Did we spend the budget on the right risks — are the rules that fired the most also the ones that prevented the most incidents? Did the budget grow without notice — did we add tools or rules without removing any? And do developers feel the budget is reasonable — does the sentiment data match the activity data?
Teams that run this review consistently end up with smaller, sharper supply chain programs that catch more real risk at less developer cost. That is the goal: not to minimize security activity, but to ensure every minute of developer attention spent on security is buying real safety.