Bounty Program Scoping for Dependencies

How to scope a bug bounty program when most of your attack surface lives in third-party dependencies — with guidance on payouts, triage, and upstream coordination.

Shadab Khan
Security Engineer
8 min read

Most bug bounty programs were designed for a world where the bug was in your code. You wrote a web application, a researcher found an IDOR in your permissions check, you paid out, you patched. The scope was the code in your repositories and the services on your domains. It was a tidy arrangement.

That world is mostly gone. The vast majority of modern applications are more dependency than first-party code, and the dependencies carry a disproportionate share of the exploitable surface. When a researcher finds a critical bug in a library you ship but did not write, the original bounty scope does not know how to answer the question "is this in scope?" This post is about how to answer that question in advance, in writing, in a way that holds up when the first dependency report arrives.

The Default Is Silence, and Silence Is Costly

Most bounty programs we have looked at say nothing about dependencies. The scope lists domains and repositories, describes in-scope vulnerability classes, and stops there. A researcher who finds a vulnerability in a library you depend on has to guess whether it qualifies. The guesses go in every direction: researchers who assume "yes" and feel cheated when rejected, researchers who assume "no" and take the finding elsewhere, and researchers who assume "maybe" and open a ticket asking, which creates work for everyone.

The cost of not addressing dependencies in scope is larger than the cost of addressing them badly. A program that explicitly excludes dependencies is clear, even if it is not generous. A program that accepts dependencies with specific rules is better. A program that says nothing is the worst of the three.

A Taxonomy of Dependency Reports

Dependency findings come in recognisable shapes, and a scope that treats them as a single category cannot triage or price them sensibly. The taxonomy that has worked for us:

A known CVE, already public, already patched upstream, not yet patched in your deployment. The finding is not novel security research; it is a patch management observation. These are valuable but should not be paid at research rates.

A known CVE, public and patched upstream, where you have shipped a specific configuration or integration that bypasses the fix. The finding is about your first-party code's interaction with the dependency. These are in scope for a normal program.

An unknown vulnerability in a dependency, discovered by the researcher, where your product is a plausible exploitation target. The finding is novel research in someone else's code that matters to you. The payout question is whether you are the right audience for the disclosure.

An unknown vulnerability in a dependency, discovered by the researcher, where your product merely includes the library. The finding is novel research that is not specifically about you. The fair answer is usually "report it upstream; we will handle updates when a fix ships."

A supply-chain compromise of a dependency — typosquatting, account takeover, malicious code in a version you use. The finding is about the integrity of the dependency supply, not a vulnerability in code. Payout rules here are particularly murky and need their own treatment.

Each category has different stakeholders, different payout calculus, and different remediation paths. Conflating them forces bad decisions.
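
One way to keep the categories from drifting is to encode them once and share that encoding between the public scope page and the intake tooling. Below is a minimal sketch in Python; the category names, fields, and classification logic are ours for illustration only, and a real intake form would capture more nuance than four booleans and a CVE ID.

```python
# Illustrative taxonomy of dependency reports as a shared data structure.
# Category and field names are hypothetical, chosen for this sketch.
from dataclasses import dataclass
from enum import Enum, auto


class DependencyReportCategory(Enum):
    KNOWN_CVE_UNPATCHED = auto()          # public CVE, upstream fix exists, we lag
    KNOWN_CVE_FIX_BYPASSED = auto()       # our config/integration defeats the fix
    NOVEL_VULN_EXPLOITABLE_HERE = auto()  # new research that reaches our deployment
    NOVEL_VULN_INCIDENTAL = auto()        # new research; we merely ship the library
    SUPPLY_CHAIN_COMPROMISE = auto()      # typosquat, account takeover, malicious version


@dataclass
class DependencyReport:
    cve_id: str | None           # set when the finding is already public
    bypasses_upstream_fix: bool  # our first-party code re-opens a fixed hole
    exploit_against_us: bool     # working exploit against our deployment
    integrity_compromise: bool   # malicious code, not a vulnerability in code


def classify(report: DependencyReport) -> DependencyReportCategory:
    """Map an incoming report onto the five categories above."""
    if report.integrity_compromise:
        return DependencyReportCategory.SUPPLY_CHAIN_COMPROMISE
    if report.cve_id is not None:
        if report.bypasses_upstream_fix:
            return DependencyReportCategory.KNOWN_CVE_FIX_BYPASSED
        return DependencyReportCategory.KNOWN_CVE_UNPATCHED
    if report.exploit_against_us:
        return DependencyReportCategory.NOVEL_VULN_EXPLOITABLE_HERE
    return DependencyReportCategory.NOVEL_VULN_INCIDENTAL
```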

What To Pay For, And What Not To

The payout question is where most bounty scoping conversations get stuck. A defensible starting point:

Known CVEs in public databases, reported after the CVE was published, receive at most a small fixed payout or acknowledgement. The research work was not done by the reporter. Paying research rates for public CVE reporting creates perverse incentives and drains budget without adding signal.

Configuration-specific bypasses of dependency fixes — where your code is the reason the vulnerability is still exploitable despite an upstream patch — are paid at first-party rates. The finding is genuinely about your product.

Novel vulnerabilities in dependencies, discovered by the researcher, are paid if the report includes a working exploit against your product specifically. This rule draws a clean line: "you found something new in a library, and you demonstrated how it reaches production in our deployment." Researchers who only have the abstract vulnerability are redirected upstream.

Supply-chain integrity findings are paid when they affect versions you currently deploy or have deployed recently. A malicious package in a version you have pinned for three months is unambiguously your problem. A theoretical typosquat against a package you do not use is not.

These rules are defensible because they align incentives: researchers are rewarded for work that specifically helps you, not for arbitrage of public information.
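
Rules like these can also be encoded directly into intake tooling, which helps with the consistency problem described under the pitfalls below. A minimal sketch, reusing the DependencyReport and classify() types from the taxonomy sketch above; the tier names and reward levels are illustrative, not a recommendation of specific amounts:

```python
# Illustrative payout policy built on the taxonomy sketch above
# (DependencyReport, DependencyReportCategory, classify). The tiers
# are stand-ins for whatever your actual reward schedule says.
from enum import Enum


class PayoutTier(Enum):
    NONE = "out of scope; redirect upstream"
    ACKNOWLEDGEMENT = "thanks or a small fixed payout"
    FIRST_PARTY_RATE = "paid as if the bug were in our own code"


def payout_tier(report: DependencyReport, deployed_now_or_recently: bool) -> PayoutTier:
    category = classify(report)
    if category is DependencyReportCategory.KNOWN_CVE_UNPATCHED:
        # Public information: a patch-management signal, not research.
        return PayoutTier.ACKNOWLEDGEMENT
    if category is DependencyReportCategory.KNOWN_CVE_FIX_BYPASSED:
        # Our code is why the upstream fix does not hold.
        return PayoutTier.FIRST_PARTY_RATE
    if category is DependencyReportCategory.NOVEL_VULN_EXPLOITABLE_HERE:
        # Novel research, demonstrated against our deployment.
        return PayoutTier.FIRST_PARTY_RATE
    if category is DependencyReportCategory.SUPPLY_CHAIN_COMPROMISE:
        # Our problem only if the compromised version is, or recently was, live.
        return PayoutTier.FIRST_PARTY_RATE if deployed_now_or_recently else PayoutTier.NONE
    # NOVEL_VULN_INCIDENTAL: real research, wrong audience.
    return PayoutTier.NONE


# Example: a report that forwards the public Log4Shell CVE without an exploit.
log4shell_forward = DependencyReport(cve_id="CVE-2021-44228",
                                     bypasses_upstream_fix=False,
                                     exploit_against_us=False,
                                     integrity_compromise=False)
print(payout_tier(log4shell_forward, deployed_now_or_recently=True).value)
# -> thanks or a small fixed payout
```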

Coordinating With Upstream Maintainers

A dependency finding that is accepted in your program usually has an upstream dimension. The library maintainers need the fix, too, often more urgently than you do. Coordinating with them is both ethically necessary and practically useful.

The pattern we have settled on:

For novel vulnerabilities, we brief the upstream maintainer under embargo, in parallel with triaging the report. The researcher is kept in the loop on timeline expectations. The upstream fix becomes the mechanism by which the vulnerability is closed across the ecosystem; our first-party mitigation is interim.

For public CVEs, we coordinate patching with the maintainer if we are contributing the fix, or we consume their published patch if they are fixing it themselves. Our role is consumer and ally, not competitor for credit.

For supply-chain compromises, upstream coordination is often with the package registry (npm, PyPI, Maven Central) rather than the original maintainer, especially when the maintainer has lost control of their account. The registries have their own processes; learning them before you need them is worthwhile.

Credit in the eventual advisory matters to researchers. A bounty program that accepts a report and fails to credit the researcher in the resulting public advisory damages its own reputation. When coordinating upstream, advocate for the reporter's credit explicitly.

Triage Load and SLA Commitments

Dependency reports come in volume. Automated CVE scanners ping researchers the moment a new CVE is published in a language ecosystem you use, and researchers forward every alert to every company they can find. A program with a tight SLA ("we respond within twenty-four hours") can drown in these.

The mitigations:

Explicit scope language filters reports at intake. If the scope says "unpatched known CVEs are out of scope unless accompanied by a working exploit against our deployment," researchers can self-triage and the intake workload drops.

Automated deduplication against your own vulnerability management system. If you are already tracking a CVE internally and have a remediation plan, the report is a duplicate. Have tooling that surfaces this immediately; a minimal version of the check is sketched after this list.

Staged SLAs. An initial acknowledgement within twenty-four hours is a reasonable promise. A full triage decision within twenty-four hours is not, at least for dependency reports, and promising one will break the program. Separate the acknowledgement from the decision.
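
The deduplication check in particular is cheap to sketch. Assuming the internal tracker can export the set of CVE IDs it already has open (stubbed below as a plain set), intake tooling only needs to extract identifiers from the report text and intersect; the tracker contents below are illustrative:

```python
# Sketch of intake deduplication: pull CVE identifiers out of a report
# and intersect them with what the internal vulnerability tracker already
# knows. The tracker is stubbed as a plain set of IDs; in practice this
# would be a query against whatever system you run.
import re

CVE_PATTERN = re.compile(r"CVE-\d{4}-\d{4,}", re.IGNORECASE)


def already_tracked(report_text: str, tracked_cves: set[str]) -> set[str]:
    """Return the CVE IDs mentioned in the report that we already track."""
    mentioned = {m.upper() for m in CVE_PATTERN.findall(report_text)}
    return mentioned & tracked_cves


# If every CVE in the report is already tracked with a remediation plan,
# the report can be closed as a duplicate at intake.
tracked = {"CVE-2023-44487", "CVE-2024-12345"}  # second ID is a placeholder
report = "Your gateway is vulnerable to HTTP/2 Rapid Reset (cve-2023-44487)."
print(already_tracked(report, tracked))  # {'CVE-2023-44487'}
```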

Common Pitfalls

Observed failure modes from real programs:

Paying duplicates. A bounty paid for a CVE you already knew about, filed by a researcher who scanned public databases and guessed your stack, trains researchers to scan more and guess more. Duplicate detection has to be rigorous.

Paying for theoretical risk. A researcher demonstrates that a library in your stack has a dangerous configuration option and that, if you used that option, you would be vulnerable. If you do not use it, no payout. Programs that pay for theoretical risk encourage padded reports; a sketch of the "do we actually enable it?" check follows this list.

Inconsistent scope enforcement. One triager accepts a dependency report; a different triager rejects an identical one the next month. Researchers notice, and program reputation suffers.

No escalation path for ecosystem-scale findings. A researcher finds a vulnerability that affects your program, a peer company's program, and a shared dependency. Your program accepts the report; the peer's rejects it. The researcher burns goodwill with both. A short, documented process for "this is bigger than us" disclosures helps.

Advisory silence. A bug is fixed, the researcher is paid, and no public advisory is issued because "it was just a dependency update." Researchers building reputation need public credit for their work. Silent fixes are, to a researcher, unpaid work.
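
For the theoretical-risk pitfall, the triage question is narrow: does any deployed configuration actually enable the option the report flags? A crude sketch follows; the option name and config directory are hypothetical, and a real check would parse the config format rather than match substrings:

```python
# Crude check for the theoretical-risk pitfall: is the flagged option
# actually set anywhere in our deployed configuration? The option name
# and directory are hypothetical. A real implementation would parse the
# config format instead of substring-matching, which as written would
# also count commented-out lines.
from pathlib import Path


def files_setting_option(config_dir: str, option: str) -> list[Path]:
    """Return config files under config_dir that mention the flagged option."""
    return [p for p in Path(config_dir).rglob("*.conf")
            if option in p.read_text(errors="ignore")]


# Empty across every deployed configuration means the finding is
# theoretical for us and, under the payout rules above, unpaid.
print(files_setting_option("/etc/myapp", "allow_unsafe_deserialization"))
```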

How Safeguard Helps

Safeguard provides the dependency visibility a bug bounty program needs to make triage decisions quickly and consistently. The platform tracks every dependency in every deployed version of your software, correlates incoming researcher reports against existing CVE tracking, and answers "are we actually using this, in this version, exposed this way?" in seconds rather than hours. Supply-chain findings are mapped against Safeguard's continuously updated package integrity signals, so typosquat reports and account-takeover compromises can be triaged with context. When a dependency bounty is accepted, Safeguard coordinates the internal remediation workflow, the upstream disclosure timeline, and the eventual advisory publication in one place, so the program scales without the triage load becoming unmanageable.
