Vulnerability Disclosure Policy Template

A practical template for creating a vulnerability disclosure policy, with guidance on safe harbor provisions, response timelines, and researcher relationships.

Yukti Singhal
Security Researcher

A vulnerability disclosure policy (VDP) tells security researchers how to report vulnerabilities in your products and what to expect when they do. Without one, researchers either don't report vulnerabilities (bad for you), report them publicly without warning (worse for you), or get threatened with legal action for trying to help (worst for everyone).

Every organization that builds software should have a VDP. This isn't optional anymore. The Department of Justice updated its Computer Fraud and Abuse Act prosecution guidelines to acknowledge good-faith security research. CISA's Binding Operational Directive 20-01 requires federal civilian agencies to have VDPs. Industry standards like ISO/IEC 29147 define vulnerability disclosure processes.

More importantly, researchers will find vulnerabilities in your software whether you have a policy or not. The VDP determines whether they tell you about them or tell Twitter.

Why Your Organization Needs One

If your objection is "we're too small for a VDP," consider:

  • Security researchers scan the entire internet. They will find your application.
  • Without a VDP, researchers assume you'll be hostile to reports. Many will simply walk away.
  • Your competitors probably have one. Researchers preferentially test and report to organizations with clear policies.
  • A VDP costs nothing to implement. It's a web page and an email address.

If your objection is "we don't want people testing our systems," that's not how this works. Researchers test systems regardless. The VDP gives you a channel to receive the results.

The Template

Here's a disclosure policy template you can adapt. I'll explain each section afterward.


Vulnerability Disclosure Policy

[Company Name] takes the security of our systems and user data seriously. We welcome and appreciate reports from security researchers who help us maintain the security of our products and services.

Scope

This policy applies to the following systems and services:

  • [List specific domains, applications, APIs, or describe scope generally]

The following are explicitly out of scope:

  • Physical security testing
  • Social engineering of employees or contractors
  • Denial of service attacks
  • Third-party applications and services we don't control
  • [Any other exclusions]

Reporting a Vulnerability

To report a vulnerability, please email [security@company.com] with:

  • Description of the vulnerability
  • Steps to reproduce
  • Affected system(s) and version(s) if applicable
  • Potential impact
  • Any supporting evidence (screenshots, logs, proof of concept)

If you need to send sensitive information, our PGP key is available at [URL].

We will acknowledge receipt within [3 business days] and provide an initial assessment within [10 business days].

What We Ask

  • Give us reasonable time to address the vulnerability before public disclosure. We request [90 days] from the date of report.
  • Make a good-faith effort to avoid accessing, modifying, or destroying data belonging to others.
  • Do not exploit the vulnerability beyond what is necessary to demonstrate it.
  • Do not access, download, or modify data that does not belong to you.
  • If you inadvertently access another user's data, stop testing, delete any stored data, and notify us immediately.

Safe Harbor

We consider vulnerability research conducted in accordance with this policy to be:

  • Authorized in accordance with the Computer Fraud and Abuse Act (CFAA), and we will not initiate or support legal action against researchers for accidental, good-faith violations of this policy.
  • Exempt from the Digital Millennium Copyright Act (DMCA), and we will not bring a claim against researchers for circumvention of technology controls.
  • Exempt from restrictions in our Terms of Service that would interfere with conducting security research, and we waive those restrictions on a limited basis for work done under this policy.
  • Lawful, helpful to the overall security of the Internet, and conducted in good faith.

We will not pursue legal action against security researchers who report vulnerabilities in good faith and comply with this policy.

Our Commitment

  • We will respond to reports in a timely manner.
  • We will work with reporters to understand and validate their findings.
  • We will remediate confirmed vulnerabilities according to our risk assessment.
  • We will credit reporters in our security advisories unless they prefer to remain anonymous.
  • We will not take legal action against researchers who act in good faith and in accordance with this policy.

Recognition

We maintain a security acknowledgments page at [URL] to recognize researchers who report valid vulnerabilities. We [do/do not] offer monetary rewards. [If you have a bug bounty program, link to it here.]


Section-by-Section Guidance

Scope

Scope definition is critical. Researchers need to know what they can test without risking legal action or wasting time on systems you don't own.

Be as specific as practical. "*.company.com" is better than "our systems." Listing specific applications and APIs is better still. But also be realistic about what you can enumerate. If you have hundreds of microservices, a broad domain scope with specific exclusions may be more practical.

Common exclusions:

  • Third-party services (you can't authorize testing of someone else's infrastructure)
  • Non-production environments that might not reflect production security posture
  • Specific vulnerability types that don't interest you (spam, self-XSS, etc.)

Reporting Channel

An email address is the minimum viable reporting channel. Ideally, provide:

  • A dedicated security email (security@company.com, not support@)
  • PGP key for encrypted reports
  • A web form for structured submissions
  • Links to your bug bounty platform if you use one

Monitor the reporting channel. A report that sits unread for weeks defeats the purpose.
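
You can also advertise the reporting channel in a machine-readable way with a security.txt file, as defined in RFC 9116, served at /.well-known/security.txt. Many researchers check for this file first. A minimal sketch (all URLs, dates, and paths here are placeholders to adapt):

```txt
# Served at https://company.example/.well-known/security.txt
Contact: mailto:security@company.example
Expires: 2026-12-31T23:59:59.000Z
Encryption: https://company.example/pgp-key.txt
Policy: https://company.example/security/vulnerability-disclosure
Acknowledgments: https://company.example/security/hall-of-fame
Preferred-Languages: en
```

Contact and Expires are the two required fields; the rest are optional but let researchers find your PGP key, policy, and acknowledgments page without hunting for them. Keep the Expires date current, since an expired file signals an unmonitored channel.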

Response Timelines

Commit to specific timelines and honor them:

  • Acknowledgment: 1-3 business days. Just confirm you received the report.
  • Initial assessment: 5-10 business days. Tell the reporter whether you've confirmed the vulnerability and share an initial severity assessment.
  • Remediation timeline: Varies by severity. Communicate expected timelines and update if they change.
  • Disclosure coordination: 90 days is the standard disclosure timeline, aligned with industry norms established by Google's Project Zero and others.

If you can't meet a timeline, communicate that proactively. Researchers are generally understanding about delays if you're transparent about them.

Safe Harbor

The safe harbor provision is the most important section of the policy. Without it, researchers face legal risk when reporting vulnerabilities.

Strong safe harbor language:

  • Explicitly states that good-faith research under the policy is authorized
  • References specific laws (CFAA, DMCA) to provide legal clarity
  • Commits to not pursuing legal action against compliant researchers
  • Distinguishes between the VDP and terms of service

Weak safe harbor language ("we may consider not taking legal action") is worse than nothing because it creates ambiguity that chills researcher participation.

Bug Bounty vs. VDP

A VDP and a bug bounty program are different things:

VDP: "Tell us about vulnerabilities. We promise not to sue you and we'll fix them." No payment required.

Bug bounty: "Tell us about vulnerabilities. We'll pay you based on severity." Requires budget.

Start with a VDP. Add a bug bounty when you have the budget and the process maturity to handle the volume of reports that bounties attract.

Common Pitfalls

No safe harbor. Without safe harbor language, researchers who care about legal risk (the good ones) won't participate.

Overly restrictive scope. If you exclude everything interesting, researchers won't bother.

Unmonitored reporting channel. A VDP with an unmonitored email address is worse than no VDP because it promises a response that never comes.

Legal team overreach. Legal teams sometimes insist on language that weakens safe harbor or adds threatening clauses. Push back. The legal risk from hostile VDP language (researchers going public instead of reporting) exceeds the legal risk from safe harbor provisions.

No internal process. A VDP without an internal process for triaging, validating, and remediating reports creates a queue of unaddressed vulnerabilities. Build the process before publishing the policy.

How Safeguard.sh Helps

Safeguard.sh complements your vulnerability disclosure policy by ensuring that internally discovered supply chain vulnerabilities are tracked with the same rigor as externally reported ones. When a researcher reports a dependency vulnerability, Safeguard.sh immediately maps the blast radius across your entire application portfolio, showing you every system affected. The platform tracks remediation progress and provides the data you need to communicate timelines back to researchers. A well-run disclosure program requires knowing your own software composition, and Safeguard.sh provides exactly that visibility.
