Open Source Security

Responsible Disclosure in Open Source: The Messy Reality

Responsible disclosure sounds simple in theory. In practice, coordinating vulnerability disclosure across open source projects with no budgets, no SLAs, and no obligation to respond is an exercise in patience and diplomacy.

Shadab Khan
DevSecOps Engineer
7 min read

You find a vulnerability in an open source library. What do you do next?

The textbook answer is responsible disclosure: report the vulnerability privately to the maintainer, give them time to develop a fix, coordinate public disclosure to coincide with the patch release. Clean. Professional. Orderly.

The real answer involves emailing an address that may or may not be monitored, waiting weeks for a response that may never come, negotiating a disclosure timeline with someone who maintains the project in their spare time, and hoping that the fix ships before someone else independently discovers and exploits the vulnerability.

Responsible disclosure in open source is a process built on assumptions that frequently do not hold.

The Assumptions That Break

Assumption 1: There is someone to disclose to.

Many open source projects have no security contact. No security.txt file. No SECURITY.md in the repository. No documented way to report a vulnerability privately. The researcher's options are to open a public GitHub issue (which discloses the vulnerability to everyone) or to try to find the maintainer's email address through their profile or commit history.
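For websites and hosted services, the security.txt convention (RFC 9116) gives researchers a well-known place to look: a small text file served at /.well-known/security.txt. A minimal example follows; the addresses and URLs are placeholders, not a real project's contacts:

```text
# Served at https://example.org/.well-known/security.txt
Contact: mailto:security@example.org
Expires: 2026-12-31T23:59:59Z
Encryption: https://example.org/pgp-key.txt
Preferred-Languages: en
Policy: https://example.org/security-policy
```

Only Contact and Expires are required by the RFC; the rest are optional but reduce back-and-forth when a researcher does reach out.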

Some researchers give up at this point: they disclose publicly, sit on the vulnerability, or simply move on to other work. Each of these outcomes is worse than a proper private disclosure would have been.

Assumption 2: The maintainer will respond promptly.

Even when a contact exists, response times vary wildly. Some maintainers respond within hours. Some respond within weeks. Some never respond at all. The maintainer may be on vacation, burned out, or no longer actively maintaining the project.

Google's Project Zero operates a 90-day disclosure deadline. If the vendor does not fix the vulnerability within 90 days of private notification, Project Zero discloses publicly. This policy works reasonably well for commercial vendors with full-time security teams. For a solo open source maintainer who has a day job and a family, 90 days may not be enough, especially if the fix requires significant refactoring.

Assumption 3: The maintainer has the expertise to fix the vulnerability.

Not every open source maintainer is a security expert. When a researcher reports a complex vulnerability, the maintainer may not fully understand the issue, may not know the correct fix, or may implement a fix that addresses the symptom but not the root cause. The Log4j incident included multiple incomplete fixes, each of which required additional CVEs and patches.

Assumption 4: Coordination with downstream consumers is feasible.

Major open source libraries have thousands, sometimes millions, of downstream consumers. Coordinating disclosure across this consumer base is logistically impossible. The maintainer can fix and release the patch, but they cannot ensure that every consumer updates. The window between patch release and universal adoption is the most dangerous period, because the vulnerability is now public but many systems are still unpatched.

The Disclosure Timeline Problem

The central tension in vulnerability disclosure is timing. Disclose too early, and attackers exploit unpatched systems. Disclose too late, and users are exposed without their knowledge for an extended period.

For commercial software, industry norms have converged around 90 days as a reasonable disclosure deadline. The vendor has 90 days to fix the issue. After that, the researcher discloses regardless.

Open source complicates this in several ways:

No SLA. Open source maintainers have no contractual obligation to respond or fix anything within any timeframe. Applying a 90-day deadline to a project maintained by a volunteer feels adversarial, even when the deadline is intended to protect users.

Coordination complexity. A vulnerability in a widely-used library may need to be disclosed to distribution maintainers (Debian, Red Hat, Ubuntu), downstream project maintainers, and cloud platform operators before public disclosure. This coordination takes time and effort that open source maintainers often lack.

Embargo management. During the period between private disclosure and public release, the vulnerability must be kept confidential. This means the fix must be developed privately, which is awkward for projects that do all development in public repositories. Some projects use private security forks or delayed-push branches. Others do not have the infrastructure for this.

Partial disclosure signals. Even without explicit disclosure, savvy observers can detect vulnerability fixes. A commit that rewrites input validation logic or adds bounds checking signals a potential vulnerability. Attackers monitor commit histories of popular projects for exactly these signals. The fix itself becomes a partial disclosure.
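To illustrate how cheap this kind of monitoring is, here is a minimal sketch that flags commit messages containing common security-fix signals. The keyword list is illustrative, not a real tool or an exhaustive taxonomy:

```python
import re

# Phrases that often appear in commits that quietly fix vulnerabilities.
# This list is a hypothetical example, not a complete signal set.
SIGNALS = [
    r"\bbounds?[- ]check",
    r"\bsanitiz",
    r"\binput validation\b",
    r"\boverflow\b",
    r"\bCVE-\d{4}-\d+\b",
    r"\buse[- ]after[- ]free\b",
]
PATTERN = re.compile("|".join(SIGNALS), re.IGNORECASE)

def flag_commits(messages):
    """Return the commit messages that match a security-fix signal."""
    return [m for m in messages if PATTERN.search(m)]
```

Fed the output of `git log --format=%s` for a popular project, even this crude filter surfaces candidate fixes; attackers run far more sophisticated versions against diffs, not just messages.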

The CVE Process

The Common Vulnerabilities and Exposures (CVE) system provides standardized identifiers for vulnerabilities. For open source projects, obtaining a CVE identifier adds legitimacy to a vulnerability report and helps downstream consumers track the issue.

But the CVE process has its own friction. CVE Numbering Authorities (CNAs) assign identifiers, and the process can take days or weeks. Some open source projects are their own CNAs (Apache, for example), which streamlines the process. Others must request CVEs through MITRE or another CNA, which adds delay.

The quality of CVE entries varies. Some include detailed technical descriptions, affected versions, and remediation guidance. Others are sparse to the point of uselessness. For open source projects without dedicated security staff, writing a thorough CVE entry is one more task competing for limited maintainer time.

What Works

Despite the challenges, some practices improve disclosure outcomes in open source:

SECURITY.md files. A simple file in the repository root that documents how to report security issues. It should include an email address, a PGP key (if applicable), and expected response timelines. GitHub prompts repositories to add this file. More projects should.
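A minimal SECURITY.md might look like the following; the address, timelines, and version table are placeholders for a project to adapt:

```markdown
# Security Policy

## Reporting a Vulnerability

Please email security@example.org (PGP key: see the KEYS file).
Do **not** open a public issue for security reports.

We aim to acknowledge reports within 5 business days and to
publish a fix and advisory within 90 days of the initial report.

## Supported Versions

| Version | Supported |
| ------- | --------- |
| 2.x     | yes       |
| 1.x     | no        |
```

Even this much tells a researcher where to send the report, what response to expect, and which versions are worth testing.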

Security-focused mailing lists. A private mailing list for security reports, monitored by multiple maintainers, is more resilient than a single email address. If one maintainer is unavailable, others can respond.

Pre-arranged coordinator relationships. Projects can designate a CERT or other coordination body as a disclosure coordinator. CERT/CC, for example, has a process for coordinating multi-party vulnerability disclosure.

Transparent policies. Documenting the project's disclosure policy, including expected timelines and processes, sets clear expectations for researchers. This reduces friction and misunderstandings.

Downstream notification lists. Major distributions and consumers can be notified in advance of public disclosure. The Linux kernel's security team maintains such a list. Other projects should consider similar arrangements.

The Researcher's Dilemma

Researchers who find vulnerabilities in open source face a genuine dilemma. They have invested time and skill to find an issue that affects potentially millions of users. They want the issue fixed. They may also want recognition, a CVE, or a bounty payment.

If the maintainer is responsive and cooperative, the process works. If the maintainer is unresponsive, the researcher must decide: Wait indefinitely? Disclose publicly? Walk away?

There is no universally correct answer. The researcher's ethical obligation to protect users must be balanced against the practical reality that public disclosure without a patch helps attackers more than defenders.

The best approach is usually graduated escalation. Start with private disclosure to the maintainer. If there is no response within a reasonable period (30 days is common), escalate to a coordination body like CERT/CC. If that also fails, consider limited public disclosure that describes the risk without providing exploit details.
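The escalation ladder above can be sketched as a simple decision rule. The day thresholds encode the rough conventions mentioned here (30 days before escalating, 90 before limited disclosure); they are a policy sketch, not a standard:

```python
def next_step(days_since_report: int, maintainer_responded: bool) -> str:
    """Suggest the next disclosure step under a graduated-escalation policy.

    Thresholds are illustrative: 30 days before escalating to a
    coordinator, 90 days before considering limited public disclosure.
    """
    if maintainer_responded:
        return "coordinate fix and disclosure with maintainer"
    if days_since_report < 30:
        return "wait; send a polite reminder"
    if days_since_report < 90:
        return "escalate to a coordinator such as CERT/CC"
    return "consider limited public disclosure without exploit details"
```

Real decisions weigh more factors (severity, exploitability, evidence of in-the-wild exploitation), but making the ladder explicit keeps the researcher's choices defensible.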

How Safeguard.sh Helps

Safeguard.sh monitors vulnerability disclosure channels across the open source ecosystem, including CVE databases, project-specific advisory channels, distribution security trackers, and researcher disclosure feeds. When a vulnerability is disclosed in a project in your dependency tree, regardless of which disclosure channel it appears on first, Safeguard.sh ensures your security team knows about it immediately. We bridge the gap between the messy reality of open source disclosure and the operational need for timely, actionable vulnerability intelligence.
