The first time my team sent a responsible disclosure email about a zero-day our pipeline had surfaced in a transitive dependency, we got back a one-line reply asking whether we were sure. The maintainer was not being dismissive. They had spent the last three years receiving drive-by vulnerability reports from automated tools that turned out, on inspection, to be hallucinated. They had stopped reading the reports closely because the signal-to-noise ratio had collapsed. Our report, which included the cross-package taint path, the CWE hypothesis, and the disproof attempt that failed, was the first one in months that had given them anything substantive to read. They patched within four days.
That exchange is the reason I think the most underdiscussed part of zero-day discovery in 2026 is the disclosure process. The discovery is the technical achievement. The disclosure is the operational discipline that determines whether the discovery actually translates into a safer ecosystem. Most teams that start running modern discovery pipelines underestimate this. They assume the hard work is finding the bug. The hard work is everything that happens after.
What responsible disclosure means in this context
Responsible disclosure is the practice of reporting a vulnerability to the party that can fix it, in a way that gives them adequate time to ship a fix before public details circulate. It is older than the modern security industry and predates most automated discovery tooling. The reason it works at all is that it is built on trust between the reporter and the recipient. The reporter agrees to hold details until a coordinated disclosure date. The recipient agrees to take the report seriously and to engage in good faith.
Both halves of that bargain are under pressure when reports start flowing from automated pipelines. The recipient has no easy way to distinguish a grounded report from a hallucinated one without doing the work themselves. The reporter has no obvious way to demonstrate grounding beyond attaching the artefacts the pipeline produces. The engine-plus-Griffin AI pipeline I have come to rely on closes most of this gap, but the gap closes only if you actually attach the artefacts.
The artefacts that matter to a maintainer
A useful disclosure report contains five things. The first is the package and version range affected. This sounds trivial, but a surprising number of automated reports get it wrong by omitting transitive context or misnaming the artefact. The second is the entry point in the consumer's code, or a synthetic reproducer, that demonstrates how attacker-controlled data reaches the vulnerable function. The third is the inter-procedural taint path through the package, with line numbers. The fourth is the CWE class and the exploit conditions the hypothesis depends on. The fifth is the disproof attempt that the pipeline ran and the reason it failed. Together these constitute a coherent argument that the bug is real and reachable, not a speculative narration.
Maintainers who receive a report containing those five things typically respond within days, not weeks. The artefacts shift the cost of verification from the maintainer back to the pipeline. The maintainer no longer has to read the package code from scratch and try to construct the exploit themselves. They can verify the claim by walking the path the report supplied.
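The five artefacts can be sketched as a single record that the pipeline fills in before a report goes out. This is a minimal illustration, not Safeguard's actual schema; all field and class names here are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class DisclosureReport:
    """One finding's artefact bundle. Field names are illustrative only."""
    package: str                  # affected package name
    affected_versions: str        # version range, e.g. ">=1.0 <1.3"
    entry_point: str              # consumer-side call or synthetic reproducer
    taint_path: list              # ordered "file.py:123 func()" hops, source to sink
    cwe: str                      # hypothesised class, e.g. "CWE-89"
    exploit_conditions: list      # what the hypothesis depends on
    disproof_attempt: str         # what the pipeline tried
    disproof_failure_reason: str  # why the finding survived the attempt

    def is_complete(self) -> bool:
        """A report missing any artefact shifts verification cost back
        onto the maintainer, so an incomplete one should not be sent."""
        return all([self.package, self.affected_versions, self.entry_point,
                    self.taint_path, self.cwe, self.exploit_conditions,
                    self.disproof_attempt, self.disproof_failure_reason])
```

The point of the completeness check is operational: a report that fails it is exactly the kind of drive-by submission that trained the maintainer in the opening anecdote to stop reading.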
Timing
The industry standard is some flavour of 90-day coordinated disclosure, popularised by Project Zero and adopted with variations by most of the major research organisations. The 90-day window assumes the recipient has the resources of a commercial software vendor. Most open-source maintainers do not. They have a day job, a backlog of unrelated issues, and no security on-call rotation.
In practice I extend the window when the recipient is a single maintainer or a small project. The first communication asks the maintainer how long they need. Most ask for between 30 and 120 days. The longest I have agreed to is 270 days, for a foundational library where the fix required a breaking API change and the maintainer wanted to align it with their release cadence. The shortest was 14 days, for a clear-cut SQL injection in a library where the maintainer had a fix ready within a day and just wanted time to coordinate the release across downstream consumers.
The point is not the specific number. The point is that the timing is a negotiation, not a unilateral declaration, and the negotiation has to be conducted in good faith by both sides.
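The negotiation described above still benefits from stated bounds. A minimal sketch of how a pipeline might compute the embargo date, using the 90-day default and the 14- and 270-day extremes mentioned in this section as the bounds it will agree to (the function and constant names are my own, not any tool's API):

```python
from datetime import date, timedelta

DEFAULT_WINDOW_DAYS = 90   # industry-standard default
MIN_WINDOW_DAYS = 14       # shortest window agreed to in practice
MAX_WINDOW_DAYS = 270      # longest window agreed to in practice

def embargo_deadline(reported: date, requested_days=None) -> date:
    """Clamp the maintainer's requested window to the bounds we will
    agree to; fall back to the 90-day default when no request is made."""
    if requested_days is None:
        days = DEFAULT_WINDOW_DAYS
    else:
        days = max(MIN_WINDOW_DAYS, min(requested_days, MAX_WINDOW_DAYS))
    return reported + timedelta(days=days)
```

For example, a report sent on 5 January 2026 with no response to the timing question gets a default deadline of 5 April 2026. The clamping is the code-level expression of the point above: the number is negotiable, but the negotiation happens inside limits both sides know in advance.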
Embargo discipline
Once the timing is agreed, the embargo has to be respected. This is harder than it sounds when the discovery pipeline is integrated into your build system, because the finding is sitting in a ticketing system that engineers can read. I have seen organisations leak embargoed findings inadvertently because an engineer wrote a public commit message referencing the bug, or because a status update in a Slack channel was screenshotted and shared.
The discipline that works is to keep embargoed findings in a restricted view, mark them clearly, and instruct everyone with access on what they can and cannot say outside the view until the embargo lifts. Tooling helps. So does training. The thing that does not help is treating embargo discipline as somebody else's problem.
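One piece of tooling that helps is a hook that scans outbound text, commit messages in particular, for internal finding identifiers that are still embargoed. A minimal sketch, assuming a hypothetical `SGRD-YYYY-NNNN` identifier scheme and an in-memory set of embargoed IDs (a real deployment would query the restricted tracker instead):

```python
import re

# Hypothetical internal identifiers for findings still under embargo.
EMBARGOED_IDS = {"SGRD-2026-0041", "SGRD-2026-0057"}

def embargoed_mentions(text: str) -> list:
    """Return embargoed finding IDs mentioned in the given text.
    Intended to run as a commit-msg or pre-push hook: a non-empty
    result blocks the commit until the embargo lifts."""
    mentioned = set(re.findall(r"SGRD-\d{4}-\d{4}", text))
    return sorted(mentioned & EMBARGOED_IDS)
```

This does not catch a screenshot of a Slack channel, which is why the training half of the discipline matters as much as the tooling half.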
What to do when the maintainer is unresponsive
This happens. A package is abandoned, the maintainer has lost interest, the project has not seen a commit in two years. You sent a report and got nothing back. The conventional approach is to wait out the window you stated in the report, then disclose publicly with a recommendation to migrate away from the package. There are nuances. If the package is widely used and the bug is severe, you may want to coordinate the disclosure with the maintainers of downstream packages so they can publish migration guidance simultaneously. If the package is obscure, a public advisory may not move the needle, and the better outcome is often a fork with the patch applied, published under a clear name.
In every case, the disclosure has to actually happen. Sitting on an unfixed zero-day in an abandoned package indefinitely is not responsible. It is just delayed.
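The branching in the previous two paragraphs reduces to a small decision function. This is a sketch of the policy as I practise it, not a prescription; the function name and the phrasing of the outcomes are mine.

```python
def unresponsive_disclosure_plan(widely_used: bool, severe: bool) -> str:
    """Choose a disclosure path for an unresponsive or abandoned package,
    after the stated window has elapsed without a reply."""
    if widely_used and severe:
        # Downstream maintainers publish migration guidance simultaneously.
        return "coordinate public disclosure with downstream maintainers"
    if not widely_used:
        # A public advisory for an obscure package rarely moves the needle.
        return "publish a fork with the patch applied, under a clear name"
    return "public advisory recommending migration away from the package"
```

Whatever branch is taken, the constant is the sentence that follows: the disclosure has to actually happen.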
Coordinating with public CVE assignment
Once the embargo lifts, the bug should get a CVE identifier. The MITRE process is now reasonably efficient, and the major numbering authorities (GitHub Security Advisories among them) can issue identifiers within days. Coordinating the CVE assignment with the maintainer's release of the patched version reduces confusion downstream, because consumers can search for the CVE in their SCA tools and find the upstream fix in the same pass.
This step is also where your discovery pipeline starts feeding back into the conventional CVE-driven world. The zero-day you found becomes a public CVE that other organisations' SCA scanners will start matching. The ecosystem gets slightly safer, and your organisation's contribution is recorded in the advisory.
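The coordination described here can be tracked with a simple record tying the public identifier to the upstream fix. A minimal sketch, with illustrative field values; the class name and the lag metric are my own convention, not part of any CNA's process:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class CoordinatedAdvisory:
    """Ties the CVE to the patched release so downstream consumers
    find both in the same SCA pass. Values are illustrative."""
    cve_id: str               # assigned by a CNA once the embargo lifts
    package: str
    fixed_version: str        # the upstream release containing the patch
    patch_released: date
    advisory_published: date

    def publication_lag_days(self) -> int:
        """Days between the patched release and the public advisory.
        Keeping this near zero reduces downstream confusion."""
        return (self.advisory_published - self.patch_released).days
```

A lag of a day or two is normal; a lag of weeks means consumers are matching a CVE with no visible fix, which is precisely the confusion this coordination step exists to avoid.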
How Safeguard helps
Safeguard manages the responsible disclosure lifecycle for the zero-days surfaced by the engine-plus-Griffin AI pipeline. Each finding ships with the artefact bundle a maintainer needs: package and version range, taint path with line numbers, CWE hypothesis, exploit conditions, and the failed disproof attempt. The platform drafts the disclosure email, tracks embargo timing, manages the secure communication channel with the maintainer, and coordinates the CVE assignment when the embargo lifts. Your team gets the discovery pipeline and the disclosure operations as a single integrated workflow, instead of a discovery tool that hands you a finding and walks away.