Coordinated Disclosure Zero-Day Playbook

A playbook for coordinated disclosure of zero-day vulnerabilities, covering timelines, stakeholder management, embargo discipline, and the judgement calls in between.

Nayan Dey
Senior Security Engineer
8 min read

Coordinated disclosure is one of those practices that looks orderly on paper and turns messy the moment a live zero-day is on the table. A vendor does not answer email. A reporter threatens to publish. A downstream consumer finds out through a side channel. The careful ninety-day schedule everyone nodded at in the kickoff meeting evaporates in the first week.

This playbook is what we reach for when one of these lands. It assumes a vulnerability that is genuinely novel, genuinely impactful, and not yet public. Most of what follows applies equally to a finding you made yourself and to one a researcher brought to you. The asymmetries are where the judgement calls live.

First Hour: Confirm, Contain Knowledge, Triage

The first hour after a zero-day lands matters more than the next week. The work is not remediation — there is no fix yet — it is establishing a reliable foundation for everything that follows.

Confirm the finding. Reproduce it in a controlled environment. Supposed zero-days that turn out to be configuration issues, already-public bugs, or ordinary bugs that happen to look severe waste everyone's time. The first reproduction, ideally by an engineer other than the original reporter, is non-negotiable.

Contain knowledge. The set of people who know about the vulnerability should be kept small and intentional. Not because secrecy is inherently virtuous, but because every additional person who knows adds risk: a leak, an accidental disclosure, an unintended action. Keep a short disclosure distribution list of named individuals, agreed to explicitly. Avoid broad channels (shared chat rooms, open tickets, team mailing lists) until the playbook moves to a later phase.

Triage severity and exploitability. CVSS vectors are useful but not sufficient. The questions that drive downstream decisions: Is this remotely exploitable without authentication? Is the affected surface internet-facing? How many customers or users are affected? Is there evidence of active exploitation in the wild? The answers determine the timeline.

Stakeholder Map

A coordinated disclosure has more stakeholders than any other kind of security incident. Mapping them in advance, not during the crisis, saves time.

Affected vendors. The software vendors whose products contain the vulnerability. If there is more than one (common in supply-chain cases), coordinating across them is half the work.

Downstream consumers. Teams, companies, or ecosystems that depend on the affected software and need advance warning or coordinated patching. For widely-used components, this list can be enormous. Prioritisation is needed.

CERT/CSIRT organisations. Regional or sectoral coordinators who can help broker disclosure when direct vendor contact fails or when the vulnerability affects national-scale infrastructure.

Researchers or reporters. External parties who may have discovered the bug independently, whether through deliberate research or by accident. They have their own interests and timelines and are often the least predictable variable.

Regulators and legal. Depending on jurisdiction and industry, legal obligations may apply (breach disclosure, critical infrastructure notifications). This should be a phone call with counsel, not a guess from the security team.

Internal communications, customer support, sales. At some point the vulnerability becomes public. The teams who will talk to customers, press, and prospects need advance notice to prepare.

The map is not a generic template. Build it for the specific vulnerability at hand.

Vendor Engagement: The First Contact

The first contact to an affected vendor is disproportionately important. Vendor disclosure programmes vary wildly in quality. Some have formal PSIRTs with clear intake, PGP keys, and defined response times. Some have a security@ alias that forwards to sales. Some have nothing, and you are emailing engineering managers from their LinkedIn profiles.

A useful first contact includes: a clear statement that this is a security vulnerability disclosure, a proposed disclosure timeline, a request for acknowledgement within a short window (forty-eight to seventy-two hours), and an invitation to move to a secure channel for details. The first contact does not include the vulnerability details. Those come after the vendor acknowledges and provides a secure channel.
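That structure can be templated before an incident rather than improvised during one. A minimal sketch of a first-contact renderer; the wording, vendor names, and the 72-hour and 90-day defaults are placeholders drawn from the elements above, not a standard format:

```python
from datetime import datetime, timedelta, timezone

def first_contact_email(vendor: str, product: str, ack_hours: int = 72,
                        disclosure_days: int = 90) -> str:
    """Render a first-contact skeleton: states intent, proposes a timeline,
    requests acknowledgement within a short window, asks for a secure
    channel, and deliberately withholds all technical details."""
    now = datetime.now(timezone.utc)
    ack_by = now + timedelta(hours=ack_hours)
    disclose_by = now + timedelta(days=disclosure_days)
    return (
        f"Subject: Security vulnerability disclosure regarding {product}\n\n"
        f"Hello {vendor} security team,\n\n"
        f"We have identified a security vulnerability in {product}. "
        "This message intentionally contains no technical details.\n\n"
        f"Please acknowledge receipt by {ack_by:%Y-%m-%d %H:%M} UTC and "
        "provide a secure channel (PGP key or equivalent) for the full report.\n\n"
        f"We propose coordinated public disclosure on or before "
        f"{disclose_by:%Y-%m-%d}, subject to discussion.\n"
    )

# Hypothetical vendor and product names for illustration.
msg = first_contact_email("ExampleCorp", "ExampleServer")
```

Keeping the template in code (or any reviewable artefact) means the "no details in the first mail" rule is enforced by the template itself, not by whoever is typing under pressure.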

If the vendor does not respond within the stated window, escalate. A second email, a LinkedIn outreach to a senior engineer, a CERT/CSIRT-mediated contact. If after multiple attempts the vendor is unresponsive, the playbook enters a harder phase — proceeding with disclosure on a defined timeline without vendor cooperation — which needs explicit sign-off from leadership and legal.

Setting and Defending a Timeline

Ninety days is the industry default for a reason: it is long enough for most vendors to ship a fix and short enough that a determined researcher will not lose patience. But ninety days is not a universal law. Circumstances change the calculation:

Vulnerabilities under active exploitation in the wild need shorter timelines. If attackers are already using the bug, waiting ninety days simply leaves the attack window open.

Vulnerabilities with enormous blast radius (widely-deployed components, critical infrastructure) sometimes merit longer timelines to allow coordinated patching across the ecosystem.

Vulnerabilities where exploitation is theoretical, the attack surface is limited, or the fix is architectural rather than tactical may also need longer timelines, with the caveat that "more time" should come with evidence of active work.
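These adjustments can be captured as a rough starting-point function. A minimal sketch; only the ninety-day default comes from industry practice as described above, while the seven-day and 120-day figures and the field names are illustrative assumptions, not policy:

```python
from dataclasses import dataclass

@dataclass
class Triage:
    """Illustrative triage answers; field names are assumptions, not a standard."""
    remotely_exploitable_unauth: bool
    internet_facing: bool
    active_exploitation: bool
    widely_deployed: bool

def proposed_disclosure_days(t: Triage) -> int:
    """Map triage answers to a starting disclosure window in days.

    A sketch of the adjustments described above: shorter under active
    exploitation, longer for ecosystem-wide blast radius, ninety days
    as the default. Real programmes negotiate from here, not to here.
    """
    if t.active_exploitation:
        return 7    # attackers already have it; coordinate fast (assumed figure)
    if t.widely_deployed:
        return 120  # ecosystem-wide patching may need extra runway (assumed figure)
    return 90       # the industry-default window

# Example: an actively exploited, internet-facing bug gets a short window.
urgent = Triage(True, True, True, False)
print(proposed_disclosure_days(urgent))  # 7
```

The point of writing it down, even this crudely, is that the starting number becomes an explicit decision with visible inputs rather than a gut call made differently each time.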

Whatever timeline you set, defend it. Timelines erode when vendors ask for "just a little more time" repeatedly and nobody says no. The discipline is to grant extensions when there is evidence of progress and to hold the line when there is not. This is awkward and occasionally creates friction with partners. It is also how disclosure programmes retain credibility.

Embargo Discipline

During the embargo period — the time between vendor notification and public disclosure — information control is the work. Common failures:

Unmarked internal communications. A zero-day discussed in a shared ticket, a channel that includes contractors, or a meeting invite with a descriptive title has already leaked to a larger audience than intended. Internal communication should be on named channels with access controls matching the sensitivity.

Ambient public signals. Commits with suspicious messages, test cases that hint at the vulnerability, unusual traffic against a staging environment. Adversaries watching a project will notice. Embargoed fixes should be developed in private branches with generic commit messages until the disclosure day.

Overly specific patches. A patch that fixes exactly the vulnerable code path and ships with a detailed changelog is a disclosure in everything but name. Patch packaging, commit messages, and release notes all need to be aware of the embargo.

Third-party discussions. A security conference talk, a blog post, a podcast episode that discusses the vulnerability before disclosure is a breach of embargo. All parties under embargo should have explicit agreement to this in writing.
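Some of the ambient-signal and overly-specific-patch failures can be caught mechanically. A sketch of a pre-push check that flags commit messages or release notes containing embargo-sensitive terms; the word list here is purely illustrative, and in practice it is built per incident:

```python
import re

# Illustrative terms only. A real per-incident list would include the
# internal codename, the reserved CVE ID, and affected function names.
SENSITIVE_TERMS = [
    r"CVE-\d{4}-\d+",
    r"\bRCE\b",
    r"remote code execution",
    r"heap overflow",
    r"exploit",
]
PATTERN = re.compile("|".join(SENSITIVE_TERMS), re.IGNORECASE)

def embargo_leaks(text: str) -> list[str]:
    """Return the sensitive terms found in a commit message or changelog,
    in the order they appear. An empty list means the text passed."""
    return [m.group(0) for m in PATTERN.finditer(text)]

# A generic message passes; a descriptive one is flagged before it ships.
assert embargo_leaks("Refactor input handling") == []
assert embargo_leaks("Fix heap overflow in parser (CVE-2024-12345)") == [
    "heap overflow", "CVE-2024-12345"]
```

Wired into a pre-push hook or CI job on the private branch, a check like this does not replace human review, but it catches the careless commit message at the cheapest possible moment.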

Disclosure Day

Disclosure day is the easy part if the preceding weeks were disciplined. The releases, advisories, and communications are drafted in advance, reviewed, and triggered on a coordinated schedule. The failure modes are mostly tactical: a timezone miscommunication that has one party publishing before another, a patch release that slipped a day but the advisory went out anyway, a press embargo that breaks earlier than expected.

The mitigations are unglamorous. Written timelines with timezones. Dry-run coordination the day before. Named owners for every artefact. A short shared document that lists the exact sequence of events and their expected times. This is operational work that no senior person wants to do and that everyone is grateful for when it is done.
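The written timeline with timezones can live as a small machine-checkable artefact rather than prose. A sketch using the standard-library zoneinfo module; the events, owners, and times below are placeholders:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# Placeholder run sheet: every event carries an explicit timezone, so the
# "whose 9am?" ambiguity is resolved in the data, not in a meeting.
SCHEDULE = [
    ("patch release",
     datetime(2025, 3, 4, 9, 0, tzinfo=ZoneInfo("America/Los_Angeles")), "vendor"),
    ("advisory publication",
     datetime(2025, 3, 4, 18, 30, tzinfo=ZoneInfo("Europe/Berlin")), "psirt"),
    ("blog post and customer notice",
     datetime(2025, 3, 4, 18, 0, tzinfo=ZoneInfo("UTC")), "comms"),
]

def utc_sequence(schedule):
    """Sort events by their actual UTC instant and render an unambiguous
    run sheet, one line per event with its named owner."""
    ordered = sorted(schedule, key=lambda event: event[1])
    return [
        f"{when.astimezone(ZoneInfo('UTC')):%Y-%m-%d %H:%M} UTC  {name}  (owner: {owner})"
        for name, when, owner in ordered
    ]

for line in utc_sequence(SCHEDULE):
    print(line)
```

Rendering everything into a single UTC sequence makes ordering mistakes visible at a glance: if the advisory sorts before the patch release, the run sheet is wrong before anyone publishes anything.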

Post-Disclosure: The Long Tail

A coordinated disclosure does not end when the advisory goes public. The long tail includes:

Customer questions, some of them pointed. Have canned responses prepared and a clear escalation path for questions the canned responses do not cover.

Secondary reports — similar bugs in similar code, or follow-on research by others building on your disclosure. These need their own triage and often their own disclosure cycles.

Retrospective work. What did the process handle well? What broke? The answers become inputs to the next playbook revision. Disclosure playbooks improve by iteration, not by a single up-front design.

How Safeguard Helps

Safeguard supports coordinated disclosure end to end, from the first reproduction of a zero-day through post-disclosure monitoring, with templates and automations drawn from disclosure playbooks that have held up in real incidents. The platform provides controlled-access collaboration spaces with named distribution lists, embargo-aware ticketing, and artefact tracking so knowledge containment does not depend on team discipline alone. Disclosure timelines and vendor engagement status are tracked as first-class workflows, with escalation prompts when vendors miss commitment windows. When disclosure day arrives, Safeguard coordinates advisory publication, customer notifications, and downstream consumer alerts from a single timeline, so the careful work of the embargo period is not undone in the final hours.
