Use Case · Zero-Day Discovery

Find Zero-Days Pattern Scanners Miss

Pattern-matching scanners can only find vulnerabilities someone has already reported. Safeguard's engine plus Griffin AI traces taint across package boundaries and hypothesises exploit conditions — surfacing candidate zero-days in your dependency graph before they are published anywhere else.

81%
Exploit Hypothesis Accuracy
10k+
Packages Continuously Analysed
<1h
Candidate Review Triage
98%
Adversarial Prompt Resistance

Why Pattern Scanners Never Find Them

Traditional SCA tools look for vulnerabilities that already have CVEs. A zero-day, by definition, has none.

01

CVE-Based Detection Is Inherently Reactive

Pattern scanners match your dependencies against a database of known vulnerabilities. If a vulnerability hasn't been disclosed, there's no pattern to match. You find out about the zero-day when the attacker uses it on someone else.

02

Cross-Package Taint Flows Are Invisible

Your HTTP handler in app.js reaches a sink in a transitive dependency 7 hops deep. No single file contains the full vulnerability, so static analysis confined to one package can never see the flow that crosses package boundaries.
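To make the problem concrete, here is a hypothetical cross-package flow (every package and function name below is illustrative, not taken from any real dependency). Each file looks harmless on its own; only the composed path is vulnerable:

```typescript
// Hypothetical cross-package taint flow — all names are illustrative.

// app.js — the only code you wrote:
function handler(req: { body: string }): string {
  return renderCard(req.body); // hop 1: into a direct dependency
}

// node_modules/ui-card (direct dependency):
function renderCard(text: string): string {
  return buildTemplate(text); // hop 2: into a transitive dependency
}

// node_modules/tpl-lite (transitive dependency):
function buildTemplate(src: string): string {
  // Sink: evaluates attacker-controlled input as a template. The source
  // (req.body) and this sink never appear in the same package.
  return Function(`return \`${src}\``)();
}
```

A scanner that analyses `tpl-lite` in isolation sees a template helper; one that analyses `app.js` in isolation sees a harmless handler. Neither sees the exploit.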

03

Pure-LLM Bug-Hunters Have 60-95% False Positive Rates

LLMs asked to 'find vulnerabilities' flag every risky-looking sink. Without structured context about reachability and exploit conditions, the output is unusable at production scale.

04

Triage Capacity Is Already Saturated

Security teams drown in thousands of low-signal findings. Adding more noise to the queue — even if some of it is signal — doesn't help. What's needed is high-precision candidates, not high-volume alerts.

The Engine-Plus-LLM Pipeline

The Engine Finds The Shape. The Model Hypothesises The Exploit.

Stage 1 — Taint Analysis

The deterministic engine walks the cross-package call graph, propagates taint from every source type to every sink, and surfaces every path that survives existing sanitizers.

Cross-package call graph construction
Version-aware symbol resolution
Sanitizer-aware propagation
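The propagation step can be sketched as a walk over the cross-package call graph that kills taint at sanitizers and records every surviving source-to-sink path. This is a minimal sketch under assumed data structures, not the engine's actual implementation:

```typescript
// Minimal sketch of sanitizer-aware taint propagation over a
// cross-package call graph. Data structures are assumptions.
type NodeId = string;

interface CallGraph {
  edges: Map<NodeId, NodeId[]>; // caller -> callees, across package boundaries
  sanitizers: Set<NodeId>;      // nodes known to neutralise taint
  sinks: Set<NodeId>;           // dangerous operations
}

// Return every source-to-sink path that survives existing sanitizers.
function taintedPaths(g: CallGraph, source: NodeId): NodeId[][] {
  const results: NodeId[][] = [];
  const walk = (node: NodeId, path: NodeId[]): void => {
    if (g.sanitizers.has(node)) return; // taint killed here
    const next = [...path, node];
    if (g.sinks.has(node)) results.push(next);
    for (const callee of g.edges.get(node) ?? []) {
      if (!next.includes(callee)) walk(callee, next); // avoid cycles
    }
  };
  walk(source, []);
  return results;
}
```

A path routed through a sanitizer node is dropped; only unsanitized paths reach Stage 2.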

Stage 2 — Griffin AI Hypothesis

For each surviving path, Griffin AI receives a structured brief (source, sink, intermediate code, version) and hypothesises the exploit class, trigger input, and CVE mapping — if one exists.

CWE classification
Input hypothesis
Known-CVE deduplication
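A brief of that shape might look like the following sketch. The field names are assumptions for illustration, not Griffin AI's actual schema:

```typescript
// Illustrative shape of the structured brief Stage 2 receives,
// and the hypothesis it produces. Field names are assumptions.
interface ExploitBrief {
  source: { file: string; symbol: string };                   // e.g. HTTP request body
  sink: { pkg: string; version: string; symbol: string };     // version-aware
  intermediateHops: string[];                                 // code along the taint path
}

interface Hypothesis {
  cweId: string;           // e.g. "CWE-502" for unsafe deserialization
  triggerInput: string;    // candidate input that would reach the sink
  knownCve: string | null; // null => survives known-CVE deduplication
}

// A hypothesis matching an already-published CVE is deduplicated out;
// only unmatched hypotheses continue as candidate zero-days.
function isCandidateZeroDay(h: Hypothesis): boolean {
  return h.knownCve === null;
}
```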

Stage 3 — Disproof & Verification

A second model pass tries to disprove the hypothesis. Only candidates that survive the adversarial check reach the review queue — keeping precision high and triage hours low.

Adversarial disproof
Ranked evidence
Coordinated disclosure workflow
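The gate at the end of the pipeline reduces to a simple invariant: a candidate reaches the queue only if the adversarial pass found no refutation. A minimal sketch, with hypothetical names:

```typescript
// Sketch of the Stage 3 gate. Names and fields are illustrative.
interface Candidate {
  hypothesisId: string;
  // Mitigating conditions the disproof pass found (e.g. an input
  // allow-list upstream, an unreachable version range). Any entry
  // here refutes the exploit hypothesis.
  refutations: string[];
}

function survivesDisproof(c: Candidate): boolean {
  return c.refutations.length === 0; // no refutation found => keep it
}

// Only survivors of the adversarial check reach human triage.
function triageQueue(candidates: Candidate[]): Candidate[] {
  return candidates.filter(survivesDisproof);
}
```

Inverting the burden of proof this way is what keeps precision high: the model must fail to break its own hypothesis before a human ever sees it.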

Engine Output

How A Candidate Zero-Day Reaches Your Triage Queue

Taint analysis surfaces a path from HTTP request body into a deserialization sink 6 packages deep. Griffin AI hypothesises an unsafe-deserialization pattern and maps it to a known CWE. The disproof pass attempts to refute the hypothesis with specific mitigation conditions; it cannot. The finding reaches your queue with the full taint path, exploit hypothesis, disproof attempt, and a ranked evidence bundle. If you opt in, Safeguard runs coordinated disclosure with upstream maintainers on confirmed candidates.

6-hop
Cross-Package Taint
~8 min
Pipeline Latency
1-line
Remediation Suggestion
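The finding that lands in the queue bundles everything described above. A hypothetical shape (field names are assumptions, not the product's API):

```typescript
// Hypothetical shape of a finding's evidence bundle. All names assumed.
interface Finding {
  taintPath: string[];       // full source-to-sink path across packages
  exploitHypothesis: string; // e.g. unsafe deserialization, with CWE mapping
  disproofAttempt: string;   // why the adversarial refutation failed
  evidence: { rank: number; note: string }[]; // ranked supporting evidence
  remediation: string;       // e.g. a one-line version bump or sanitizer
}

// Evidence is presented to the reviewer highest-confidence first.
function rankedEvidence(f: Finding): { rank: number; note: string }[] {
  return [...f.evidence].sort((a, b) => a.rank - b.rank);
}
```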

Find What's Hiding In Your Dependencies.

Run the engine against your codebase and see the candidate zero-days your pattern scanner never surfaced.