VEX (Vulnerability Exploitability eXchange) is the mechanism that separates real vulnerability management from vulnerability theater. A raw scan against an SBOM produces hundreds of CVE matches. A scan enriched with vendor VEX statements produces a filtered list where each finding is either actionable or explicitly excluded with a documented justification. The difference between those two outputs is the difference between an engineering team that ships and an engineering team that drowns. VEX only works if the tool consuming it understands the VEX schema as structured data. Griffin AI does. Mythos-class tools, in my experience, do not — and the downstream consequence is a noisier finding list that wastes engineer attention.
What VEX actually specifies
A VEX statement is a claim about a specific vulnerability in a specific product. The four canonical states are not_affected, affected, fixed, and under_investigation. The not_affected state requires a justification drawn from an enumerated vocabulary: component_not_present, vulnerable_code_not_present, vulnerable_code_not_in_execute_path, vulnerable_code_cannot_be_controlled_by_adversary, inline_mitigations_already_exist. Each justification has formal semantics, and the point of the enumeration is that downstream tools can evaluate claims uniformly. Griffin AI's VEX ingestion parses these states and justifications into typed annotations on the vulnerability nodes in the inventory graph. A Mythos-class tool reads the VEX document as a vendor statement and summarizes it, which loses the enum.
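What "typed annotations" means in practice can be sketched in a few lines. This is an illustrative model, not Griffin AI's actual API: the enum values come from the VEX vocabulary above, while the function name and dict shape are assumptions made for the example.

```python
from enum import Enum

class VexStatus(Enum):
    """The four canonical VEX statement states."""
    NOT_AFFECTED = "not_affected"
    AFFECTED = "affected"
    FIXED = "fixed"
    UNDER_INVESTIGATION = "under_investigation"

class Justification(Enum):
    """Enumerated justifications required for not_affected statements."""
    COMPONENT_NOT_PRESENT = "component_not_present"
    VULNERABLE_CODE_NOT_PRESENT = "vulnerable_code_not_present"
    VULNERABLE_CODE_NOT_IN_EXECUTE_PATH = "vulnerable_code_not_in_execute_path"
    VULNERABLE_CODE_CANNOT_BE_CONTROLLED_BY_ADVERSARY = (
        "vulnerable_code_cannot_be_controlled_by_adversary"
    )
    INLINE_MITIGATIONS_ALREADY_EXIST = "inline_mitigations_already_exist"

def parse_statement(raw: dict):
    """Parse a raw statement into typed values. A not_affected claim
    without an on-vocabulary justification raises instead of passing
    through as prose -- that enforcement is the point of the enum."""
    status = VexStatus(raw["status"])
    justification = None
    if status is VexStatus.NOT_AFFECTED:
        justification = Justification(raw["justification"])
    return status, justification
```

The key property is that an off-vocabulary justification fails loudly at ingestion rather than surviving as an unverifiable sentence in a summary.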
The CSAF 2.0 format and why structured parsing matters
The Common Security Advisory Framework is the format most large vendors use to publish VEX data. CSAF 2.0 documents have a defined schema with product trees, vulnerability sections, and machine-readable scoring. The product tree is how CSAF identifies which products are affected, and it uses a hierarchical naming scheme that requires graph-aware consumption. Griffin AI's CSAF parser walks the product tree and matches branches against the product inventory to identify which statements apply to which of your deployments. A Mythos-class tool reads CSAF as a long JSON file and attempts to pattern-match product names, which fails for any vendor that uses versioned product identifiers or model-number suffixes. The CSAF advisories from Red Hat, SUSE, and Cisco — three of the most prolific VEX publishers — use exactly these identifier schemes.
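A graph-aware walk of a CSAF product tree is short but not optional. The sketch below follows the CSAF 2.0 `product_tree.branches` shape; the sample fragment is invented for illustration and is not a real advisory.

```python
def walk_product_tree(branches, path=()):
    """Recursively walk CSAF 2.0 product_tree branches, yielding
    (product_id, hierarchical name) for every concrete product leaf."""
    for branch in branches:
        new_path = path + (branch.get("name", ""),)
        product = branch.get("product")
        if product:
            yield product["product_id"], " / ".join(new_path)
        yield from walk_product_tree(branch.get("branches", []), new_path)

# Illustrative fragment shaped like a CSAF product tree (not a real advisory).
tree = [
    {"category": "vendor", "name": "ExampleVendor", "branches": [
        {"category": "product_name", "name": "widgetd", "branches": [
            {"category": "product_version", "name": "2.4.1",
             "product": {"product_id": "WIDGETD-2.4.1", "name": "widgetd 2.4.1"}},
        ]},
    ]},
]
```

The product_id recovered here is what the vulnerability section's `product_status` lists reference; name pattern-matching skips that indirection and is exactly where versioned identifiers break it.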
Justification semantics decide the filter
Not all not_affected justifications are equally strong. component_not_present is ironclad — if the component isn't in your build, the CVE cannot affect you. vulnerable_code_not_in_execute_path is strong but depends on the caller's reachability claim being accurate. inline_mitigations_already_exist is the weakest — it relies on the vendor's assertion that another control blocks the vulnerability, which a rigorous security team will want to verify. Griffin AI exposes justifications as first-class properties, so policy can treat them differently: accept component_not_present automatically, require reachability confirmation for vulnerable_code_not_in_execute_path, flag inline_mitigations_already_exist for engineer review. A pure-LLM tool collapses all justifications into "the vendor says it's fine" and either filters too aggressively or too conservatively. Neither is what a mature program wants.
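A policy that treats justifications differently reduces to a small lookup table. The three mappings named above come from the text; the treatment of the remaining two justifications and the action names are assumptions for the sketch.

```python
# Hypothetical policy table mapping each not_affected justification
# to how the gate treats it. Only the first, third, and fifth rows
# are from the text; the rest are illustrative assumptions.
POLICY = {
    "component_not_present": "accept",                 # ironclad: component absent from build
    "vulnerable_code_not_present": "accept",
    "vulnerable_code_not_in_execute_path": "confirm_reachability",
    "vulnerable_code_cannot_be_controlled_by_adversary": "confirm_reachability",
    "inline_mitigations_already_exist": "engineer_review",  # weakest: verify the control
}

def triage(justification: str) -> str:
    """Return the policy action for a not_affected justification.
    Unknown values route to review rather than being silently accepted."""
    return POLICY.get(justification, "engineer_review")
```

The design choice worth noting is the default: a justification the policy does not recognize escalates instead of filtering, which is the fail-safe direction.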
The OpenVEX format and minimal structured statements
OpenVEX is the lightweight alternative to CSAF — a minimal JSON format with just enough fields to declare a VEX statement. Griffin AI ingests OpenVEX alongside CSAF through the same pipeline, and both feed the same annotation layer on the vulnerability graph. The advantage of structured parsing is format-agnosticism: whether the vendor publishes in CSAF, OpenVEX, or CycloneDX-embedded VEX, the downstream policy sees the same typed annotations. A Mythos-class tool trained on lots of CSAF prose will handle OpenVEX unevenly because the prose patterns differ. Structured ingestion normalizes across formats by design.
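Format-agnosticism falls out of normalizing every source into one annotation shape. This sketch flattens an OpenVEX document (using the v0.2.0 statement shape, where `vulnerability` is an object with a `name` and products carry an `@id`) into per-(vulnerability, product) records; the output dict shape is an assumption chosen for the example.

```python
def normalize_openvex(doc: dict) -> list[dict]:
    """Flatten an OpenVEX document into per-(vuln, product) annotations,
    the same shape a CSAF or CycloneDX parser would also emit."""
    annotations = []
    for stmt in doc.get("statements", []):
        for product in stmt.get("products", []):
            annotations.append({
                "vuln_id": stmt["vulnerability"]["name"],
                "product": product["@id"],
                "status": stmt["status"],
                "justification": stmt.get("justification"),
            })
    return annotations
```

Once CSAF, OpenVEX, and embedded VEX all land in this one shape, the policy layer never needs to know which format a claim arrived in.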
VEX at policy time, not at display time
Where VEX is applied matters. Griffin AI applies VEX at policy evaluation time, which means the policy gate that decides whether a PR can merge is already looking at the filtered findings. Vulnerabilities marked not_affected with a strong justification are excluded from the gate's evaluation automatically. Vulnerabilities marked under_investigation are routed to the security team for triage rather than blocking the PR. A Mythos-class tool that only reads VEX at display time — formatting a summary for a dashboard — cannot feed VEX signals into policy enforcement, because the policy layer is looking at the unfiltered finding list. The end result is engineers being blocked by CVEs that have already been declared not applicable.
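Policy-time application means the gate sees the filtered list, not the raw one. A minimal sketch, keyed by CVE only for brevity (a real gate would key on (vulnerability, product)); the strong-justification set and function names are assumptions:

```python
# Assumed set of justifications the gate accepts without human review.
STRONG = {"component_not_present", "vulnerable_code_not_present"}

def evaluate_gate(findings, vex_annotations):
    """Apply VEX before the merge gate: exclude strongly-justified
    not_affected findings, route under_investigation to security triage,
    and block the PR only on what remains."""
    by_vuln = {a["vuln_id"]: a for a in vex_annotations}
    blocking, triage = [], []
    for f in findings:
        ann = by_vuln.get(f["vuln_id"])
        if ann and ann["status"] == "not_affected" and ann["justification"] in STRONG:
            continue                 # filtered, with a documented justification
        if ann and ann["status"] == "under_investigation":
            triage.append(f)         # security team's queue, not the PR author's
            continue
        blocking.append(f)
    return blocking, triage
```

A display-time tool runs the equivalent of this function after the gate has already fired, which is why its filtering cannot unblock anyone.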
The supplier responsibility question
VEX statements are published by vendors about their own products. When you consume a vendor's VEX, you are trusting the vendor's claim. Griffin AI records the VEX provenance — who published the statement, when, and with what signature — as part of the annotation. You can filter by trusted publishers, require signature verification, and audit which claims came from which sources. This is a graph property of the annotation. A pure-LLM tool that treats VEX as prose does not track provenance per claim because it does not represent claims individually. When you later ask "why was this CVE filtered out," the prose tool cannot name the statement. Griffin AI can.
The embedded VEX in CycloneDX and SPDX
CycloneDX (since 1.4) and SPDX 3.0 both support embedded VEX — VEX statements inside the SBOM itself. This is how a vendor ships "here's my product and here's what you don't have to worry about" in a single artifact. Griffin AI reads embedded VEX as annotations on the components in the same document, so the filtering happens automatically on ingestion. A Mythos-class tool reads the SBOM and the VEX as two passages and may or may not correlate them, depending on how the prompt is shaped. The correlation that should be automatic is instead dependent on user phrasing, which is exactly the kind of reliability hole formal inventory ingestion eliminates.
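The automatic correlation is a bom-ref join within one document. This sketch follows the CycloneDX JSON shape (`vulnerabilities[].analysis` plus `affects[].ref` pointing at component `bom-ref`s); note that CycloneDX uses its own justification vocabulary (e.g. code_not_reachable), which an ingester would map onto the VEX enum — that mapping is omitted here.

```python
def embedded_vex(bom: dict) -> dict:
    """Correlate a CycloneDX SBOM's embedded vulnerabilities[] section
    with its components[] by bom-ref, in one pass over one document."""
    components = {c["bom-ref"]: c["name"] for c in bom.get("components", [])}
    annotations = {}
    for vuln in bom.get("vulnerabilities", []):
        analysis = vuln.get("analysis", {})
        for target in vuln.get("affects", []):
            name = components.get(target["ref"], target["ref"])
            annotations[(vuln["id"], name)] = (
                analysis.get("state"),
                analysis.get("justification"),  # CycloneDX vocabulary, not the VEX enum
            )
    return annotations
```

The join key is right there in the schema; a prose reading of the same document has to rediscover it from context every time.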
VEX-first workflows for large programs
In programs that manage hundreds of products, the security team cannot hand-triage every CVE match. The only viable workflow is VEX-first: publish internal VEX for every component you've assessed, so the next scan automatically excludes known non-issues. Griffin AI supports this by letting teams author their own VEX in structured form and storing it alongside vendor VEX in the annotation layer. A scan that follows sees both internal and external VEX and applies both. A Mythos-class tool has no persistence layer for internal VEX — every scan starts from the raw finding list and the team has to re-triage. That workflow scales linearly with product count, which is the same as saying it doesn't scale.
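The persistence layer reduces to merging two annotation sets with a precedence rule. A minimal sketch, assuming the normalized annotation shape used above and assuming (as a design choice, not a documented Griffin AI behavior) that internal assessments override vendor claims:

```python
def merge_vex(vendor: list[dict], internal: list[dict]) -> dict:
    """Combine vendor-published and internally-authored VEX into one
    annotation map keyed by (vuln, product). Internal statements are
    applied last so the team's own assessments take precedence."""
    merged = {}
    for ann in vendor + internal:
        merged[(ann["vuln_id"], ann["product"])] = ann
    return merged
```

A scan that consults this merged map never re-surfaces a CVE the team already assessed, which is what breaks the linear re-triage cost.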
The audit question VEX actually answers
When an auditor asks "why was CVE-X not fixed," the answer "it was filtered because of a VEX statement from the vendor dated Y with justification Z" is defensible. The answer "the tool's summary said we don't have to worry about it" is not. Griffin AI produces the first answer because it preserves the structured statement. Mythos-class tools produce the second answer because they preserve the summary. Pick the tool that gives you a defensible paper trail.
The test to run
Import a vendor CSAF advisory that covers a component in your SBOM with a not_affected state and a specific justification. Run a scan with both tools. Check whether the CVE appears in the actionable findings, and if it's filtered, check whether the justification is retrievable. Griffin AI will filter correctly and surface the justification on request. Mythos-class tools will filter unevenly and summarize the justification loosely. The test takes ten minutes and tells you everything you need to know about the VEX pipeline.
The architectural conclusion
VEX is a typed annotation on a typed vulnerability graph. Griffin AI models it that way. Pure-LLM tools flatten it into prose and lose the filtering power. Formal inventory in, formal outputs out — VEX is the clearest case study for that principle in the entire SBOM ecosystem.