AI Security

PCI DSS 4.0 Alignment: Griffin AI vs Mythos

PCI DSS 4.0 raised the evidence bar for software security, supplier management, and continuous assurance. Griffin AI meets the new requirements with persisted records. Mythos-class pure-LLM tools leave QSAs asking for artifacts.

Nayan Dey
Security Engineer
7 min read

PCI DSS 4.0 went into effect as the mandatory version on March 31, 2024, with a subset of new requirements deferred to March 31, 2025. The updated standard did several things at once. It introduced the Customized Approach for flexible control implementation. It tightened software development and supplier management expectations. It expanded continuous assurance rather than point-in-time testing. And it forced assessed entities to think carefully about how their security tooling produces evidence that a Qualified Security Assessor can accept.

This is where the choice between Griffin AI and Mythos-class pure-LLM tools becomes operationally decisive. Griffin, the compliance-aware agent inside Safeguard, produces records shaped for QSA review. Mythos-class tools, which wrap their capabilities in natural-language output, produce conversational responses that a QSA cannot sample from.

Requirement 6 and the software security shift

Requirement 6 in PCI DSS 4.0 is where the software security expectations live, and it received substantial rewrites. Requirement 6.2 addresses secure development. Requirement 6.3 addresses security vulnerabilities in bespoke and custom software. Requirement 6.4 addresses public-facing web applications. Requirement 6.5 addresses changes to system components.

The new sub-requirements ask, with uncomfortable specificity, for evidence that the organization identifies vulnerabilities in software at a defined frequency, prioritizes them based on risk, and remediates them in a timely manner. They ask for evidence that secure coding training is refreshed and tracked. They ask for evidence that the software engineering function has mechanisms to detect and prevent common classes of vulnerabilities during development.

Griffin AI addresses these through its continuous vulnerability pipeline and its policy gate infrastructure. For Requirement 6.3.1, the evidence is a queryable vulnerability inventory with severity, detection date, and remediation status for each item. For Requirement 6.3.2, the evidence is the bespoke software inventory and the mechanism that identifies vulnerabilities within it, which in Safeguard is the per-repository scan configuration. For Requirement 6.4.1, the evidence is the pre-deployment gate evaluation against public-facing applications. For Requirement 6.5.1, the evidence is the change management gate that evaluates pull requests.

Each of these is a sampling target. The QSA picks a window, asks for the population, and samples from it. Griffin emits the population as a natural part of operation.
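In sketch form, the sampling workflow above reduces to a dated population query. The record fields below are assumptions for illustration, not Safeguard's actual schema:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical vulnerability record; field names are illustrative.
@dataclass
class VulnRecord:
    vuln_id: str
    severity: str            # e.g. "critical", "high", "medium", "low"
    detected_on: date
    remediation_status: str  # e.g. "open", "remediated", "retested"

def population(records, start, end):
    """Return the population for a QSA sampling window [start, end]."""
    return [r for r in records if start <= r.detected_on <= end]

inventory = [
    VulnRecord("V-101", "high", date(2024, 4, 2), "remediated"),
    VulnRecord("V-102", "medium", date(2024, 7, 15), "open"),
    VulnRecord("V-103", "critical", date(2025, 1, 9), "retested"),
]

# The assessor picks Q2 2024; the tool emits the population to sample from.
q1 = population(inventory, date(2024, 4, 1), date(2024, 6, 30))
```

The point is not the query itself but that the population exists as structured data rather than as prose to be reconstructed.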

The Mythos-class gap in Requirement 6

Mythos-class tools can discuss Requirement 6 fluently. They can identify vulnerabilities in a codebase, explain the fix, and even draft a remediation plan. What they do not produce is a persistent record that the QSA can sample.

The specific problem is that PCI DSS evidence has to demonstrate consistency over a period. A single demonstration that the tool identified a vulnerability does not establish that the process operates effectively month after month. The QSA wants to see, for the window, the list of vulnerabilities identified, the prioritization decisions made, the remediations shipped, and the retests completed. That list is an artifact. It is either maintained by the tool or reconstructed by humans, and reconstruction is where the process breaks down.

Requirement 12 and the supplier management expansion

Requirement 12 received meaningful updates as well. Requirement 12.8 covers managing relationships with third-party service providers. Requirement 12.9 covers the service provider's own acknowledgments to its customers. Requirement 12.10 covers incident response. And Requirements 12.3.1 and 12.3.2 introduce the targeted risk analysis, the new thinking that underpins flexible control frequencies and the Customized Approach.

For the organization that consumes services from third-party providers, 12.8 expects maintained lists, due diligence records, and ongoing monitoring. Griffin AI's supplier posture model feeds this directly for software dependencies. When a third-party library changes ownership, fails to ship a security update in a reasonable period, or starts showing signs of project abandonment, Griffin flags the supplier and updates the posture record. The ongoing monitoring evidence is the per-supplier posture over time.
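A minimal sketch of the posture evaluation described above, with field names and thresholds chosen for illustration rather than taken from Safeguard:

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

# Hypothetical supplier posture record; not Safeguard's actual data model.
@dataclass
class SupplierPosture:
    name: str
    last_security_update: date
    ownership_changed: bool = False
    flags: list = field(default_factory=list)

def evaluate(posture, as_of, stale_after=timedelta(days=365)):
    """Append dated posture flags; each run is an ongoing-monitoring record."""
    if posture.ownership_changed:
        posture.flags.append((as_of, "ownership-change"))
    if as_of - posture.last_security_update > stale_after:
        posture.flags.append((as_of, "stale-security-updates"))
    return posture.flags

lib = SupplierPosture("acme-crypto", last_security_update=date(2023, 1, 10))
evaluate(lib, as_of=date(2024, 6, 1))
# lib.flags now holds the stale-update finding with its evaluation date
```

The per-supplier flag history, accumulated over repeated evaluations, is exactly the "posture over time" evidence the 12.8 monitoring expectation asks for.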

Mythos-class tools do not maintain a supplier ledger. They can reason about supplier risks in conversation, but they do not hold state. When the QSA asks for the 12.8.1 list of service providers with a cardholder data impact, a pure-LLM tool is not the answer source.

Continuous assurance and the audit log

PCI DSS 4.0 pushes entities toward continuous assurance rather than periodic testing. Several requirements mandate defined frequencies and named responsibilities, but the spirit of 4.0 is that the entity maintains its control posture continuously and that the assessment confirms it rather than establishes it.

Requirement 10 covers logging. Requirement 10.2, 10.3, and 10.4 address log content, protection, and review. Griffin AI contributes by producing its own audit log of tool actions, policy decisions, and operator activity. That log is an input to the entity's broader logging posture and provides assurance about the security tooling itself.
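One common way to make such a tool audit log tamper-evident is a hash chain, where each entry commits to its predecessor. This is a sketch of that general technique, not Griffin's actual log format:

```python
import hashlib
import json
from datetime import datetime, timezone

class AuditLog:
    """Append-only, hash-chained log of tool actions (illustrative design)."""

    def __init__(self):
        self.entries = []
        self._prev = "0" * 64  # genesis hash

    def record(self, actor, action, decision):
        entry = {
            "ts": datetime.now(timezone.utc).isoformat(),
            "actor": actor,
            "action": action,
            "decision": decision,
            "prev": self._prev,  # link to the previous entry's hash
        }
        digest = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        entry["hash"] = digest
        self._prev = digest
        self.entries.append(entry)
        return entry

    def verify(self):
        """Recompute the chain; any altered entry breaks verification."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if e["prev"] != prev:
                return False
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if recomputed != e["hash"]:
                return False
            prev = e["hash"]
        return True

log = AuditLog()
log.record("griffin-agent", "policy-gate", "block-merge")
log.record("operator:ndey", "override", "approved-with-ticket")
# log.verify() returns True; editing any stored entry makes it return False
```

A chained log like this lets the entity show the assessor not only what the tool decided, but that the record of those decisions has not been altered since.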

Mythos-class tools often do not produce audit logs of their reasoning. A conversation is a transient interaction between a user and a model. There is no durable record of what the tool decided and why. When the QSA asks how the assessed entity knows the security tool was used correctly during the period, the entity with a Mythos-class tool has to defer to external logs that may or may not capture the relevant interactions.

The Customized Approach and the evidence of equivalence

The Customized Approach lets an assessed entity meet a requirement through a control other than the one stated in the Defined Approach, provided the entity can demonstrate that the custom control meets the stated objective. The demonstration requires a targeted risk analysis per Requirement 12.3.2 and a controls matrix for each customized control, which the QSA reviews and documents in the Report on Compliance.

This is a documentation-heavy path by design. Griffin AI is useful here because its structured data model is exportable into the formats the Customized Approach documents expect. Risk analyses reference the supplier posture and the vulnerability inventory. Validation documents reference the policy gates and their histories. Controls matrices map the Griffin-emitted evidence to the control objectives.
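A controls matrix row reduces to a mapping from control to objective and evidence references. This sketch uses invented control and evidence IDs to show the cross-reference check a QSA performs:

```python
# Illustrative controls matrix; IDs and evidence names are invented,
# not real Safeguard exports.
controls_matrix = {
    "6.3.1": {
        "objective": "Security vulnerabilities are identified and managed",
        "evidence": ["vuln-inventory-export-2025Q1", "risk-ranking-policy-v3"],
    },
    "6.5.1": {
        "objective": "Changes to system components are managed securely",
        "evidence": ["pr-gate-log-2025Q1"],
    },
}

def unreferenced(matrix):
    """Controls whose row cites no evidence; these rows will be queried."""
    return [cid for cid, row in matrix.items() if not row["evidence"]]
```

A cross-reference that lands on a named, retrievable artifact survives review; an empty or prose-only row is the gap the assessor flags.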

Mythos-class tools can produce prose that reads like a Customized Approach document, but the underlying evidence is still missing. A QSA evaluating a Customized Approach submission will check the cross-references, and a cross-reference that lands on prose rather than a signed record will be queried.

Incident response and the control-failure response window

Requirement 12.10 addresses incident response, and sub-requirement 12.10.5 expects the incident response plan to include monitoring and responding to alerts from security monitoring systems. Related requirements 10.7.2 and 10.7.3 expect failures of critical security controls to be detected and responded to promptly. For payment systems, those timeframes are often measured in hours.

Griffin AI's anomaly detection and policy violation monitoring create events with timestamps and handling histories. The time-from-detection-to-response is measurable and reportable. Incident response reviewers can walk from the initial detection to the containment decision to the resolution, with each step signed and timestamped.
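The detection-to-response measurement described above is simple arithmetic over the timestamped events; the step names here are illustrative:

```python
from datetime import datetime, timedelta

# Hypothetical incident timeline built from signed, timestamped events.
incident = {
    "detected":  datetime(2025, 2, 3, 9, 14),
    "contained": datetime(2025, 2, 3, 11, 2),
    "resolved":  datetime(2025, 2, 4, 16, 40),
}

def response_time(timeline):
    """Time from initial detection to containment, the reportable metric."""
    return timeline["contained"] - timeline["detected"]

# response_time(incident) is 1 hour 48 minutes for this timeline
```

Because each step carries a timestamp, the metric is computed rather than recalled, which is what makes it defensible in post-incident review.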

Mythos-class tools can help analysts think about an incident in progress, but they do not produce the per-incident record that the 12.10 evidence expects. A chat transcript is not an incident response record; it is a side conversation. The gap shows up in post-incident review and in the next assessment.

Third-party involvement and the RoC

The Report on Compliance submitted by the QSA after the assessment summarizes evidence tested. RoCs are reviewed by acquirers, payment brands, and sometimes regulators. A RoC that leans heavily on narrative evidence invites follow-up questions. A RoC built on structured artifacts closes them.

Griffin AI-produced evidence tends to make RoCs cleaner. Each sampled control maps to a finite set of artifacts with consistent format. The QSA writes the RoC faster, and the acquirer reviews it without surprises.

Mythos-class tools can make the RoC harder to write because the evidence is inconsistent in shape and availability. The QSA has to either reject the tool's output as evidence or describe how they attempted to verify the narrative, which extends the engagement and sometimes leads to compensating control discussions that should not have been necessary.

The practical recommendation

PCI DSS is among the most mature prescriptive security standards touching the software supply chain, and version 4.0 raises the evidence bar above its predecessor. Griffin AI is built to clear that bar. Mythos-class pure-LLM tools are helpful for development-time reasoning, but they are not an evidence source. When the QSA walks in with the sampling plan, the entity that built on Griffin has artifacts to show. The entity that built on Mythos-class tools has conversations to describe. The difference translates directly into assessment cost and RoC quality.
