ISO 27001 Mapping: Griffin AI vs Mythos

ISO 27001 Annex A has 93 controls in the 2022 revision, each needing documented evidence. Griffin AI emits records that map cleanly onto those controls; Mythos-class pure-LLM tools leave control owners narrating instead of demonstrating.

Shadab Khan
Security Engineer

The 2022 revision of ISO/IEC 27001 reorganized Annex A into 93 controls across four themes: Organizational, People, Physical, and Technological. The move consolidated older controls and added new ones in areas that had been growing in importance, including threat intelligence, information security for cloud services, secure coding, and web filtering. For any organization that has gone through a recertification audit under the new revision, the practical effect is that the Statement of Applicability now demands fresher and more specific evidence than the 2013 version ever did.

This revision is where the two categories of AI security tooling we have been comparing produce visibly different outcomes. Griffin AI, the compliance-aware agent inside Safeguard, produces records that map directly to the new control taxonomy. Mythos-class pure-LLM tools, which answer questions in natural language but do not persist structured decisions, leave control owners in the uncomfortable position of describing their controls without being able to demonstrate them.

The Statement of Applicability as an evidence list

An ISMS certified against ISO 27001:2022 is judged in part on the quality of its Statement of Applicability. For every Annex A control, the SoA must indicate whether the control applies, how it is implemented, and where the evidence of its operation lives. During the certification audit, and during subsequent surveillance audits, the assessor will sample across the SoA and request the evidence.

This is where a tool that emits structured, signed records earns its keep. The SoA line for control 8.25 (Secure development lifecycle) expects, among other things, evidence that the organization has defined secure coding standards, that code is reviewed before merge, that dependencies are evaluated, and that vulnerabilities identified during development are tracked through remediation. For each of these, the assessor wants to see an artifact.

Griffin AI against the Annex A themes

Griffin AI covers the technological and organizational themes most densely because those are where its operational data lives, but the coverage is designed explicitly against the 2022 taxonomy.

In the Organizational theme, controls 5.19 (Information security in supplier relationships) and 5.21 (Managing information security in the ICT supply chain) are strongly supported. Griffin maintains supplier posture data for every third-party component that appears in a scanned product. Each supplier record carries signature trust status, known-malicious history, maintenance activity, and license posture. When the auditor samples 5.21, the evidence is an exported supplier risk report with per-supplier records and source traceability.
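A per-supplier posture record of the kind described above might look something like the following sketch. The field names, the `SupplierRecord` type, and the export function are illustrative assumptions, not Griffin AI's actual schema.

```python
from dataclasses import dataclass, asdict
import json

# Hypothetical supplier posture record; fields are assumptions chosen to
# mirror the posture dimensions named in the text, not a documented schema.
@dataclass
class SupplierRecord:
    supplier: str
    signature_trusted: bool        # artifact signatures verified against a trusted key
    known_malicious_history: bool  # any past malicious release on record
    days_since_last_release: int   # rough proxy for maintenance activity
    license: str                   # declared license posture
    source: str                    # where this evidence was collected, for traceability

def export_supplier_risk_report(records: list[SupplierRecord]) -> str:
    """Serialize records into the kind of export an auditor could sample for 5.21."""
    return json.dumps([asdict(r) for r in records], indent=2)

report = export_supplier_risk_report([
    SupplierRecord("acme-crypto", True, False, 14, "Apache-2.0",
                   "registry scan 2024-03-02"),
])
```

The point of the structure is that every field is a sampled fact with a source, which is exactly what a narrative answer lacks.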

In the Technological theme, the match is denser. Control 8.8 (Management of technical vulnerabilities) maps to Griffin's continuous vulnerability pipeline. Control 8.9 (Configuration management) maps to the policy gate configuration and its version history. Control 8.25 (Secure development lifecycle) maps to the union of Griffin's pre-merge gates, build-time scans, and post-deployment monitoring. Control 8.26 (Application security requirements) maps to the policy bundle that defines what a product must satisfy before it ships. Control 8.28 (Secure coding) is partially supported, in that Griffin detects categories of insecure code patterns and records the enforcement decisions.

Control 8.16 (Monitoring activities) benefits from the continuous nature of Griffin's event stream. The population of monitoring events during any window is queryable, and each event has the full provenance chain attached.
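A window query over such an event stream, of the kind an 8.16 sample would exercise, could be sketched as follows. The event shape and field names are assumptions for illustration, not Griffin AI's documented event model.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Illustrative monitoring event with a provenance chain attached.
@dataclass
class MonitoringEvent:
    timestamp: datetime
    kind: str              # e.g. "gate_decision", "scan_completed"
    provenance: list[str]  # chain of sources that produced the event

def events_in_window(events, start, end):
    """Return the full event population for an audit window, oldest first."""
    return sorted(
        (e for e in events if start <= e.timestamp < end),
        key=lambda e: e.timestamp,
    )

events = [
    MonitoringEvent(datetime(2024, 5, 2, tzinfo=timezone.utc),
                    "scan_completed", ["scanner@1.8", "repo:payments"]),
    MonitoringEvent(datetime(2024, 5, 9, tzinfo=timezone.utc),
                    "gate_decision", ["policy:v3", "merge:a1b2c3"]),
]
window = events_in_window(events,
                          datetime(2024, 5, 1, tzinfo=timezone.utc),
                          datetime(2024, 5, 8, tzinfo=timezone.utc))
```

Because the population is queryable rather than reconstructed, the assessor can choose the window, not the auditee.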

Where Mythos-class tools struggle

The 2022 revision made evidence demands harder in practice because the controls are more specific. The older control 14.2.1 (Secure development policy) was broad enough to be satisfied by a policy document and a general narrative. The new control 8.25 expects evidence across the entire lifecycle, from planning through deployment and maintenance. A narrative alone does not survive the sampling.

Mythos-class pure-LLM tools can produce narratives about the lifecycle with fluency. They can discuss how the organization reviews code, manages dependencies, and handles vulnerabilities. What they cannot do is produce the underlying records. When the auditor asks, "Show me the twelve most recent merges to the production branch that touched a security-relevant path, and for each, show me the pre-merge gate decision," the Mythos-class tool has nothing to present. It keeps neither merges nor gate decisions as records.
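That auditor sample is, mechanically, a simple query over merge records. The sketch below shows the shape of it, assuming hypothetical merge records with `branch`, `paths`, `merged_at`, and `gate_decision` fields; none of these names come from either product.

```python
# Paths treated as security-relevant for this illustrative sample.
SECURITY_PATHS = ("auth/", "crypto/", "deploy/")

def sample_recent_merges(merges, n=12):
    """The N most recent production merges touching a security-relevant
    path, each paired with its recorded pre-merge gate decision."""
    relevant = [
        m for m in merges
        if m["branch"] == "production"
        and any(p.startswith(SECURITY_PATHS) for p in m["paths"])
    ]
    relevant.sort(key=lambda m: m["merged_at"], reverse=True)
    return [(m["sha"], m["gate_decision"]) for m in relevant[:n]]

merges = [
    {"sha": "a1b2c3", "branch": "production", "paths": ["auth/login.py"],
     "merged_at": "2024-05-09", "gate_decision": "pass"},
    {"sha": "d4e5f6", "branch": "production", "paths": ["docs/README.md"],
     "merged_at": "2024-05-10", "gate_decision": "pass"},
]
sample = sample_recent_merges(merges)
```

The query is trivial once the records exist; the entire difficulty for a Mythos-class tool is that they never existed.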

The same limitation appears in control 5.21. When the auditor asks, "Show me the onboarding evidence for the three suppliers most recently added to the ICT supply chain and the ongoing monitoring for each," a narrative cannot satisfy the sample. The evidence must be a record. Griffin produces one; Mythos-class tools do not.

The Statement of Applicability as a living document

ISO 27001 is explicit that the ISMS must be maintained over time, not merely stood up and certified. Surveillance audits, usually annual, check whether the ISMS is still operating as designed. Significant changes, whether in the scope, the threat landscape, or the organization, are supposed to trigger updates to the SoA and the accompanying evidence.

Griffin AI makes this continuous posture tractable. Policy changes emit versioned records. Gate configuration changes are tracked. New controls introduced during the year accumulate evidence naturally as they operate. The SoA update at the next surveillance audit is a collation task rather than a reconstruction task.
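The versioned-record behavior described above amounts to an append-only policy log: each change publishes a new immutable version rather than overwriting the old one. The sketch below is a minimal illustration of that pattern; the `PolicyLog` class and its methods are assumptions, not Griffin AI's API.

```python
import hashlib
import json

class PolicyLog:
    """Append-only log of policy versions; old versions stay recoverable."""

    def __init__(self):
        self.versions = []  # immutable records, one per published policy

    def publish(self, policy: dict) -> int:
        # Content hash lets a later audit verify the record was not altered.
        digest = hashlib.sha256(
            json.dumps(policy, sort_keys=True).encode()
        ).hexdigest()
        version = len(self.versions) + 1
        self.versions.append(
            {"version": version, "sha256": digest, "policy": policy}
        )
        return version

    def at(self, version: int) -> dict:
        """Recover the exact policy in force under a given version."""
        return self.versions[version - 1]["policy"]

log = PolicyLog()
v1 = log.publish({"gate": {"max_severity": "medium"}})
v2 = log.publish({"gate": {"max_severity": "low"}})
```

With this shape, "what policy was in force in March" is a lookup, not an archaeology project.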

Mythos-class tools place a heavier burden on the control owner. Because the tool itself is not a record of what happened, the owner must maintain a parallel documentation stream: tickets, screenshots, and narratives. Between surveillance audits, the parallel stream tends to thin out, and the next audit's evidence gathering becomes a scramble.

Risk treatment and the residual risk log

An ISMS is organized around risk treatment. Risks are identified, evaluated, and treated through controls selected from Annex A or from other sources. Each treated risk has a residual risk that must be accepted by the appropriate level of management. The residual risk log is one of the first documents a 27001 auditor asks for.

Griffin AI supports risk treatment through findings that carry risk metadata: severity, reachability, business impact, and likelihood. Treatment decisions, whether remediate, mitigate, accept, or transfer, are captured as state transitions with owner and rationale. The residual risk posture for a product is derivable from the current state of its findings.
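A finding whose treatment history is captured as state transitions, with a residual-risk view derived from current state, could be sketched like this. The states, field names, and the rule for what counts as residual are illustrative assumptions, not Griffin AI's data model.

```python
from dataclasses import dataclass, field

@dataclass
class Finding:
    finding_id: str
    severity: str                  # e.g. "high"
    state: str = "open"            # open | remediated | mitigated | accepted | transferred
    transitions: list = field(default_factory=list)

    def treat(self, new_state: str, owner: str, rationale: str):
        # Every treatment decision is an immutable transition record
        # carrying the decision owner and rationale.
        self.transitions.append({"from": self.state, "to": new_state,
                                 "owner": owner, "rationale": rationale})
        self.state = new_state

def residual_risks(findings):
    """Findings still carried as risk after treatment (illustrative rule:
    accepted or mitigated, i.e. treated but not eliminated)."""
    return [f for f in findings if f.state in ("accepted", "mitigated")]

f = Finding("VULN-101", "high")
f.treat("accepted", owner="ciso", rationale="compensating control in WAF")
ledger = residual_risks([f])
```

Because the transition log names an owner for each decision, the "accepted by the appropriate level of management" question is answerable from the record itself.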

Mythos-class tools can help an analyst think about risk, but they do not maintain a residual risk ledger. That gap shows up in the risk treatment review, where the auditor expects a consistent record of decisions and their approvers.

Transition considerations for mid-cycle organizations

Organizations certified under ISO 27001:2013 were required to transition to the 2022 revision by October 2025. Organizations still in their first full cycle under the new revision are discovering that Annex A's increased specificity surfaces tooling gaps that were easier to hide previously. Control 8.9 (Configuration management), in particular, catches organizations whose security tooling does not track its own configuration history.

Griffin AI's policy versioning is directly useful here. Every policy in force during the audit window is versioned, and the version under which each decision was made is part of the decision record. Control 8.9 evidence becomes a sampling against policy versions and decisions.
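Sampling decisions against policy versions is then a grouping operation over the decision records. The sketch below assumes hypothetical decision records that each carry a `policy_version` field, as the text describes; the field names are illustrative.

```python
from collections import defaultdict

# Hypothetical gate decisions, each stamped with the policy version
# under which the decision was made (field names are assumptions).
decisions = [
    {"merge": "a1b2c3", "verdict": "pass", "policy_version": 3},
    {"merge": "d4e5f6", "verdict": "fail", "policy_version": 3},
    {"merge": "0718aa", "verdict": "pass", "policy_version": 4},
]

# An 8.9 sample: which decisions were made under which policy version.
by_version = defaultdict(list)
for d in decisions:
    by_version[d["policy_version"]].append(d)
```

The auditor can then pick any version, pull the policy content in force under it, and check each decision in the group against that content.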

Mythos-class tools, lacking a configuration model for their own behavior, create an awkward gap: they are themselves part of the security stack, yet the auditor cannot assess whether the tool behaved consistently during the window, because the tool produces no configuration artifact that can be versioned.

The practical recommendation

ISO 27001:2022 certification is a commitment to producing evidence across 93 controls on an ongoing basis. The tools in the security stack need to contribute evidence, not narrative. Griffin AI emits records that the assessor can sample against the Annex A taxonomy directly. Mythos-class pure-LLM tools are helpful for reasoning through security questions, but they do not replace the structured event stream the ISMS needs. Use them together if it suits your workflow, but expect the evidence base of the SoA to come from Griffin, not from Mythos.
