AI Security

CMMC Pass-Through: Griffin AI vs Mythos

CMMC 2.0 rollout has made flow-down expectations concrete. AI-for-security tools used by DIB contractors are in scope, and the pass-through story matters.

Nayan Dey
Senior Security Engineer
4 min read

CMMC 2.0 enforcement ramped through 2025 and is now the baseline for DIB contracts in 2026. Level 2 and Level 3 contractors have concrete expectations for how AI-for-security tools fit into their CUI-handling environments, and the flow-down requirements have sharpened: vendors whose tools touch CUI-adjacent data inherit parts of the customer's CMMC obligations. Griffin AI and Mythos-class general-purpose AI-for-security tools tell very different stories here, and the difference shows up during contract-clause negotiation rather than during the demo.

What CMMC expects of AI-for-security tooling

Five concrete asks from DIB customers operating at Level 2 or Level 3:

  • Deployment model. SaaS is acceptable with an appropriate ATO or FedRAMP inheritance; many DIB customers prefer on-prem or government-cloud deployment.
  • Data residency. CUI must remain in US infrastructure with documented access controls.
  • Incident response commitments. 72-hour notification; cooperation with customer IR processes.
  • Subcontractor management. Every downstream SaaS dependency of the vendor is in scope for DIB flow-down.
  • Evidence generation. SSP (System Security Plan) content, POA&M support, continuous monitoring artifacts.

Each is negotiable. Each needs a concrete answer during procurement.

Where Griffin AI's architecture helps

Three direct mappings:

On-prem and air-gapped deployment. Safeguard ships in SaaS, on-premises, and air-gapped forms. The on-prem deployment runs on customer-controlled infrastructure with customer-controlled frontier-model access (bring-your-own private model deployment). CMMC Level 3 customers can deploy without introducing new external dependencies.

Scoped data handling. The engine processes code and configuration locally. Griffin AI's LLM calls are customer-controlled — routed through customer-approved model endpoints with customer-scoped retention policies. CUI does not leave the customer perimeter without explicit routing configuration.
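As a sketch of what customer-controlled routing can look like in practice, the snippet below models an allowlist of approved model endpoints with a guard that refuses anything else. The endpoint URLs, class names, and configuration fields are hypothetical illustrations, not Safeguard's actual configuration schema.

```python
from dataclasses import dataclass

# Hypothetical allowlist of customer-approved model endpoints.
# URLs are illustrative placeholders, not real deployments.
APPROVED_ENDPOINTS = {
    "https://llm.internal.example.mil/v1",           # customer-hosted model
    "https://models.govcloud.example.com/v1",        # government-cloud model
}

@dataclass
class ModelRoute:
    endpoint: str
    retention_days: int = 0  # 0 = no vendor-side retention of prompts/outputs

def resolve_route(endpoint: str, retention_days: int = 0) -> ModelRoute:
    """Refuse any LLM endpoint that is not on the customer-approved list,
    so CUI cannot be routed outside the perimeter by misconfiguration."""
    if endpoint not in APPROVED_ENDPOINTS:
        raise ValueError(f"endpoint not customer-approved: {endpoint}")
    return ModelRoute(endpoint=endpoint, retention_days=retention_days)
```

The design point is that the deny-by-default check lives in the customer's configuration path, not in vendor policy.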

SSP and POA&M alignment. The platform produces evidence that maps to NIST 800-171 controls referenced in CMMC. Vulnerability management, access controls, audit logging, and incident response evidence are available as exports aligned to the customer's SSP structure.
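To make "evidence in the right shape" concrete, here is a minimal sketch of grouping tool findings under their NIST 800-171 control IDs so each control's evidence lands in one place for SSP insertion. The finding records and field names are invented for illustration and are not Safeguard's export format.

```python
import json
from collections import defaultdict

# Illustrative findings tagged with NIST 800-171 control IDs.
# 3.14.1 (flaw remediation) and 3.1.1 (access control) are real control
# numbers; the finding records themselves are hypothetical.
findings = [
    {"id": "F-101", "control": "3.14.1", "summary": "Patched CVE in base image"},
    {"id": "F-102", "control": "3.1.1",  "summary": "Least-privilege IAM policy applied"},
    {"id": "F-103", "control": "3.14.1", "summary": "Dependency upgrade verified"},
]

def export_by_control(findings):
    """Group evidence items under their NIST 800-171 control ID."""
    grouped = defaultdict(list)
    for f in findings:
        grouped[f["control"]].append({"id": f["id"], "summary": f["summary"]})
    return dict(grouped)

print(json.dumps(export_by_control(findings), indent=2, sort_keys=True))
```

An export shaped this way can be pasted into the SSP section for the matching control rather than reassembled by hand.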

Where Mythos-class tools struggle

Three common gaps:

SaaS-only deployment. Many Mythos-class tools run only as SaaS on the vendor's infrastructure, calling the vendor's chosen model provider. Handling CUI then requires either a FedRAMP-authorized deployment (which many general-purpose AI-for-security vendors don't yet have) or excluding the tool from CUI-handling workflows.

Model vendor chain opacity. When the tool calls a frontier model, the frontier model vendor is a subcontractor in flow-down terms. Documenting that subcontractor's CMMC posture is part of the customer's scope.

Evidence generation gap. Tools not designed around NIST 800-171 produce evidence in the wrong shape, leaving customers to reshape it manually into their SSP and POA&M formats.

A concrete contract conversation

A DIB prime contractor at Level 3 evaluates an AI-for-security vendor. The contracts team asks:

  1. Where is the vendor's infrastructure located? US? GovCloud?
  2. Which frontier model does the vendor call? Where is that model hosted?
  3. How is CUI prevented from flowing to the model vendor's systems?
  4. What is the vendor's CMMC assessment status? Third-party or self-assessment?
  5. Does the vendor flow down CMMC clauses to its own subcontractors?

Griffin AI's on-prem deployment model answers (1) and (2) cleanly. (3) is a configuration choice the customer controls. (4) and (5) are contractual and require vendor-specific answers but are achievable.

Mythos-class SaaS tools without GovCloud deployment fail (1) and (2) immediately. The conversation stops before the demo.

The sub-supplier chain

CMMC flow-down extends to the vendor's suppliers. An AI-for-security vendor whose model runs on a public frontier API has that API provider in the flow-down chain. The customer has to document the provider's CMMC or equivalent posture.

Customer-controlled model deployment closes this — the frontier model runs on customer infrastructure, and there is no external AI API in the chain.
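The flow-down logic above can be sketched as a simple derivation: given a deployment description, list the external parties the customer must document. The deployment dictionaries, field names, and vendor names here are hypothetical illustrations of the two architectures the article contrasts.

```python
# Hypothetical sketch: derive the CMMC flow-down chain from a deployment
# description. Field names and vendor names are illustrative only.

def flow_down_chain(deployment: dict) -> list:
    """External parties a DIB customer must document under flow-down."""
    chain = []
    if deployment.get("hosting") == "vendor-saas":
        chain.append(deployment["vendor"])          # the tool vendor itself
    if deployment.get("model_hosting") == "external-api":
        chain.append(deployment["model_provider"])  # frontier model vendor
    return chain

saas_tool = {
    "hosting": "vendor-saas", "vendor": "MythosCo",
    "model_hosting": "external-api", "model_provider": "PublicFrontierAPI",
}
on_prem = {"hosting": "customer", "model_hosting": "customer"}

print(flow_down_chain(saas_tool))  # vendor and model provider both in scope
print(flow_down_chain(on_prem))    # empty chain: no external AI dependency
```

The empty chain in the on-prem case is the whole argument: nothing external to document.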

What to evaluate

Three concrete checks during procurement:

  1. Demonstrate deployment in a GovCloud or on-prem environment without external API calls.
  2. Show the vendor's CMMC attestation or third-party assessment status in writing.
  3. Walk through the SSP-aligned evidence exports for a sample NIST 800-171 control.

The answers determine whether the vendor is DIB-ready or DIB-ambitious.

How Safeguard Helps

Safeguard's on-prem and air-gapped deployments remove the external-dependency chain that makes CMMC compliance hard for SaaS AI-for-security vendors. Customer-controlled frontier model endpoints keep CUI in the customer perimeter. Evidence exports align to NIST 800-171 controls and slot into SSP and POA&M structures directly. For DIB contractors whose CMMC posture affects contract eligibility, Safeguard is built to be the AI-for-security layer that fits rather than the layer that complicates.
