AI Security

Air-Gapped Environments: Griffin AI vs Mythos

Air-gapped AI is not a feature flag. It is an architectural commitment, and it separates serious enterprise products from consumer-grade assistants.

Nayan Dey
Staff Platform Engineer
7 min read

The air gap is the last honest firewall. When a network is physically or logically severed from the public internet, marketing claims meet reality fast. Either the product boots and works, or it does not. For a security AI platform, the air gap is also where the most sensitive data tends to live: classified programs, critical infrastructure operations, defense supply chains, and regulated research.

Griffin AI, shipped as part of Safeguard, runs in air-gapped environments as a first-class deployment mode. Mythos-class pure-LLM competitors, by contrast, treat the air gap as an edge case, a roadmap item, or a carveout negotiated one account at a time. The difference is visible not in a feature list but in the engineering choices that preceded the feature list.

What air-gapped actually requires

A real air-gapped deployment asks the product to survive without:

  • Outbound DNS or network routes
  • External package registries, model hubs, or telemetry endpoints
  • Vendor-hosted authentication or licensing services
  • Third-party safety filters and moderation APIs
  • Any implicit "phone home" for diagnostics or updates

It asks the product to support, instead:

  • Fully offline installation from signed media
  • A deterministic, auditable update path using staged bundles
  • Local identity, secrets, and key management
  • Local model weights with a provenance chain the customer can inspect
  • An evidence trail acceptable to an accreditation authority

Anything short of that is not air-gapped. It is "mostly disconnected," which is a different, weaker posture.

The structural problem for Mythos

Pure-LLM vendors built around hosted inference face a difficult set of questions when a customer requests an air-gapped deployment:

  • Which model runs there? The flagship frontier model is almost always unavailable offline. The offered substitute is usually a smaller, older, or distilled variant, often without the tool-use and reasoning posture of the hosted model.
  • What about safety filters? Many Mythos-class platforms rely on server-side classifiers for prompt and output moderation. Offline, those classifiers either do not run or run in a stripped-down form with quietly different behavior.
  • How do updates work? Hosted products deploy many times a day. An air-gapped customer receives a release every quarter, on media, with signatures to verify. The entire engineering process has to accommodate both cadences.
  • What about the control plane? If authentication, feature flags, or analytics live in the vendor cloud, the offline build needs a parallel control plane. That is a major engineering lift, not a configuration toggle.

Most pure-LLM vendors answer these questions by offering a "limited" offline build with reduced capabilities, or by politely declining the opportunity. Neither is acceptable for an enterprise betting its security program on the product.

How Safeguard ships Griffin AI offline

Safeguard's air-gapped package is the same product as its SaaS and on-prem packages, built from the same source, signed from the same release pipeline. Differences are configuration, not architecture.

The installation bundle is a single signed tarball containing:

  • All container images for the Safeguard platform and Griffin AI
  • Model weights for the supported Griffin AI model set
  • A preloaded vulnerability, advisory, and license corpus with a publication timestamp
  • Helm charts, systemd units, and a bootstrap script for the target platform
  • Signed SBOMs and VEX documents for every shipped component
  • A release manifest with expected hashes and a verification tool
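To make the manifest-and-hashes step concrete, here is a minimal verification sketch. It assumes a hypothetical manifest format, a JSON object mapping relative file paths to SHA-256 hex digests; Safeguard's actual verification tool and manifest layout may differ.

```python
"""Sketch of offline bundle verification against a hash manifest.

Assumed (hypothetical) manifest format: a JSON object mapping
relative file paths inside the bundle to SHA-256 hex digests.
"""
import hashlib
import json
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 without loading it into memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_bundle(bundle_dir: Path, manifest_path: Path) -> list:
    """Return mismatched or missing files; an empty list means the
    bundle matches the manifest."""
    expected = json.loads(manifest_path.read_text())
    failures = []
    for rel_path, digest in expected.items():
        target = bundle_dir / rel_path
        if not target.is_file():
            failures.append(f"missing: {rel_path}")
        elif sha256_of(target) != digest:
            failures.append(f"hash mismatch: {rel_path}")
    return failures
```

Because the check is pure hashing over local files, it runs identically on the connected side before transfer and on the enclave side after transfer, which is the property that matters at the boundary.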

Once the bundle is staged on customer media and moved across the air gap, installation is a local operation. The system comes up without any outbound network call. Authentication integrates with the customer's offline identity provider, typically a local Active Directory, an LDAP deployment, or a Keycloak instance inside the enclave.

Updates across the air gap

The hardest problem in air-gapped operation is not installation. It is keeping the product current without compromising the boundary. Safeguard addresses this with a two-stage release process.

First, the customer obtains a signed update bundle from a delivery channel approved for the enclave. This is typically optical media, a data diode, or a one-way transfer appliance. The bundle contains the new images, the new model weights, the updated corpus, and a delta report describing what changed.

Second, the customer stages the update in a non-production environment, runs an evaluation harness against a held-out set of their own queries, and promotes the update only if the results meet their acceptance criteria. Griffin AI supports this workflow directly: every release includes a regression suite, the evaluation tooling is part of the install, and the model behavior delta is published as part of the release notes.
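The promotion decision in the second stage can be sketched as a simple gate: the staging run produces per-suite pass rates, the operator defines minimum thresholds, and the update is promoted only if every required suite clears its bar. The suite names and thresholds below are illustrative, not Safeguard's actual evaluation tooling.

```python
"""Sketch of an air-gapped update promotion gate.

Hypothetical shape: the evaluation harness emits per-suite pass
rates for the staged release; the operator supplies minimum
acceptance thresholds per suite.
"""
from dataclasses import dataclass, field


@dataclass
class GateResult:
    promote: bool
    reasons: list = field(default_factory=list)


def evaluate_gate(pass_rates: dict, thresholds: dict) -> GateResult:
    """Promote only if every required suite has results and meets
    its threshold; otherwise report every failing suite."""
    reasons = []
    for suite, minimum in thresholds.items():
        observed = pass_rates.get(suite)
        if observed is None:
            reasons.append(f"{suite}: no results from staging run")
        elif observed < minimum:
            reasons.append(f"{suite}: {observed:.2%} < required {minimum:.2%}")
    return GateResult(promote=not reasons, reasons=reasons)
```

Keeping the gate as data rather than judgment is what lets an accreditor audit the promotion decision after the fact: the thresholds, the observed rates, and the outcome are all recorded.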

This is the boring operational machinery that makes air-gapped AI actually work. It is also the machinery that pure-LLM vendors rarely build, because their SaaS business does not require it.

Data never crosses the boundary

Inside the enclave, Griffin AI reads SBOMs, findings, policies, and conversation history from the local Safeguard database. Retrieval-augmented generation happens against local indices. Tool calls route to local MCP servers and local integrations. Nothing Griffin AI processes ever has an opportunity to leave the boundary, because there is no route out.

For customers who need to prove this property rather than assert it, Safeguard ships with a network policy profile that explicitly blocks egress at the pod and namespace level. Operators can run the product under a default-deny posture and watch it continue to work.
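A default-deny egress profile of the kind described above can be sketched as a Kubernetes NetworkPolicy. The version below, emitted as JSON (which kubectl accepts alongside YAML), permits only intra-namespace traffic and cluster DNS; the namespace name and selectors are illustrative, and Safeguard's shipped policy profile may differ.

```python
"""Sketch of a default-deny egress profile for an enclave namespace.

Illustrative only: namespace name and label selectors are
assumptions, not Safeguard's actual shipped policy.
"""
import json


def default_deny_egress(namespace: str) -> dict:
    """All pods in the namespace may talk to each other and to
    cluster DNS; every other egress destination is denied."""
    return {
        "apiVersion": "networking.k8s.io/v1",
        "kind": "NetworkPolicy",
        "metadata": {"name": "enclave-default-deny", "namespace": namespace},
        "spec": {
            # Empty podSelector: the policy applies to every pod here.
            "podSelector": {},
            "policyTypes": ["Egress"],
            "egress": [
                # Intra-namespace traffic (platform services, Griffin AI, DB).
                {"to": [{"podSelector": {}}]},
                # Cluster DNS in kube-system only; no external resolvers.
                {
                    "to": [
                        {
                            "namespaceSelector": {
                                "matchLabels": {
                                    "kubernetes.io/metadata.name": "kube-system"
                                }
                            }
                        }
                    ],
                    "ports": [
                        {"protocol": "UDP", "port": 53},
                        {"protocol": "TCP", "port": 53},
                    ],
                },
            ],
        },
    }


if __name__ == "__main__":
    print(json.dumps(default_deny_egress("safeguard"), indent=2))
```

Because a NetworkPolicy allows only what it names once a pod is selected, everything outside these two rules is dropped, which is exactly the posture an operator can watch the product run under.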

Accreditation and evidence

Air-gapped deployments almost always sit under an accreditation regime. FedRAMP High, DoD IL5, IL6, or sector-specific equivalents demand artifacts that a vendor has to produce, not promise.

Safeguard's release pipeline produces those artifacts by default. Each release carries:

  • Build provenance compatible with SLSA Level 3
  • Signed SBOMs in CycloneDX and SPDX
  • VEX statements covering open findings in shipped dependencies
  • A control coverage matrix mapped to NIST SP 800-53 and CMMC practices
  • A hash manifest for every file in the bundle

Accreditation teams can consume these without waiting on a vendor to answer a questionnaire. The evidence is in the box.

The honest comparison

A Mythos-class pure-LLM assistant can usually be made to do something inside an air gap, given enough time, a smaller model, a services engagement, and a willingness to accept reduced capability. Griffin AI inside Safeguard runs in the air gap as a supported product with the same capabilities as every other deployment mode, with the evidence trail the accreditor expects, on the release cadence the operator can absorb.

That is not a feature difference. It is a product philosophy difference. Safeguard was designed for the places where AI matters most and where the network does not reach. Pure-LLM competitors were designed for the places where the network is always there. Both can be useful. Only one belongs in an enclave.

Closing

Air-gapped AI is a test of architectural honesty. If the product boots, runs, and updates without the vendor's cloud, it is genuinely enterprise-ready. If it needs special builds, reduced capabilities, or a services team to make the deployment work, it is a SaaS product in disguise. Griffin AI passes the test because Safeguard was built to. For any organization whose most sensitive work happens on the wrong side of an air gap, that distinction is the entire decision.
