Data Residency Controls: Griffin AI vs Mythos

Data residency is no longer a procurement checkbox. It is an architectural property that most pure-LLM vendors cannot deliver without major rework.

Nayan Dey
Staff Platform Engineer
7 min read

Data residency used to be a line in a vendor questionnaire. Today it is a board-level concern, a national policy lever, and an increasingly common cause of failed vendor selections. Regulators in the EU, UK, Canada, Australia, the UAE, India, Japan, and Brazil have each moved to codify where sensitive data may be stored and processed. Enterprise buyers have followed, adding residency requirements to master agreements as a default.

For a security AI platform, the residency question is especially pointed. The inputs to Griffin AI include SBOMs, vulnerability backlogs, source repository metadata, incident timelines, and personnel-linked audit data. These are the crown jewels of a security program, and their location matters in ways that extend well beyond privacy law.

This post examines what residency actually requires, how Safeguard implements it for Griffin AI, and why Mythos-class pure-LLM competitors consistently struggle to meet the bar.

What "data residency" means in practice

A useful residency model distinguishes three data classes and asks where each lives:

  • Data at rest. Persistent storage of customer records, configuration, conversation history, and audit logs.
  • Data in transit. Bytes moving between the customer, the product's control plane, and any downstream services.
  • Data in use. The transient memory of a running inference request, including the prompt, the retrieved context, and the intermediate state of tool calls.

Residency rules vary, but the most demanding regimes require all three classes to remain inside a defined jurisdiction, often down to a specific region or even a specific data center. "Stored in region" is no longer sufficient when inference routes through a global fleet.
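
To make the model concrete, here is a minimal sketch of a per-request residency check across the three classes. The class names, regions, and event structure are hypothetical illustrations of the concept, not Safeguard's internal schema:

    # Hypothetical sketch of a residency check over the three data classes.
    # Names and regions are illustrative, not Safeguard's actual schema.
    from dataclasses import dataclass
    from enum import Enum


    class DataClass(Enum):
        AT_REST = "at_rest"        # persistent stores, audit logs, history
        IN_TRANSIT = "in_transit"  # network paths between customer and control plane
        IN_USE = "in_use"          # prompt, retrieved context, tool-call state


    @dataclass(frozen=True)
    class ResidencyEvent:
        data_class: DataClass
        region: str  # region where the bytes were stored, moved, or processed


    def violations(events: list[ResidencyEvent], required_region: str) -> list[ResidencyEvent]:
        """Return every event that left the required region, regardless of class."""
        return [e for e in events if e.region != required_region]


    if __name__ == "__main__":
        trace = [
            ResidencyEvent(DataClass.AT_REST, "eu-central-1"),
            ResidencyEvent(DataClass.IN_TRANSIT, "eu-central-1"),
            ResidencyEvent(DataClass.IN_USE, "us-east-2"),  # inference served out of region
        ]
        for v in violations(trace, "eu-central-1"):
            print(f"residency violation: {v.data_class.value} touched {v.region}")

The point of the sketch is the last event: a tenant can pass the at-rest and in-transit checks and still fail residency the moment a single inference request is served elsewhere.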

Why Mythos-class vendors find this hard

Pure-LLM security assistants built on centralized inference have a structural residency problem. Their economics depend on concentrating GPU capacity, and that capacity clusters in the regions where it is cheapest and most available. That is rarely the region the customer's lawyer requires.

Three patterns recur:

  • Global inference routing. A request submitted from Frankfurt may be served by capacity in Ohio. The marketing page says "regional," the traceroute says otherwise. Vendors address this with contractual commitments rather than network controls, which regulators increasingly consider insufficient.
  • Telemetry and safety filters. Even when the primary inference is regional, secondary services for logging, moderation, abuse detection, and feature-flag evaluation frequently route globally. These services see prompt content, which means prompt content has crossed the boundary.
  • Vendor-operated control plane. Authentication, feature configuration, and tenant metadata often live in a single global control plane. Customer configuration, which often embeds sensitive labels, sits outside the residency boundary.

A vendor can address each of these with engineering effort. The problem is that each fix treats a symptom rather than the cause, and the cause is that the product was built to run centrally.
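
Buyers do not have to take the routing claim on faith. A crude but useful probe, run from a host inside the required region, is to time the TCP handshake against the vendor's regional inference endpoint; a median far above in-region latency is hard to square with in-region termination. The hostname and threshold below are invented for illustration:

    # Rough latency probe against a vendor's regional inference endpoint.
    # "eu.inference.example.com" is a placeholder; run this from a host inside
    # the region the contract names. A handshake far above the in-region budget
    # suggests the request is terminating on another continent.
    import socket
    import statistics
    import time

    ENDPOINT = ("eu.inference.example.com", 443)  # hypothetical regional endpoint
    IN_REGION_BUDGET_MS = 30.0                    # rough intra-EU round-trip budget


    def handshake_ms(host: str, port: int) -> float:
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass
        return (time.perf_counter() - start) * 1000.0


    if __name__ == "__main__":
        samples = [handshake_ms(*ENDPOINT) for _ in range(5)]
        median = statistics.median(samples)
        print(f"median TCP handshake: {median:.1f} ms")
        if median > IN_REGION_BUDGET_MS:
            print("latency is inconsistent with in-region termination; ask for a traceroute")

Latency is only a heuristic, but it is a cheap first question to ask before requesting network diagrams in diligence.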

How Safeguard handles residency

Safeguard was designed to run wherever the customer needs it to run. The SaaS offering is available in multiple regions with strict network isolation; the on-prem offering runs entirely inside the customer's infrastructure; the air-gapped offering runs in the customer's enclave. Griffin AI participates in all three modes with the same capabilities.

A few design choices carry the weight:

  • Regional SaaS tenants are independent. Each region is a separate Safeguard deployment with its own control plane, its own data stores, and its own inference capacity. There is no cross-region data path at runtime.
  • Inference stays local. Griffin AI's models run on capacity inside the tenant's region. A prompt issued in the EU tenant is served by EU capacity against EU data. There is no fallback path to another region.
  • Telemetry is minimal and regional. Operational telemetry is limited to what the region itself needs to run. No customer content, prompt content, or retrieved context is shipped outside the region.
  • On-prem is the definitive answer. For customers whose rules do not accept any vendor operation of the tenant, Safeguard on-prem places every component under customer control. Residency becomes a property the customer configures directly.
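
As a rough sketch of what the first three choices imply, consider a tenant description in which every dependency must name the tenant's own region and any cross-region inference fallback is rejected outright. The field names are illustrative, not Safeguard's configuration format:

    # Hypothetical regional-tenant description with a guard that refuses any
    # dependency (control plane, data store, inference, telemetry) outside the
    # tenant's own region. Field names are illustrative only.
    from dataclasses import dataclass, field


    @dataclass
    class RegionalTenant:
        region: str
        control_plane_region: str
        data_store_region: str
        inference_region: str
        telemetry_region: str
        fallback_inference_regions: list[str] = field(default_factory=list)

        def validate(self) -> None:
            deps = {
                "control plane": self.control_plane_region,
                "data store": self.data_store_region,
                "inference": self.inference_region,
                "telemetry": self.telemetry_region,
            }
            for name, region in deps.items():
                if region != self.region:
                    raise ValueError(f"{name} runs in {region}, outside tenant region {self.region}")
            if self.fallback_inference_regions:
                raise ValueError("cross-region inference fallback defeats residency")


    if __name__ == "__main__":
        RegionalTenant(
            region="eu-central-1",
            control_plane_region="eu-central-1",
            data_store_region="eu-central-1",
            inference_region="eu-central-1",
            telemetry_region="eu-central-1",
        ).validate()
        print("tenant topology is self-contained")

The design choice the sketch encodes is the absence of a fallback list: residency that survives a capacity crunch only does so when there is no other region to fall back to.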

Residency at the AI layer

The AI layer deserves specific attention because it is where many products leak data across a boundary in ways customers do not notice.

A prompt to Griffin AI carries three streams of content: the user's message, the retrieved context pulled from the Safeguard database, and the tool-call outputs that enrich the answer. A faithful residency implementation requires all three to remain inside the boundary for the lifetime of the request.

Safeguard achieves this by running Griffin AI's retrieval pipeline, model inference, and tool execution inside the same tenant as the data. The retrieval layer reads from local indices, the model reads from local weights, and tool calls route to local integrations. A Griffin AI request and its response never traverse a region edge.
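
One way to picture "for the lifetime of the request" is a per-request guard that records where each stage ran and fails closed the moment any stage leaves the boundary. This is a hypothetical sketch of the idea, not Griffin AI's actual request pipeline:

    # Hypothetical per-request residency guard: each stage of a Griffin-style
    # request (retrieval, inference, tool call) records the region it ran in,
    # and the request fails closed if any stage left the tenant boundary.
    from contextlib import contextmanager


    class ResidencyBreach(RuntimeError):
        pass


    @contextmanager
    def request_boundary(tenant_region: str):
        stages: list[tuple[str, str]] = []

        def record(stage: str, region: str) -> None:
            if region != tenant_region:
                raise ResidencyBreach(f"{stage} executed in {region}, outside {tenant_region}")
            stages.append((stage, region))

        yield record
        print(f"request completed; all {len(stages)} stages stayed in {tenant_region}")


    if __name__ == "__main__":
        with request_boundary("eu-central-1") as record:
            record("retrieval", "eu-central-1")   # local indices
            record("inference", "eu-central-1")   # local weights
            record("tool_call", "eu-central-1")   # local integrations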

Mythos-class vendors that outsource embedding generation, moderation, or tool execution to shared services cannot make the same claim. The prompt, the embeddings, or the tool outputs typically leave the region at some point in the lifecycle. Customers who have asked for traces have not always liked what they found.

Sovereignty, not just residency

Residency is a geographic question. Sovereignty is a jurisdictional one. A dataset physically stored in Frankfurt can still fall under the legal reach of a foreign regulator if the operator is subject to that regulator's authority. For a growing set of European, Middle Eastern, and Asia-Pacific customers, sovereignty, not just residency, is the requirement.

Safeguard addresses sovereignty by offering customer-hosted and partner-hosted deployments. The customer's own legal entity, or a local partner's legal entity, operates the tenant. The vendor's role is product delivery, not operations. Griffin AI runs under the local operator's control, with local logs, local keys, and local access.

This is a strong answer to a hard question, and it is available because Safeguard was architected to be operable by parties other than its maker. Mythos-class vendors whose business model depends on operating the product themselves have no equivalent offering.

Keys, logs, and the rest

Residency is not only about the primary data store. A credible residency posture addresses:

  • Encryption keys. Managed in the region, rooted in a regional KMS, with optional customer-managed keys held in the customer's own HSM.
  • Backups. Stored in the region, with restore tested regionally.
  • Audit logs. Written in the region, shippable to the customer's SIEM in the region.
  • Conversation history. Retained in the region under the customer's retention policy, with the option to disable retention entirely.
  • Exports. Produced in the region, delivered over channels that do not cross the boundary.

Safeguard handles each of these. The settings are exposed in the product, not buried in a contract addendum.
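
As a hedged illustration of what "exposed in the product" could look like for these controls, the sketch below lays the supporting surfaces out as a single regional settings object with a check that flags anything outside the tenant region. The setting names are invented for this post, not Safeguard's actual configuration surface:

    # Hypothetical residency settings for the supporting surfaces: keys, backups,
    # audit logs, conversation history, and exports. Setting names are invented;
    # the point is that each control is regional and customer-visible.
    RESIDENCY_SETTINGS = {
        "region": "eu-central-1",
        "encryption": {
            "kms_region": "eu-central-1",
            "customer_managed_key": True,      # key held in the customer's own HSM
        },
        "backups": {
            "region": "eu-central-1",
            "restore_tested_in_region": True,
        },
        "audit_logs": {
            "region": "eu-central-1",
            "siem_destination": "https://siem.customer.example/eu",  # in-region SIEM
        },
        "conversation_history": {
            "region": "eu-central-1",
            "retention_days": 30,              # or 0 to disable retention entirely
        },
        "exports": {
            "region": "eu-central-1",
            "delivery_channel": "in-region object storage",
        },
    }


    def out_of_region(settings: dict, region: str) -> list[str]:
        """List any control whose region differs from the tenant region."""
        return [
            name for name, cfg in settings.items()
            if isinstance(cfg, dict) and cfg.get("region", cfg.get("kms_region")) != region
        ]


    if __name__ == "__main__":
        print(out_of_region(RESIDENCY_SETTINGS, RESIDENCY_SETTINGS["region"]))  # expect []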

The procurement test

A short diligence pass will separate vendors that have done the work from vendors that have written it on a slide:

  • Is inference served exclusively by capacity in the customer's region?
  • Do telemetry, moderation, and feature-flag services all stay in-region?
  • Are encryption keys regional, with optional customer-managed key support?
  • Do conversation history, embeddings, and tool outputs remain in the region for the entire request lifecycle?
  • Is a customer-hosted or partner-hosted deployment available for sovereignty-sensitive cases?

Griffin AI inside Safeguard passes the test. Pure-LLM competitors routinely fail at least two of these questions under close examination.
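
For teams that keep diligence artifacts in an evaluation repo, the same five questions can live as a simple scored checklist. This is a sketch with illustrative answers, not a record of any vendor's real results:

    # The five diligence questions above as a simple scored checklist. Answers
    # are recorded by the buyer during evaluation; any "no" is a finding.
    QUESTIONS = [
        "Inference served exclusively by capacity in the customer's region",
        "Telemetry, moderation, and feature-flag services all stay in-region",
        "Encryption keys regional, with optional customer-managed key support",
        "History, embeddings, and tool outputs in-region for the request lifecycle",
        "Customer-hosted or partner-hosted deployment available",
    ]


    def score(answers: dict[str, bool]) -> None:
        findings = [q for q in QUESTIONS if not answers.get(q, False)]
        print(f"{len(QUESTIONS) - len(findings)}/{len(QUESTIONS)} residency checks pass")
        for q in findings:
            print(f"  finding: {q}")


    if __name__ == "__main__":
        # Illustrative answers for a vendor under evaluation, not real results.
        score({q: True for q in QUESTIONS[:3]})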

Closing

Residency has moved from fine print to first principle. An enterprise security AI that cannot guarantee where its inference runs, where its logs live, and where its prompts travel is not an enterprise security AI. Safeguard was built with residency as a design constraint, which is why Griffin AI can deliver it in SaaS, on-prem, and sovereign configurations without caveat. For customers whose regulators, lawyers, and boards now ask the residency question first, that difference is the decision.
