Data residency for AI workloads was a preference in 2023. By 2026 it is a contractual requirement in most regulated-industry deployments and many European ones. The EU Data Act, India's Digital Personal Data Protection Act, sector regulations in financial services and healthcare, and national-security deployments all impose specific residency rules. AI-for-security procurement now has to answer data-residency questions before it can advance.
What residency actually requires
Four dimensions:
- Where model inference runs. Physical location of the GPUs.
- Where prompt and response data is stored. Logging, caching, audit.
- Where metadata lives. Access logs, usage metrics, billing data.
- Where backups sit. Often cross-region; often overlooked.
Each has its own rules under the applicable framework.
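The four dimensions above can be captured as a simple checklist. A minimal Python sketch, where the `ResidencyProfile` class, the region names, and the gap-checking logic are illustrative assumptions rather than any vendor's actual API:

```python
from dataclasses import dataclass, asdict


@dataclass(frozen=True)
class ResidencyProfile:
    """Region for each residency dimension (illustrative sketch)."""
    inference: str  # where model inference runs (physical GPU location)
    storage: str    # where prompt/response data is stored (logging, caching, audit)
    metadata: str   # access logs, usage metrics, billing data
    backup: str     # backup location; often cross-region, often overlooked


def residency_gaps(profile: ResidencyProfile, required_region: str) -> list[str]:
    """Return the dimensions whose region differs from the required one."""
    return [
        dim for dim, region in asdict(profile).items()
        if region != required_region
    ]


# Example: cross-region backups violate an EU-only requirement.
profile = ResidencyProfile(
    inference="eu-central-1",
    storage="eu-central-1",
    metadata="eu-central-1",
    backup="us-east-1",  # cross-region backup, the commonly overlooked dimension
)
print(residency_gaps(profile, required_region="eu-central-1"))  # ['backup']
```

The point of checking all four fields, not just inference, is that a deployment can pass on the visible dimensions while backups quietly replicate out of region.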
Where Griffin AI sits
Three deployment options:
- Regional SaaS. Safeguard operates in specific regions; customers select the region that matches their residency requirements.
- Private endpoint. A customer-controlled model endpoint (e.g., AWS Bedrock in a specific region) keeps inference out of the vendor's region entirely.
- On-premises. Residency is local by construction.
Each option produces specific residency documentation the customer can provide to regulators.
What to evaluate
Three questions:
- What regions does the vendor operate in, and can the customer select theirs?
- For each dimension (inference, storage, metadata, backup), where does the data go?
- Which residency commitments are contractual, and which are vendor assurances only?
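One way to structure vendor answers to these questions is a matrix of dimension → (region, commitment type). A hypothetical sketch, assuming a simple two-way split between contractual commitments and vendor assurances; the function name and sample answers are illustrative, not real vendor data:

```python
# Hypothetical evaluation matrix: dimension -> (region, commitment type).
# "contractual" means the region is bound in the contract or DPA;
# "assurance" means it rests on vendor statements only.
vendor_answers = {
    "inference": ("eu-central-1", "contractual"),
    "storage":   ("eu-central-1", "contractual"),
    "metadata":  ("eu-central-1", "assurance"),
    "backup":    ("unspecified",  "assurance"),
}


def weak_points(answers: dict[str, tuple[str, str]]) -> list[str]:
    """Flag dimensions not backed by a contractual regional commitment."""
    return [
        dim for dim, (region, commitment) in answers.items()
        if region == "unspecified" or commitment != "contractual"
    ]


print(weak_points(vendor_answers))  # ['metadata', 'backup']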
How Safeguard helps
Safeguard's residency documentation covers all four dimensions across deployment options. Customers get the specific artefacts their regulators ask for. For enterprise AI-for-security procurement under residency constraints, this clarity separates finalist-grade vendors from also-rans.