Deploying an open-weight model means inheriting its supply chain. The model weights have a hash. The training data has provenance. The fine-tuning process has a pipeline. Each is part of the AI-BOM you now maintain. For organisations that adopted open weights to reduce vendor dependency, the unexpected outcome is often that they have traded one supply chain for another, and the second is less well documented.
What the open-weight supply chain includes
Five components:
- Base model weights. Hash-verifiable; provenance back to the publisher (Meta, Mistral, etc.).
- Training data. Publisher-documented to varying degrees.
- Fine-tuning data and recipes (if you fine-tuned).
- Inference serving stack. vLLM, TGI, or similar — itself a supply chain of open-source code.
- GPU driver and kernel support. Platform-level dependencies.
Each is an attack surface. The 2024 Hugging Face incidents, in which security researchers found malicious pickle-serialized models hosted on the hub, illustrate the point concretely.
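The first component above, hash-verifiable weights, is the easiest to check in practice: a standard digest comparison against the publisher's published hash. A minimal sketch (the file path and digest in the usage comment are placeholders, not real published values):

```python
import hashlib


def sha256_of(path: str) -> str:
    """Stream a (potentially multi-gigabyte) weight file through SHA-256."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in 1 MiB chunks so large weight files never load fully into memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_weights(path: str, expected_digest: str) -> bool:
    """Compare the local file's digest to the publisher's hash (case-insensitive)."""
    return sha256_of(path) == expected_digest.lower()


# Usage (placeholder path and digest; substitute the publisher's values):
# ok = verify_weights("models/base.safetensors", "3b0c44298fc1...")
```

This only proves the bytes match what the publisher signed off on; it says nothing about the training data behind those bytes, which is why the next two components exist in the BOM.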
What Griffin AI's model supply chain looks like
Simpler, because the model is managed by a frontier vendor:
- Model. Anthropic's Claude, with published model cards.
- Version pinning. Griffin AI pins specific Claude versions.
- Infrastructure. Anthropic's or the private-endpoint provider's, attested via SOC 2 and similar.
The customer's AI-BOM for Griffin AI is smaller because the vendor owns more of the supply chain.
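The size difference becomes concrete when each AI-BOM is written down as data. A sketch with illustrative entries and owners (these are assumptions for illustration, not an authoritative inventory of either deployment):

```python
# Each entry: (component, party responsible for securing it).
open_weight_bom = [
    ("base model weights", "customer"),
    ("training data provenance", "customer"),
    ("fine-tuning data and recipes", "customer"),
    ("inference serving stack (vLLM/TGI)", "customer"),
    ("GPU drivers and kernel support", "customer"),
]

griffin_bom = [
    ("Claude model version (pinned)", "vendor"),
    ("serving infrastructure", "vendor"),
    ("platform attestations (SOC 2 etc.)", "vendor"),
]

# Components you must secure yourself under each option.
self_secured = [c for c, owner in open_weight_bom if owner == "customer"]
print(f"open-weight components to secure yourself: {len(self_secured)}")
```

The point is not the exact counts, which vary by deployment, but who holds the "customer" label on each row.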
Where this matters
Three specific scenarios:
- Audit. An auditor asks "what's in the AI-BOM?" A simpler BOM produces a faster audit.
- Incident response. A supply chain compromise of the model vendor is handled by the vendor; a compromise of your in-house AI stack is handled by you.
- Regulatory compliance. EU AI Act documentation obligations are proportional to the scope of what you deploy.
For most customers, "smaller AI supply chain to secure" is a feature, not a limitation.
When owning the full supply chain is worth it
Two cases:
- Classified environments where no vendor relationship is acceptable.
- Research organisations whose value proposition includes model-level innovation.
For most commercial security deployments, neither applies, and Griffin AI's simpler supply chain is preferable.
What to evaluate
Three questions:
- What does your AI-BOM look like under each deployment option?
- Who is responsible for securing each component?
- What happens to each component during an incident involving the model layer?
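The third question, incident scope, can be answered mechanically from a component-to-owner mapping: filter for what you, rather than the vendor, must respond to. A sketch with hypothetical component names:

```python
def customer_incident_scope(bom: dict[str, str]) -> list[str]:
    """Return the components the customer must handle in a model-layer incident."""
    return [component for component, owner in bom.items() if owner == "customer"]


# Hypothetical in-house open-weight deployment: every layer is yours.
in_house = {
    "base model weights": "customer",
    "fine-tuning pipeline": "customer",
    "inference server": "customer",
}

# Hypothetical managed deployment: the model layer is the vendor's problem.
managed = {
    "model layer": "vendor",
    "platform attestations": "vendor",
    "prompt/config layer": "customer",
}

print(customer_incident_scope(in_house))
print(customer_incident_scope(managed))
```

Running this for each deployment option you are evaluating gives a direct answer to all three questions at once: the BOM is the dict, the responsible party is the value, and the incident scope is the filtered list.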
How Safeguard Helps
Safeguard's Griffin AI inherits Anthropic's model supply chain documentation and adds platform-level attestations on top. The customer's AI-BOM for Griffin AI is documented and minimal. For organisations whose security programmes increasingly scope in the AI-BOM, that simpler supply chain is an architectural property that reduces overall exposure.