AI Security

AI Models in Your Supply Chain: The Security Risks Nobody Talks About

AI/ML models are the new open source libraries. Here's why your supply chain security strategy needs to account for model provenance, poisoning, and compliance.

Safeguard Team
Research
3 min read

AI Models Are Supply Chain Components

Every time your application calls an AI model — whether it's a locally deployed LLM, a fine-tuned classifier, or a cloud API — you're consuming a supply chain component. And just like traditional software dependencies, these components carry risk.

Yet most organizations treat AI models differently from their other dependencies. They don't track model versions, don't verify model provenance, and don't monitor for model-specific vulnerabilities.

This gap is becoming critical as AI adoption accelerates.

The Risk Landscape

Model Poisoning

Attackers can tamper with training or fine-tuning data to embed malicious behaviors in models that otherwise appear to function normally:

  • Backdoor attacks — Models produce correct outputs for most inputs but behave maliciously for specific trigger patterns
  • Data poisoning — Corrupted training data degrades model accuracy over time
  • Fine-tuning attacks — Compromised fine-tuning datasets inject vulnerabilities into otherwise safe base models

Provenance Gaps

Unlike software packages with registries and checksums, model distribution lacks standardized provenance:

  • Where was the model trained?
  • What data was used?
  • Who modified it and when?
  • Has the model been tampered with since publication?

Compliance Requirements

Emerging regulations are beginning to address AI supply chain risks:

  • EU AI Act requires transparency about model origins and training data
  • NIST AI RMF provides a framework for AI risk management
  • Executive orders are expanding supply chain requirements to include AI components

Securing Your AI Supply Chain

Track Model Dependencies

Treat models like any other dependency:

  • Maintain an inventory of all AI models in use
  • Record model versions, sources, and checksums
  • Track which applications depend on which models
  • Monitor for known model vulnerabilities
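A minimal in-memory sketch of what such an inventory might look like (a real one would live in a database; the names here are illustrative, not from any existing tool):

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ModelRecord:
    """One inventory entry: identity, source, and checksum recorded at intake."""
    name: str
    version: str
    source: str   # registry URL or internal path
    sha256: str   # checksum of the artifact as received


class ModelInventory:
    """Tracks deployed models and which applications consume them."""

    def __init__(self):
        self._models = {}     # (name, version) -> ModelRecord
        self._consumers = {}  # (name, version) -> set of application names

    def register(self, record: ModelRecord) -> None:
        self._models[(record.name, record.version)] = record

    def add_consumer(self, name: str, version: str, app: str) -> None:
        self._consumers.setdefault((name, version), set()).add(app)

    def affected_apps(self, name: str, version: str) -> list:
        """Which applications must respond if this model is found vulnerable?"""
        return sorted(self._consumers.get((name, version), set()))
```

The key payoff is the last method: when a model-specific vulnerability is disclosed, you can immediately answer "what depends on this?", which is exactly the question SBOM tooling answers for traditional packages.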

Verify Provenance

Before deploying any model:

  • Verify the model's source and publication chain
  • Check for signed model cards or attestations
  • Validate checksums against published values
  • Review training data documentation
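Checksum validation is the step most easily automated. A sketch, assuming the publisher distributes a SHA-256 value alongside the model (via a signed model card, release notes, or your registry):

```python
import hashlib
import hmac


def verify_model(path: str, published_sha256: str) -> bool:
    """Hash a model artifact and compare it to the publisher's value.

    `published_sha256` is assumed to come from a trusted channel
    (e.g. a signed attestation) -- the comparison is only as strong
    as the source of the expected value.
    """
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in 1 MiB chunks so multi-gigabyte weights don't fill memory
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    # Constant-time comparison; avoids leaking how far the match got
    return hmac.compare_digest(digest.hexdigest(), published_sha256.lower())
```

A deployment pipeline would call this at intake and refuse to load any artifact that fails, the same gate you would apply to an unsigned package.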

Monitor for Drift

Models change over time, and so do their risk profiles:

  • Track model performance metrics for unexpected changes
  • Monitor for distribution shift in model outputs
  • Alert on model behavior anomalies
  • Maintain rollback capability for all deployed models
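One common way to quantify distribution shift in model outputs is the population stability index (PSI), which compares a baseline sample of scores against a current one. A self-contained sketch; the bin count, score range, and alert thresholds below are conventional rules of thumb, not fixed standards, and should be tuned per model:

```python
import math


def population_stability_index(expected, actual, bins=10, lo=0.0, hi=1.0):
    """PSI between baseline and current model scores in [lo, hi].

    Common rule of thumb (an assumption to tune per model):
    < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 investigate.
    """
    def histogram(values):
        counts = [0] * bins
        width = (hi - lo) / bins
        for v in values:
            # Clamp so out-of-range values land in an edge bin
            i = min(bins - 1, max(0, int((v - lo) / width)))
            counts[i] += 1
        total = max(1, len(values))
        # Floor empty bins so the log term stays finite
        return [max(c / total, 1e-6) for c in counts]

    e, a = histogram(expected), histogram(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

Running this periodically over a sliding window of production scores, and alerting above a threshold, covers the "monitor for distribution shift" and "alert on anomalies" points above with a few lines of code.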

Building an AI SBOM

The concept of a software bill of materials (SBOM) is expanding to include AI components. An AI SBOM should capture:

  1. Model identity — Name, version, publisher, license
  2. Training data — Sources, preprocessing steps, data cards
  3. Architecture — Model type, parameters, framework
  4. Dependencies — Runtime libraries, hardware requirements
  5. Evaluation — Performance benchmarks, bias assessments
  6. Provenance — Training pipeline, fine-tuning history, attestations
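The six fields above can be sketched as a record skeleton. The field names here are illustrative, loosely inspired by emerging formats like CycloneDX's machine-learning component properties, and would need to be mapped to whatever SBOM tooling you adopt:

```python
def ai_sbom_entry(**overrides):
    """Skeleton AI SBOM record mirroring the six fields above.

    Field names are illustrative (hypothetical schema, not a standard);
    pass keyword overrides to fill in sections.
    """
    entry = {
        "identity": {"name": "", "version": "", "publisher": "", "license": ""},
        "training_data": {"sources": [], "preprocessing": [], "data_cards": []},
        "architecture": {"type": "", "parameters": None, "framework": ""},
        "dependencies": {"runtime": [], "hardware": []},
        "evaluation": {"benchmarks": {}, "bias_assessments": []},
        "provenance": {"pipeline": "", "finetuning_history": [], "attestations": []},
    }
    entry.update(overrides)
    return entry
```

Even a flat record like this, serialized to JSON and stored next to the model artifact, gives auditors and incident responders far more to work with than an unversioned weights file.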

Safeguard.sh is building AI supply chain visibility into our platform, treating model dependencies with the same rigor as traditional software components.
