Supply Chain Attacks Targeting AI/ML Pipelines
AI and ML pipelines introduce unique supply chain risks, from poisoned training data to compromised model registries. Here is what attackers are targeting and how to defend against them.
Deep dives, practical guides, and incident analyses from engineers who build Safeguard. No fluff, no vendor FUD — just what you need to ship secure software.
Running an open-weight model inside an enterprise perimeter seems safer than calling a hosted API. It is, and it isn't. Here are the sandboxing patterns that actually deliver those safety properties.
The Model Context Protocol is transforming how AI agents interact with tools, but it introduces new attack surfaces. Here is what security teams need to understand.
Anthropic's Model Context Protocol standardizes how AI models interact with external tools. The security implications for software supply chains are significant.
AI-generated deepfakes are making social engineering attacks against software supply chains more convincing and harder to detect.
Automated vulnerability patching sounds ideal, until you consider what happens when the automation gets it wrong. Here's a realistic look at autonomous remediation.
Large language models have their own supply chains: training data, fine-tuning datasets, model weights, and serving infrastructure. Each layer introduces risk.
OWASP released its Top 10 for LLM Applications in August 2023, providing the first standardized framework for understanding and mitigating risks in AI-powered software.
Prompt injection attacks against large language models represent a dangerous new frontier in software supply chain security. Here's what defenders need to know.