Google operates one of the largest software supply chains on earth. Billions of lines of code, hundreds of thousands of dependencies, thousands of engineers committing code daily. When Google talks about supply chain security, they're not theorizing. They're describing what works at a scale most organizations will never reach, and what breaks along the way.
Understanding Google's approach matters not because every company should copy it wholesale, but because the principles behind their systems translate to organizations of any size.
The Monorepo Advantage
Google famously stores nearly all of its code in a single monolithic repository. This is often discussed as an engineering culture choice, but it has profound security implications.
In a monorepo, there's one version of every internal library. When a vulnerability is found in an internal component, fixing it means fixing it once. There's no hunting across dozens of repositories to find every copy of the vulnerable code. The dependency graph is explicit and queryable.
This doesn't eliminate supply chain risk, of course. Third-party dependencies still enter the monorepo, and those carry their own risks. But it dramatically simplifies the question of "where is this code running?" which, as SolarWinds showed, is often the hardest question to answer during an incident.
Most organizations can't adopt a monorepo at Google's scale. But the underlying principle of centralizing dependency management and maintaining a single source of truth for software components applies universally.
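The practical payoff of an explicit, queryable dependency graph is answering "who is affected?" in seconds. A minimal sketch of that query, using hypothetical build-target names and a plain reverse-dependency walk (real monorepo tooling like Bazel's `rdeps` does this over a much richer graph):

```python
from collections import deque

def affected_targets(dep_graph: dict[str, set[str]], vulnerable: str) -> set[str]:
    """Walk the reverse-dependency graph to find every target that
    transitively depends on the vulnerable component."""
    # Invert the graph: for each edge target -> dep, record dep is used by target.
    used_by: dict[str, set[str]] = {}
    for target, deps in dep_graph.items():
        for dep in deps:
            used_by.setdefault(dep, set()).add(target)

    affected, queue = set(), deque([vulnerable])
    while queue:
        node = queue.popleft()
        for consumer in used_by.get(node, ()):
            if consumer not in affected:
                affected.add(consumer)
                queue.append(consumer)
    return affected

# Hypothetical internal build targets and their direct dependencies.
graph = {
    "//billing/service": {"//common/json"},
    "//ads/frontend": {"//billing/service"},
    "//common/json": set(),
}
print(sorted(affected_targets(graph, "//common/json")))
# → ['//ads/frontend', '//billing/service']
```

In a polyrepo world, building this same map means first aggregating dependency data from every repository, which is exactly the work the monorepo makes unnecessary.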
Binary Authorization: Trust at the Point of Deployment
Binary Authorization is Google's mechanism for ensuring that only properly built and reviewed code gets deployed to production. It works as an admission control system: before a container image runs in production, Binary Authorization verifies that it was built by an authorized build system, from code that passed required reviews, and hasn't been tampered with since it was built.
The system works through attestations. Each step in the build and review process creates a cryptographically signed attestation. At deployment time, the system checks that all required attestations are present and valid. If any are missing or invalid, the deployment is blocked.
This is the build integrity problem solved at scale. The concept maps directly to what SLSA (which Google created and open-sourced) tries to standardize for the broader industry.
The key insight is that Binary Authorization doesn't trust the binary itself. It trusts the process that created the binary. If the process is well-defined and each step is attested, the output is trustworthy. Break any step in the process and the attestation chain breaks.
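The admission-control idea can be sketched in a few lines. This is a toy model, not Binary Authorization's actual implementation: it uses shared-secret HMAC signatures where the real system uses asymmetric keys in a KMS, and the attestor names and policy are invented for illustration:

```python
import hashlib
import hmac

# Hypothetical signing keys, one per attestor (production systems would
# use asymmetric keys managed by a KMS, never shared secrets).
KEYS = {"builder": b"builder-key", "reviewer": b"reviewer-key"}
REQUIRED = {"builder", "reviewer"}  # policy: attestations a deploy must carry

def sign(attestor: str, image_digest: str) -> str:
    """Create an attestation: a signature over the image digest."""
    return hmac.new(KEYS[attestor], image_digest.encode(), hashlib.sha256).hexdigest()

def admit(image_digest: str, attestations: dict[str, str]) -> bool:
    """Admission check: every required attestation must be present and
    its signature must verify against the image digest."""
    for attestor in REQUIRED:
        sig = attestations.get(attestor)
        if sig is None or not hmac.compare_digest(sig, sign(attestor, image_digest)):
            return False
    return True

digest = "sha256:abc123"
good = {a: sign(a, digest) for a in REQUIRED}
print(admit(digest, good))                          # True: all attestations valid
print(admit(digest, {"builder": good["builder"]}))  # False: review attestation missing
```

The structure mirrors the prose: the binary is never trusted directly; only the set of signed claims about how it was produced is evaluated.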
Hermetic Builds
Google's builds are hermetic, meaning they depend only on explicitly declared inputs. A hermetic build should produce the same output given the same inputs, regardless of when or where it runs. No network access during builds, no undeclared dependencies sneaking in, no reliance on system state.
Why does this matter for security? Because non-hermetic builds have implicit dependencies. If your build process can reach the internet, it can pull in a compromised package without anyone declaring that dependency. If it depends on system state, that state can be manipulated.
Hermetic builds make the dependency graph explicit and complete. Every input is known, every output is reproducible. This is the foundation that makes Binary Authorization possible. You can't meaningfully attest to a build process if that process has undefined inputs.
Achieving fully hermetic builds is hard. Google invested years of engineering effort. But partial hermeticity still has value. Locking dependencies to specific versions and hashes, restricting network access during builds, and declaring all build inputs explicitly are steps any organization can take.
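One of those incremental steps, declaring build inputs with content hashes and verifying them before building, can be sketched directly. The manifest format and function names here are illustrative, not any particular build system's API:

```python
import hashlib
import tempfile
from pathlib import Path

def sha256_of(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def verify_inputs(manifest: dict[Path, str]) -> list[str]:
    """Return a list of violations: declared inputs that are missing or
    whose contents differ from the declared hash."""
    errors = []
    for path, expected in manifest.items():
        if not path.exists():
            errors.append(f"missing input: {path}")
        elif sha256_of(path) != expected:
            errors.append(f"hash mismatch for {path}")
    return errors

# Demo: declare one input, then simulate tampering with it.
with tempfile.TemporaryDirectory() as tmp:
    src = Path(tmp) / "main.c"
    src.write_text("int main(void) { return 0; }\n")
    manifest = {src: sha256_of(src)}
    assert verify_inputs(manifest) == []              # inputs match declarations
    src.write_text("int main(void) { return 1; }\n")  # simulated tampering
    print(verify_inputs(manifest))                    # reports the mismatch
```

Package managers already expose this pattern: pip's `--require-hashes` mode and lockfiles in npm, Cargo, and Go modules all pin dependencies to exact content hashes.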
Dependency Management at Scale
Google's approach to third-party dependencies involves several layers:
Vendoring with review. Third-party code is imported into the monorepo through a controlled process. New dependencies require review and approval. Updates are similarly controlled.
Automated vulnerability scanning. Imported dependencies are continuously scanned for known vulnerabilities. When a vulnerability is found, the relevant teams are notified and patches are prioritized based on severity and exposure.
License compliance. Every third-party component is tracked for license obligations. This isn't directly a security concern, but the infrastructure license tracking requires (knowing exactly which third-party code you use, and where) is the same infrastructure needed for security tracking.
Fuzz testing. Google runs large-scale fuzzing campaigns against both internal and third-party code. OSS-Fuzz, their open-source fuzzing service, has found thousands of vulnerabilities in open-source projects before they were exploited in the wild.
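The automated-scanning layer reduces to matching an inventory of installed versions against a feed of advisories with affected version ranges. A minimal offline sketch, with an invented advisory feed and simplified dotted-version comparison (real scanners query live databases such as OSV and handle far messier versioning schemes):

```python
def parse(v: str) -> tuple[int, ...]:
    """Parse a simple dotted version string for comparison (sketch only)."""
    return tuple(int(part) for part in v.split("."))

# Hypothetical advisory feed: package -> list of (introduced, fixed, advisory id).
ADVISORIES = {
    "libfoo": [("0.0.0", "1.4.2", "EXAMPLE-ADV-1")],
}

def scan(deps: dict[str, str]) -> list[str]:
    """Flag dependencies whose installed version falls inside a vulnerable range."""
    findings = []
    for name, version in deps.items():
        for introduced, fixed, advisory in ADVISORIES.get(name, []):
            if parse(introduced) <= parse(version) < parse(fixed):
                findings.append(f"{name} {version}: {advisory}, fixed in {fixed}")
    return findings

print(scan({"libfoo": "1.3.0", "libbar": "2.0.0"}))
# → ['libfoo 1.3.0: EXAMPLE-ADV-1, fixed in 1.4.2']
```

The hard part in practice isn't this matching logic; it's keeping the inventory (the `deps` dict here) complete and current, which is why the SBOM comes first.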
SLSA: Exporting Google's Model
SLSA (Supply-chain Levels for Software Artifacts) is Google's attempt to codify their internal practices into a framework that any organization can adopt. The original specification defines four levels of increasing rigor (the later SLSA v1.0 release reorganized these into a three-level Build track, but the four-level model remains the clearest illustration of the idea):
- SLSA 1: Documented build process and provenance.
- SLSA 2: Hosted build service and signed provenance.
- SLSA 3: Hardened build platform with tamper-resistant provenance.
- SLSA 4: Hermetic, reproducible builds with two-person review.
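The levels above are cumulative, which makes evaluating an artifact's level a simple policy check. A sketch under that framing, using an invented, simplified property vocabulary rather than the official requirement lists:

```python
# Hypothetical, simplified mapping of the four SLSA levels to the
# provenance properties each one requires (cumulative; this is a sketch
# of the model, not the official requirement list).
LEVEL_REQUIREMENTS = {
    1: {"provenance_exists"},
    2: {"provenance_exists", "hosted_build", "signed_provenance"},
    3: {"provenance_exists", "hosted_build", "signed_provenance",
        "tamper_resistant"},
    4: {"provenance_exists", "hosted_build", "signed_provenance",
        "tamper_resistant", "hermetic", "two_person_review"},
}

def slsa_level(properties: set[str]) -> int:
    """Highest level whose cumulative requirements are all satisfied."""
    level = 0
    for lvl in sorted(LEVEL_REQUIREMENTS):
        if LEVEL_REQUIREMENTS[lvl] <= properties:
            level = lvl
    return level

print(slsa_level({"provenance_exists", "hosted_build", "signed_provenance"}))
# → 2
```

Because the check is monotonic, adding one property at a time moves an artifact up one level at a time, which is exactly the incremental adoption path the framework intends.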
The genius of SLSA is its incrementalism. Google didn't reach Level 4 overnight, and they don't expect anyone else to either. Each level provides meaningful security improvements over the previous one. An organization at SLSA 1 is meaningfully more secure than one with no provenance at all.
Adoption has been growing steadily. GitHub Actions now supports SLSA provenance generation. Package managers like npm and PyPI are integrating provenance verification. The ecosystem is slowly building the infrastructure that makes supply chain security practical rather than aspirational.
What Google Gets Wrong (Or at Least Different)
Google's model isn't without limitations that other organizations should consider:
The monorepo isn't for everyone. The tooling investment required to operate a monorepo at scale is enormous. For organizations using a polyrepo model, different strategies are needed.
Google's threat model is unusual. They face nation-state adversaries routinely. Their security investment reflects that threat model. A mid-market SaaS company doesn't need the same level of paranoia, and overinvesting in security controls that don't match your threat model wastes resources.
Internal culture matters. Google's security practices work partly because of a strong internal security culture built over decades. You can't import the tools without the culture and expect the same results.
Scale creates its own problems. Some of Google's practices exist because of problems that only appear at their scale. Blindly copying practices designed for an organization with tens of thousands of engineers can create overhead that drowns a smaller team.
Practical Takeaways for Other Organizations
Despite these caveats, several Google practices are universally applicable:
- Know your dependencies. You can't secure what you can't inventory. Start with a complete, accurate SBOM.
- Control your build process. Even without full hermeticity, restricting what your builds can access reduces supply chain risk.
- Require provenance. Know where your artifacts came from and what process created them.
- Adopt SLSA incrementally. Start at Level 1 and work your way up. Perfect is the enemy of good.
- Automate vulnerability scanning. Manual processes don't scale, and delayed patching is a primary attack vector.
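The first takeaway, a complete and accurate inventory, is also the cheapest to start on. A toy sketch of assembling a minimal CycloneDX-style SBOM from a dependency list; real tools (Syft, cyclonedx-python, and others) emit far more detail, including hashes and licenses:

```python
import json

def make_sbom(components: list[dict]) -> str:
    """Assemble a minimal CycloneDX-style SBOM document (sketch only)."""
    bom = {
        "bomFormat": "CycloneDX",
        "specVersion": "1.5",
        "components": [
            {
                "type": "library",
                "name": c["name"],
                "version": c["version"],
                # purl (package URL) gives each component a canonical identifier.
                "purl": f"pkg:pypi/{c['name']}@{c['version']}",
            }
            for c in components
        ],
    }
    return json.dumps(bom, indent=2)

print(make_sbom([{"name": "requests", "version": "2.31.0"}]))
```

Even this skeleton is enough to feed a vulnerability scanner, which is why "generate an SBOM" is the usual first rung on the ladder.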
How Safeguard.sh Helps
Safeguard.sh brings Google-caliber supply chain visibility to organizations that don't have Google-caliber engineering budgets. The platform automatically generates and maintains SBOMs, tracks dependency provenance, and provides continuous vulnerability monitoring across your entire software portfolio. Safeguard.sh maps your dependency graph in real time, alerts on anomalous changes, and helps you build toward SLSA compliance incrementally. You get the transparency and control that Google built internally, without needing to build the infrastructure yourself.