Most organizations have spent the last two years building two separate inventories. One is the software inventory, populated by SBOMs, container scans, package telemetry, and SCM analysis. The other is the AI inventory, populated by model registries, prompt repositories, and the various ad hoc spreadsheets that AI governance teams maintain. The two systems rarely talk to each other. They use different identifiers, different ownership models, and different review processes.
In real production environments, the assets are not separate. A service that runs a model is also a piece of software. A model that consumes embeddings depends on a vector database that is part of the software stack. An MCP server is software that exposes AI capability. Treating these as parallel inventories creates gaps exactly where the most interesting risks live: in the seams between the two domains.
This post argues for a single asset graph that covers both software and AI, and walks through what such a graph needs to capture and why the alternative does not work at scale.
The hidden cost of parallel inventories
When the software inventory and the AI inventory are separate, several practical problems emerge.
Ownership becomes ambiguous. A service has a software owner, the team that builds the API. It has an AI owner, the team that trained the model it loads. When a finding hits the model, the platform has to figure out which team to route it to. In the parallel world, this routing is a human decision that often takes days. In a unified graph, ownership is a property of the linked artifact, derived once and applied consistently.
Risk paths get truncated. A vulnerability in a library used to load model weights affects the model. A vulnerability in the model affects the agent that calls it. A vulnerability in the agent affects the data classes the agent can reach. In two inventories, this chain is impossible to traverse without manual stitching. In one graph, it is a single query.
Reviews duplicate. The same change can trigger an SBOM review, an AI governance review, and a vendor risk review, each in a different system, with different reviewers, against different policies. Engineers correctly perceive this as overhead and route around it. The unified graph does not eliminate the need for review, but it allows the reviews to share state.
Reporting lies by omission. When the security team reports on coverage, the report covers only the inventory it queries. If software and AI live separately, every report is partial. Auditors and executives both notice eventually.
What "one graph" actually means
A unified asset graph is not a single big table. It is a typed graph with consistent identifiers, where each asset class has its own schema and the edges between asset classes are first-class.
Asset classes that need to be in the graph include source repositories, build artifacts, container images, deployed services, packages, models, datasets, prompts, MCP servers, MCP tools, agents, and the runtime instances of all of the above. Each of these has properties that matter for security: ownership, version, provenance, license, last seen, and so on.
The interesting edges include depends on, deploys, loads, calls, exposes, exports, owns, and deprecates. These edges let you ask questions like "which agents call any MCP tool that is exposed by a server that loads a model whose weights came from a now-untrusted publisher" without writing custom integration code each time.
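To make the shape concrete, here is a minimal sketch of such a typed graph in plain Python, with nodes keyed by stable ID and edges stored as typed triples. The asset IDs, the publisher field, and the node contents are all invented for illustration; a production graph would live in a graph database, not in dictionaries.

```python
# Minimal typed asset graph: nodes keyed by stable ID, edges as
# (source, edge_type, target) triples. Illustrative names only.

nodes = {
    "model:weights-sha256:abc1": {"class": "model", "publisher": "acme-ml"},
    "server:mcp-search:v3":      {"class": "mcp_server"},
    "tool:web-search":           {"class": "mcp_tool"},
    "agent:support-bot":         {"class": "agent"},
}

edges = [
    ("server:mcp-search:v3", "loads",   "model:weights-sha256:abc1"),
    ("server:mcp-search:v3", "exposes", "tool:web-search"),
    ("agent:support-bot",    "calls",   "tool:web-search"),
]

def targets(src, edge_type):
    return [t for s, e, t in edges if s == src and e == edge_type]

def sources(dst, edge_type):
    return [s for s, e, t in edges if t == dst and e == edge_type]

def agents_exposed_to_publisher(publisher):
    """Which agents call any MCP tool exposed by a server that loads
    a model from the given (now untrusted) publisher?"""
    hits = set()
    for model_id, props in nodes.items():
        if props["class"] == "model" and props.get("publisher") == publisher:
            for server in sources(model_id, "loads"):
                for tool in targets(server, "exposes"):
                    hits.update(sources(tool, "calls"))
    return hits

print(agents_exposed_to_publisher("acme-ml"))  # {'agent:support-bot'}
```

The point of the sketch is that the untrusted-publisher question becomes an ordinary traversal over declared edge types rather than a one-off integration script.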
Identifiers must be stable across environments. A model's identity in the graph should be derived from its weights hash, its source registry, and its version, not from the file path on a particular host. A service's identity should be derived from its source repo and deployment, not from its current pod IP. Without stable identifiers, the graph fragments and queries return inconsistent results.
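One way to derive such identities, sketched under the assumption that the weights hash, source registry, and version are known at ingest time; the function names and field choices here are illustrative, not a prescribed scheme.

```python
import hashlib

def model_identity(weights_sha256: str, registry: str, version: str) -> str:
    """Derive a model's graph identity from content and provenance,
    never from where the file happens to sit on a host."""
    canonical = f"model|{registry}|{version}|{weights_sha256}"
    return "asset:" + hashlib.sha256(canonical.encode()).hexdigest()[:16]

def service_identity(source_repo: str, deployment: str) -> str:
    """Same idea for services: source repo plus deployment, not pod IP."""
    canonical = f"service|{source_repo}|{deployment}"
    return "asset:" + hashlib.sha256(canonical.encode()).hexdigest()[:16]

# The same model ingested from two different hosts resolves to one node.
a = model_identity("abc123", "registry.internal/models", "1.4.0")
b = model_identity("abc123", "registry.internal/models", "1.4.0")
assert a == b
```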
Reconciliation is the actual work
Building the graph is mostly reconciliation. Each source system that contributes assets has its own identifier scheme. The container registry knows images by digest. The Kubernetes API knows pods by UID. The model registry knows models by version. The SCM knows repositories by name. None of these identifiers are usable across systems without mapping.
The reconciliation pipeline does several things. It normalizes identifiers into a single canonical form per asset class. It deduplicates. The same logical artifact may appear in multiple registries, in multiple environments, and at multiple lifecycle stages. The graph stores it once and tracks the deployments separately.
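A sketch of the normalize-and-deduplicate step, assuming raw records arrive as dictionaries tagged with their source system; the record shapes and digest values are invented for illustration.

```python
from collections import defaultdict

# Raw records from different source systems, each with its own
# identifier scheme. Hypothetical data shapes.
raw = [
    {"source": "container_registry", "digest": "sha256:9f2e0aa1", "repo": "api"},
    {"source": "kubernetes", "pod_uid": "b1c2d3e4",
     "image_digest": "sha256:9f2e0aa1", "repo": "api"},
]

def canonicalize(record):
    """Map each source system's record onto one canonical image ID."""
    if record["source"] == "container_registry":
        return ("image", record["digest"])
    if record["source"] == "kubernetes":
        return ("image", record["image_digest"])
    raise ValueError(f"unknown source: {record['source']}")

merged = defaultdict(list)
for r in raw:
    merged[canonicalize(r)].append(r)  # dedupe: one node, many sightings

# One logical image, two observations; deployments are tracked separately.
assert len(merged) == 1
```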
It infers edges from telemetry. A pod that loads a model file at startup is observed via runtime telemetry. The graph learns the loads edge from this signal even if no one declared it. A service that calls an MCP tool is observed via gateway logs. The graph learns the calls edge.
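A sketch of edge inference from telemetry, assuming runtime events arrive as simple records; the event kinds and field names are hypothetical, not the schema of any particular agent or gateway.

```python
# Hypothetical runtime telemetry events; the graph learns edges it
# was never told about. Field names are illustrative.
events = [
    {"kind": "file_open", "pod": "svc-ranker-7d9f",
     "path": "/models/ranker.bin", "resolved_asset": "model:ranker:2.1"},
    {"kind": "gateway_call", "caller": "agent:support-bot",
     "tool": "tool:web-search"},
]

def infer_edges(events):
    for ev in events:
        if ev["kind"] == "file_open" and ev["path"].startswith("/models/"):
            yield (ev["pod"], "loads", ev["resolved_asset"])
        elif ev["kind"] == "gateway_call":
            yield (ev["caller"], "calls", ev["tool"])

for edge in infer_edges(events):
    print(edge)
# ('svc-ranker-7d9f', 'loads', 'model:ranker:2.1')
# ('agent:support-bot', 'calls', 'tool:web-search')
```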
It tracks lineage. When a service is rebuilt and redeployed, the graph remembers that the new deployment is the successor of the old one. When a model is fine-tuned from a base, the new model's lineage points back to the base. Without lineage, every change creates a new orphan asset and the graph stops being a graph.
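Lineage can be stored as ordinary edges and walked back with a short loop. A minimal sketch, with invented asset names:

```python
# Lineage as ordinary edges: deployments point at their predecessor,
# fine-tuned models point at their base. Illustrative names.
lineage = {
    "deploy:api:2024-06-02": "deploy:api:2024-05-20",  # successor_of
    "model:support-ft:1.0":  "model:base-llm:3.2",     # fine_tuned_from
}

def provenance_chain(asset_id):
    """Walk lineage edges back to the root ancestor."""
    chain = [asset_id]
    while chain[-1] in lineage:
        chain.append(lineage[chain[-1]])
    return chain

print(provenance_chain("model:support-ft:1.0"))
# ['model:support-ft:1.0', 'model:base-llm:3.2']
```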
Querying the graph for security work
A unified graph is only useful if the security team actually queries it. Practical query patterns include:
Blast radius queries. Given a vulnerability in a specific package, model, or MCP tool, find all services, agents, and data flows that depend on it; a sketch of this query follows the list.
Ownership queries. Given a finding, find the team that owns the asset and the immediate human responsible.
Drift queries. Given a baseline graph from last quarter, find the assets that have changed in ways the policy considers risky.
Coverage queries. Given a control, such as scanning, signing, or EDR, identify the assets where that control is missing.
Lineage queries. Given an asset, walk back through builds, models, and dependencies to identify the upstream provenance.
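To ground the first pattern, here is a sketch of a blast radius query as a reverse traversal over dependency edges; the graph contents and edge names are illustrative.

```python
from collections import deque

# Illustrative edges: (source, edge_type, target) triples, where the
# source depends on, loads, or calls the target.
edges = [
    ("service:checkout",  "depends_on", "pkg:libyaml:2.1"),
    ("service:ranker",    "loads",      "model:ranker:2.1"),
    ("model:ranker:2.1",  "depends_on", "pkg:libyaml:2.1"),
    ("agent:support-bot", "calls",      "service:ranker"),
]

def blast_radius(vulnerable_asset):
    """Walk dependency edges in reverse: everything that directly or
    transitively depends on the vulnerable asset is in the radius."""
    affected, queue = set(), deque([vulnerable_asset])
    while queue:
        current = queue.popleft()
        for src, _, dst in edges:
            if dst == current and src not in affected:
                affected.add(src)
                queue.append(src)
    return affected

print(blast_radius("pkg:libyaml:2.1"))
# {'service:checkout', 'model:ranker:2.1', 'service:ranker',
#  'agent:support-bot'}
```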
These queries are not specific to software or AI. They are specific to "assets the organization runs." That is the whole point of unifying the graph.
What it does not solve
A unified graph is not a magic fix. It does not write your policies, evaluate your risks, or replace the human judgment in your governance program. What it does is make the data substrate consistent so the policies and the human judgment do not have to wrestle with two parallel realities. It also does not eliminate the need for specialized tools. Model evaluation, container scanning, and agent observability all still happen. The graph is the connective tissue, not a replacement for the specialists.
How Safeguard helps
Safeguard is built on a single unified asset graph that covers software and AI in one model. Continuous asset discovery feeds the graph from SCM, build, registry, runtime, and AI source systems, and reconciliation produces stable identities for every asset class with the edges that link them. Software dependencies, AI models, MCP servers, agents, and the runtime workloads that bind them all share one schema and one query layer.
The AI-BOM is a first-class part of the graph rather than a separate document. Every model carries its provenance, license, training data references, and consumer linkages, and the graph knows how the model relates to the services that load it and the agents that call it. The MCP registry extends the same model to agent tooling. Continuous discovery, AI-BOM, and the MCP registry together make "one graph" not a slogan but the operational substrate every Safeguard control runs against.