The standard merger and acquisition (M&A) technical diligence process has a structural flaw. The acquirer asks questions. The target company answers them. Nobody verifies that the answers are accurate. The buyer signs an agreement based on a stack of attestations and a few weeks of demos, then discovers on integration day what the target's environment actually looks like.
In a calmer technology era, this was tolerable. The targets were smaller, the technology stacks were simpler, and the gap between attested and actual state was narrower. In 2026, with most acquired companies running multi-cloud environments, AI models in production, and a layer of MCP servers on top, the gap is wide enough that material risk routinely slips through diligence.
This post is about using continuous asset discovery to fix the gap. The thesis is straightforward. If both parties can produce, on demand, a verified asset inventory derived from systems of record rather than from human memory, the diligence conversation moves from "tell us what you have" to "here is what we have." The diligence becomes a data exercise.
Why questionnaires fail in modern environments
Diligence questionnaires were designed for environments that no longer exist. They ask how many servers there are, how many engineers have admin access, how many production databases exist, and what packages are in use. In a modern environment, the answers depend on definitions that the target company may not even agree on internally.
How many production services exist depends on whether you count internal microservices, batch jobs, and serverless functions. How many third-party packages are in use depends on whether you count direct dependencies only or transitive ones as well, and whether AI model artifacts are considered packages. How many AI assets exist depends on whether shadow models in feature branches count, and on whether vendor-supplied embedded models are included.
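To make the definitional problem concrete, here is a minimal sketch with toy data; the service and package names are purely illustrative. The same environment produces two different answers to "how many packages are in use" depending on whether transitive dependencies are counted.

```python
# Toy dependency data: two services, their direct dependencies, and the
# packages those dependencies pull in. All names are illustrative.
direct_deps = {
    "billing-api": ["requests", "sqlalchemy"],
    "report-job": ["pandas"],
}
child_deps = {
    "requests": ["urllib3", "certifi"],
    "sqlalchemy": ["greenlet"],
    "pandas": ["numpy", "python-dateutil"],
}

def count_packages(include_transitive: bool) -> int:
    """Count distinct packages, optionally walking transitive dependencies."""
    seen = set()
    stack = [d for deps in direct_deps.values() for d in deps]
    while stack:
        pkg = stack.pop()
        if pkg in seen:
            continue
        seen.add(pkg)
        if include_transitive:
            stack.extend(child_deps.get(pkg, []))
    return len(seen)

print(count_packages(include_transitive=False))  # 3: the "direct only" answer
print(count_packages(include_transitive=True))   # 8: the "everything installed" answer
```

Both answers are honest. They are just answers to different questions, which is exactly the ambiguity a questionnaire cannot resolve.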
Even when the target answers the questionnaire honestly, the answers are usually optimistic. The CTO reports the number the engineering directors gave them. The engineering directors report the number their team leads gave them. Each level smooths the data. Nobody is lying. Nobody is fully accurate either.
What due diligence actually wants to know
Strip away the questionnaire framing, and the buyer is trying to answer a small set of practical questions.
What is the actual attack surface we are buying. Cloud accounts, repos, public packages, exposed services, external integrations.
What is the supply chain risk. Open source dependencies with known vulnerabilities, abandoned upstreams, licenses incompatible with our distribution model, vendor binaries with no provenance.
What is the AI footprint. Models in production, training data references, model licenses, vendor AI services with data sharing terms, agent surfaces.
What is the people footprint. Who has commit and admin access. What service accounts exist that no human owns. What vendor and contractor accounts will need to be transitioned.
What is going to break on day one of integration. Hardcoded credentials, customer-specific configurations, dependencies on services the buyer is not planning to keep.
These questions are not really about counts. They are about understanding the nature of the environment well enough to plan integration without surprises. Continuous asset discovery answers them with data.
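As a rough illustration of what answering with data looks like, the sketch below runs a few of these questions as queries over a handful of made-up asset records. The schema and field names are assumptions for the example, not any particular tool's data model.

```python
# Hypothetical asset records; the fields are assumptions for illustration.
assets = [
    {"type": "service", "name": "checkout", "internet_exposed": True,
     "owner": "payments-team", "license": None},
    {"type": "service", "name": "legacy-export", "internet_exposed": True,
     "owner": None, "license": None},
    {"type": "model", "name": "churn-predictor-v2", "internet_exposed": False,
     "owner": "ml-platform", "license": "proprietary"},
    {"type": "service_account", "name": "svc-ci-deploy", "internet_exposed": False,
     "owner": None, "license": None},
]

# Attack surface: what is reachable from the internet?
exposed = [a["name"] for a in assets if a["internet_exposed"]]

# AI footprint: which models exist, and under what license?
models = [(a["name"], a["license"]) for a in assets if a["type"] == "model"]

# People footprint: which assets and accounts have no human owner?
unowned = [a["name"] for a in assets if a["owner"] is None]

print(exposed)  # ['checkout', 'legacy-export']
print(models)   # [('churn-predictor-v2', 'proprietary')]
print(unowned)  # ['legacy-export', 'svc-ci-deploy']
```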
The diligence flow with discovery in place
When both companies maintain continuous asset inventories, the diligence flow changes shape.
Pre-signing technical diligence becomes a structured comparison. The target shares an asset graph, or a snapshot of it, scoped appropriately for confidentiality. The buyer's team imports the snapshot into an isolated review environment and runs the same queries they run against their own assets. Coverage gaps, vulnerable dependencies, license risks, and shadow AI surface as findings rather than as bullet points on a slide.
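A minimal sketch of what the buyer's side of that comparison might look like, assuming the target exports a scoped JSON snapshot. The file name, fields, and license policy here are illustrative assumptions, not a prescribed format.

```python
import json

# The target shares a scoped snapshot; the buyer loads it in an isolated
# review environment. File name and schema are assumptions for this example.
with open("target_asset_snapshot.json") as f:
    snapshot = json.load(f)

DISALLOWED_LICENSES = {"AGPL-3.0", "SSPL-1.0"}  # example policy only

# License risk: dependencies whose licenses conflict with the buyer's policy.
license_findings = [
    dep["name"]
    for dep in snapshot.get("dependencies", [])
    if dep.get("license") in DISALLOWED_LICENSES
]

# Supply chain risk: dependencies carrying known CVEs.
vuln_findings = [
    (dep["name"], cve)
    for dep in snapshot.get("dependencies", [])
    for cve in dep.get("known_cves", [])
]

print(f"{len(license_findings)} license findings, {len(vuln_findings)} known CVEs")
```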
The diligence conversation focuses on the findings rather than on the questionnaire. The target explains the context for high-risk findings, the buyer assesses materiality, and both parties agree on remediation expectations before close.
Post-close integration starts from a verified inventory rather than from a discovery exercise. The buyer's team knows on day one what will be merged, what will be retired, and what will need ongoing investment. The 90-day integration plan can be written against real data rather than against optimistic estimates.
Reps and warranties become testable. Statements like "all production services run currently supported open source dependencies" can be verified against the asset graph rather than taken on faith. This raises the cost of misrepresentation and lowers the probability of post-close surprises.
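As a concrete illustration, a rep like the one above can be checked mechanically against the inventory. The sketch below assumes a toy service schema and a hand-maintained list of end-of-life components; both are illustrative, not a real support feed or contract language.

```python
# Illustrative check of a rep such as "all production services run currently
# supported open source dependencies". Records and the unsupported list are
# assumptions for the example.
services = [
    {"name": "checkout", "env": "production",
     "dependencies": ["requests 2.32", "python 3.12"]},
    {"name": "legacy-export", "env": "production",
     "dependencies": ["python 3.8"]},
]

UNSUPPORTED = {"python 3.8", "node 16"}  # components past upstream end of life

violations = [
    (s["name"], dep)
    for s in services
    if s["env"] == "production"
    for dep in s["dependencies"]
    if dep in UNSUPPORTED
]

print(violations or "rep holds")  # [('legacy-export', 'python 3.8')]
```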
Common diligence findings that asset discovery surfaces
Across deals, certain findings recur. They are rarely surfaced by questionnaires and almost always surfaced by discovery.
Forgotten public packages. The target organization published packages years ago under various scopes and accounts. Some of these are still installed by current consumers. Maintainer hygiene varies wildly.
Vendor binaries without provenance. The target runs a third-party agent on every host that nobody can produce a build attestation for. The agent has root-level access. The vendor is a small partner whose security posture is unknown.
Shadow AI in production. The target's flagship product uses managed LLM APIs. It also embeds half a dozen smaller models that nobody mentioned in diligence because the engineers who added them did not consider them AI assets.
Long tail of orphaned services. A few hundred small services exist in production that nobody actively owns. Many of them have not been updated in years. Several have known CVEs in their dependencies.
Misaligned ownership. The org chart says one team owns the platform. The asset graph shows that the platform's actual code, deployment, and runtime are owned by three different teams, and the lead of one of them gave notice last week.
None of these findings are necessarily deal breakers. All of them are inputs to valuation, integration planning, and post close prioritization. They are the kind of information that buyers want before signing, and that questionnaires consistently miss.
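As one example, the long tail of orphaned services is a straightforward query once the inventory exists. A sketch with made-up records, assuming ownership and deployment history are attached to each service:

```python
from datetime import date, timedelta

# Query for the "orphaned services" finding: production services with no
# owner and no deployment in over a year. Records are illustrative.
services = [
    {"name": "invoice-mailer", "owner": None,
     "last_deploy": date(2023, 4, 2), "dependency_cves": 3},
    {"name": "checkout", "owner": "payments-team",
     "last_deploy": date(2026, 1, 12), "dependency_cves": 0},
]

cutoff = date(2026, 2, 1) - timedelta(days=365)
orphaned = [s for s in services if s["owner"] is None and s["last_deploy"] < cutoff]

for s in orphaned:
    print(f'{s["name"]}: unowned, last deploy {s["last_deploy"]}, '
          f'{s["dependency_cves"]} known CVEs in dependencies')
```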
What the seller gets out of it
Sellers sometimes resist the idea of producing a verified asset inventory for diligence. They are concerned that findings will be used against them in negotiation. In practice, the sellers who have invested in continuous discovery come out ahead. Their environment looks well managed because it is well managed. The findings that exist are framed in their language, with their context, rather than discovered cold by the buyer's team. And the diligence concludes faster, which preserves transaction momentum.
For sellers who have not invested in discovery, the right time to start is well before a transaction is on the horizon. Discovery is a capability that takes multiple quarters to mature. It cannot be stood up in the three weeks before a diligence call.
How Safeguard helps
Safeguard's continuous asset discovery produces an audit-ready inventory at any point in time, which is exactly what M&A diligence requires. The unified asset graph covers software, AI, and MCP surfaces in one model, with provenance, license, ownership, and runtime telemetry attached to every asset. Snapshots can be exported for diligence with appropriate scoping and confidentiality controls.
The AI-BOM gives buyers a verifiable view of the target's AI footprint, including models, training data references, vendor AI services, and the agents that consume them. The MCP registry covers the agent surface area at the same level of detail. Findings such as vulnerabilities, license risks, abandoned upstreams, and shadow AI surface in the buyer's review environment as the same data the target's own security team sees, so diligence becomes a shared analysis rather than a one-way questionnaire. Continuous discovery turns M&A from a leap of faith into an inventory exercise.