AI-for-security procurement that stops at feature comparison misses the structural differences that determine year-two outcomes. Due diligence — the methodical process of surfacing architectural, operational, and commercial realities — is how enterprise buyers separate vendors that will produce outcomes from vendors that will produce surprises.
The checklist
Twelve questions:
- Architecture. Engine-plus-LLM or pure-LLM? What grounds the reasoning?
- Eval methodology. What benchmarks are published? With what dataset provenance?
- Model supply chain. What model powers the reasoning? How is it version-pinned?
- Deployment options. SaaS, on-prem, air-gapped — what is supported?
- Data handling. Where does customer data go? What is retained?
- Integration surface. API coverage, SDK maintenance, webhook semantics.
- Audit trail. Structured logging; can the customer reconstruct decisions?
- Compliance posture. SOC 2, FedRAMP, EU AI Act — actual status vs roadmap.
- Incident response commitments. Notification window (e.g. 72 hours); a named escalation contact.
- Pricing model. Per-token? Per-seat? Bounded variability?
- Reproducible TCO model. Three-year forecast the vendor will defend.
- Exit story. Switching cost; data export; contract termination terms.
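The pricing and TCO questions are easiest to pressure-test with a model the buyer controls. A minimal sketch of a three-year forecast for a per-token-plus-per-seat vendor, with all rates and growth figures as hypothetical placeholders rather than real vendor pricing:

```python
# Illustrative three-year TCO forecast for a vendor priced per token
# plus per seat. Every number below is a hypothetical placeholder.

def three_year_tco(tokens_per_month: float,
                   price_per_1k_tokens: float,
                   seats: int,
                   price_per_seat_month: float,
                   annual_usage_growth: float = 0.3) -> list[float]:
    """Return projected annual cost for years 1-3, compounding usage growth."""
    yearly = []
    for year in range(3):
        growth = (1 + annual_usage_growth) ** year
        token_cost = tokens_per_month * growth * 12 * price_per_1k_tokens / 1000
        seat_cost = seats * price_per_seat_month * 12
        yearly.append(token_cost + seat_cost)
    return yearly

forecast = three_year_tco(tokens_per_month=50_000_000,
                          price_per_1k_tokens=0.01,
                          seats=20,
                          price_per_seat_month=50)
print([round(c) for c in forecast])  # [18000, 19800, 22140]
```

A vendor willing to defend a forecast like this — including the growth assumption — is answering the TCO question; one who will only quote list prices is not.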
Each question admits a concrete answer. Vendors who give concrete answers to all twelve are finalist-grade; vendors who can answer only some are earlier-stage.
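The finalist-versus-earlier-stage cut can be kept honest with a simple scoring pass over the questionnaire. A sketch, with hypothetical question keys and vendor records:

```python
# Tier vendors by how many of the twelve questions draw a concrete answer.
# Question keys and vendor data below are illustrative, not a standard schema.

QUESTIONS = [
    "architecture", "eval_methodology", "model_supply_chain",
    "deployment_options", "data_handling", "integration_surface",
    "audit_trail", "compliance_posture", "incident_response",
    "pricing_model", "tco_model", "exit_story",
]

def tier(answers: dict[str, bool]) -> str:
    """'finalist' only when every question has a concrete answer."""
    concrete = sum(answers.get(q, False) for q in QUESTIONS)
    return "finalist" if concrete == len(QUESTIONS) else "earlier-stage"

vendor_a = {q: True for q in QUESTIONS}          # concrete on all twelve
vendor_b = {**vendor_a, "tco_model": False}      # one gap is enough to drop a tier
print(tier(vendor_a))  # finalist
print(tier(vendor_b))  # earlier-stage
```

The all-or-nothing rule mirrors the text above: a vendor with eleven strong answers and no defensible TCO model is still earlier-stage for this purchase.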
How to run the process
Three phases:
- Written questionnaire. Questions above.
- Technical walkthrough. Deep dive on architecture and deployment with the vendor's engineering.
- Reference calls. Two or three existing customers running the product in similar environments.
Timeline: 6–8 weeks for a thorough process.
How Safeguard helps
Safeguard's procurement packet includes pre-written answers to the twelve questions above. Customers can compare them directly against other vendors' responses. For organisations running rigorous AI-for-security due diligence, this shortens the process from months to weeks.