AI Security

Griffin AI vs Open Weights: The Eval Gap

Frontier models outperform open-weight models on security eval benchmarks by specific, measurable margins. For security workflows, that gap matters.

Shadab Khan
Security Engineer
3 min read

Open-weight models — Llama, Mistral, Qwen, DeepSeek, Gemma — have closed the quality gap with frontier models on many general benchmarks. On security-specific benchmarks, a measurable gap remains. Whether that gap matters depends on the specific workflow, but for the workflows that dominate enterprise security (multi-hop reasoning over structured evidence, fix-PR generation with breaking-change awareness, adversarial-prompt resistance), the gap is operationally significant.

Where the measurable gap exists

Three benchmark families show a persistent delta:

  • Complex reasoning. Multi-hop exploit hypothesis accuracy on reachable taint paths. Frontier models currently score ~80%; leading open-weight models ~55-65%.
  • Fix-PR correctness. Compile-and-pass-tests rate. Frontier models ~73%; leading open-weight models ~50-60%.
  • Adversarial resistance. Refusal rate on jailbreak attempts. Frontier models ~98%; open-weight models variable, often below 80% without additional guardrails.
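All three benchmark families reduce to the same shape: a pass rate over labeled eval runs. A minimal sketch of that computation, with record fields and data that are purely illustrative (this is not Griffin AI's harness or its benchmark data):

```python
from dataclasses import dataclass

@dataclass
class EvalResult:
    task: str     # "reasoning", "fix_pr", or "adversarial" (illustrative labels)
    passed: bool  # hypothesis correct / compiles and passes tests / jailbreak refused

def pass_rate(results: list[EvalResult], task: str) -> float:
    """Pass rate for one benchmark family, as a fraction in [0, 1]."""
    subset = [r for r in results if r.task == task]
    return sum(r.passed for r in subset) / len(subset)

# Invented records for illustration only.
results = [
    EvalResult("reasoning", True),
    EvalResult("reasoning", False),
    EvalResult("fix_pr", True),
    EvalResult("adversarial", True),
]

print(pass_rate(results, "reasoning"))  # 0.5
```

The same `pass_rate` call per task label yields the accuracy, compile-and-pass-tests, and refusal-rate figures the bullets above compare.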

These numbers change as open-weight models improve. The gap is narrowing; as of 2026 it remains operationally meaningful.

Where the gap does not matter

Three workloads where the gap is not operationally significant:

  • Simple pattern recognition. Identifying well-known vulnerability shapes.
  • Bulk summarisation. Condensing many findings into an executive view.
  • Classification. Routing findings to appropriate queues.

For these workloads, open-weight models are suitable; their quality is sufficient.
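The classification workload in particular has a simple interface: a finding goes in, a queue name comes out, and the model (or even a rules pass before any model call) fills the middle. A hypothetical sketch, where the queue names, field names, and rules are all invented for illustration:

```python
def route_finding(finding: dict) -> str:
    """Route a scanner finding to a triage queue.

    Hypothetical rules; a real router would use a model or richer signals,
    and these queue names are placeholders.
    """
    severity = finding.get("severity", "low")
    if finding.get("reachable") and severity in ("high", "critical"):
        return "urgent-triage"
    if finding.get("category") == "license":
        return "compliance"
    return "backlog"

print(route_finding({"severity": "critical", "reachable": True}))  # urgent-triage
```

Whether the middle is rules, a small open-weight model, or a frontier model, the eval question is the same pass-rate measurement as above, applied to routing decisions.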

How fine-tuning narrows the gap

Fine-tuning open-weight models on security-specific data can narrow the gap on specific tasks. Reports of fine-tuned Llama variants reaching frontier-model accuracy on narrow tasks are credible.

Four caveats:

  • Fine-tuning requires training data. High-quality security training data is hard to source.
  • Fine-tuning narrows capability. A model fine-tuned for vulnerability triage may lose capability on adjacent tasks.
  • Maintenance burden. Retraining on new data is an ongoing project.
  • Provenance concerns. Fine-tuning introduces its own supply chain.

For organisations with the engineering capacity, fine-tuned open weights can work. For most, the tradeoff favours frontier models with engine grounding.

How Griffin AI addresses this

Griffin AI uses frontier Claude models as the reasoning layer and adds security-specific grounding. The model-level eval gap between frontier and open-weight models doesn't arise because Griffin AI sits on the stronger side of the comparison, and the grounding layer reduces the heavy-lift reasoning the model has to do, so even small model-quality improvements compound.
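The grounding idea can be sketched as prompt assembly: the engine resolves reachability and evidence first, so the model reasons over structured facts rather than raw code. Everything below (field names, prompt shape, the CVE identifier) is an illustrative assumption, not Griffin AI's implementation:

```python
def grounded_prompt(finding: dict, evidence: list[dict]) -> str:
    """Assemble a prompt where the engine has already done the heavy lifting.

    Illustrative sketch: field names and prompt shape are assumptions.
    """
    lines = [
        f"Vulnerability: {finding['cve']} in {finding['package']}",
        f"Reachable: {finding['reachable']}",
        "Evidence (engine-resolved):",
    ]
    for ev in evidence:
        lines.append(f"- {ev['file']}:{ev['line']} calls {ev['symbol']}")
    lines.append("Task: assess exploitability using ONLY the evidence above.")
    return "\n".join(lines)

# Hypothetical finding and evidence records.
prompt = grounded_prompt(
    {"cve": "CVE-2026-0001", "package": "examplelib", "reachable": True},
    [{"file": "app/api.py", "line": 42, "symbol": "examplelib.parse"}],
)
```

The model is asked to judge a short, structured evidence list instead of reconstructing the taint path itself, which is where the multi-hop reasoning benchmarks above diverge.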

What to evaluate

Three concrete checks:

  1. Benchmark your candidate open-weight model on your specific security tasks.
  2. Compare the cost of closing the eval gap (fine-tuning, infra, engineering) against frontier-model licensing.
  3. Decide based on total cost of quality rather than model-list price.
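Checks 2 and 3 reduce to arithmetic: fixed costs plus the cost of reworking wrong answers. A hedged sketch in which every figure is a placeholder for your own estimates, not real pricing:

```python
# All figures below are invented placeholders; substitute your own estimates.
FINDINGS_PER_YEAR = 10_000
COST_PER_REWORK = 25  # cost in dollars to manually fix one wrong answer (hypothetical)

def annual_cost(fixed: float, rework_rate: float) -> float:
    """Total cost of quality: fixed spend plus rework on wrong answers."""
    return fixed + rework_rate * (FINDINGS_PER_YEAR * COST_PER_REWORK)

# Frontier: licensing only; lower rework rate (placeholder numbers).
frontier_cost = annual_cost(fixed=120_000, rework_rate=0.10)

# Open-weight: self-hosting infra plus fine-tuning engineering; higher rework rate.
ow_cost = annual_cost(fixed=60_000 + 90_000, rework_rate=0.25)

print(round(frontier_cost), round(ow_cost))  # 145000 212500
```

With these placeholder inputs the cheaper model list price loses on total cost of quality; with your inputs the comparison may go either way, which is the point of running the arithmetic.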

How Safeguard helps

Safeguard's Griffin AI runs on frontier models where the eval gap favours them. For customers whose on-prem posture allows private frontier-model endpoints, the combination delivers frontier quality with on-prem deployment. The engine-plus-LLM architecture keeps the cost efficient while maintaining the quality ceiling.
