DeepSeek's open-weight releases, particularly DeepSeek Coder, the DeepSeek V3 general model, and the R1 reasoning model, have shifted the open-weight landscape quickly. The mixture-of-experts architecture keeps inference cost closer to that of a much smaller dense model, while the code-focused training mix makes the coder variants very strong across a wide range of programming tasks.
For security review workflows, DeepSeek is a natural candidate. The question is the same one we keep coming back to: can a strong open-weight code model replace a dedicated security engine, or is the engine doing work that the model cannot?
What security review actually requires
Security review is distinct from code generation, even though they share a surface-level similarity. Code generation is generative: produce a plausible piece of code that solves a problem. Security review is evaluative: given a piece of code, produce a judgement about its correctness against a set of security properties.
Those two tasks have different failure modes. A code generator that produces something subtly wrong wastes a few minutes of the developer's time. A security reviewer that produces something subtly wrong either ships a vulnerability or blocks a safe PR, and both have real costs.
A production security review workflow covers, at minimum:
- Parse the changed code in the context of the surrounding codebase
- Identify security-relevant changes (authentication, authorisation, input validation, cryptography, data access)
- For each relevant change, reason about whether it introduces, preserves, or breaks existing invariants
- Produce a structured review with specific line-level comments, severity, and suggested fixes
- Track outcomes so that the reviewer's suggestions can be audited and improved
DeepSeek Coder, prompted well, can do the first three steps on a small, self-contained change. The last two are outside the model's scope.
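To make the fourth step concrete, here is a minimal sketch of what a structured review might look like as a data model. The field names are illustrative, not Griffin AI's actual schema; the point is that the output has to be machine-trackable, not free-form prose.

```python
# A minimal sketch of the structured output the workflow needs.
# Field names are illustrative, not a real product schema.
from dataclasses import dataclass, field
from enum import Enum


class Severity(Enum):
    INFO = "info"
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"
    CRITICAL = "critical"


@dataclass
class ReviewComment:
    file: str            # path of the file the comment applies to
    line: int            # line in the new version of the diff
    severity: Severity
    finding: str         # which invariant is introduced, preserved, or broken
    suggested_fix: str   # concrete remediation, not just a warning


@dataclass
class ReviewResult:
    pr_id: str
    comments: list[ReviewComment] = field(default_factory=list)
    # Outcome tracking (step five) needs identifiers later systems can
    # join against: which model produced this review, on which commit.
    model_version: str = "unknown"
    head_commit: str = "unknown"
```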
Reasoning models and the cost of thinking
DeepSeek R1 popularised a particular style of reasoning model: long chain-of-thought before the final answer, with the reasoning exposed to the user. For security review, long reasoning is genuinely useful. The model has time to work through "is this input actually untrusted? what sanitisation is applied upstream? does the destination sink actually require sanitisation at this level?"
The cost is latency and tokens. A single-PR review with an R1-style model can take thirty seconds to a couple of minutes. That is acceptable for deep review of a high-risk change. It is unacceptable for gating every PR in a monorepo that merges two hundred PRs a day.
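The back-of-envelope arithmetic makes the point. The figures below are the illustrative ones from the paragraph above, not a benchmark of any particular DeepSeek deployment.

```python
# Illustrative numbers only: 200 PRs/day, ~90 s per deep review
# (mid-point of "thirty seconds to a couple of minutes").
prs_per_day = 200
seconds_per_deep_review = 90

serial_model_time_hours = prs_per_day * seconds_per_deep_review / 3600
print(f"{serial_model_time_hours:.1f} model-hours per day")  # 5.0

# Parallel workers hide the wall-clock cost but not the token cost,
# which is why only a fraction of PRs should reach the deep pass.
```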
Griffin AI uses a tiered approach. Every PR gets a fast first-pass review by a smaller, cheaper model. Changes that touch security-relevant files, or that the first pass flags as uncertain, get escalated to a deeper reasoning model. The tiering is tuned against an evaluation set to keep the false-negative rate low without blowing the latency budget.
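The routing logic itself is conceptually simple. The sketch below is illustrative rather than Griffin AI's implementation: fast_review, deep_review, the uncertainty threshold, and the path patterns are all hypothetical stand-ins that a real deployment would tune against its own evaluation set.

```python
# A minimal sketch of tiered routing between a cheap first-pass model
# and an expensive reasoning model. All names here are hypothetical.
from fnmatch import fnmatch

SECURITY_PATHS = ["*/auth/*", "*/crypto/*", "*/middleware/*", "*.sql"]


def is_security_relevant(changed_files: list[str]) -> bool:
    return any(
        fnmatch(path, pattern)
        for path in changed_files
        for pattern in SECURITY_PATHS
    )


def review_pr(pr, fast_review, deep_review, uncertainty_threshold=0.3):
    # Every PR gets the cheap first pass.
    first_pass = fast_review(pr)

    escalate = (
        is_security_relevant(pr.changed_files)
        or first_pass.uncertainty > uncertainty_threshold
    )
    if not escalate:
        return first_pass

    # Only the flagged minority pays the reasoning-model latency.
    return deep_review(pr, context=first_pass)
```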
A team running DeepSeek R1 directly can implement the same tiering, but tiering is exactly the kind of systems work that tends to be underestimated at project kickoff and to overrun by delivery.
PR-scale context
Real PRs are not single-file changes. A meaningful change often touches a controller, a service, a repository layer, and a handful of tests. The security properties that matter might span files that were not modified but that are necessary to understand what was.
Griffin AI's retrieval system, when reviewing a PR, pulls:
- The files in the diff
- The callers of the modified functions, up to a configurable depth
- The callees of the modified functions, focused on security-relevant sinks
- The relevant test files
- Any related security policy definitions for the repository
DeepSeek's context window can hold all of this for a reasonably sized PR, but filling the context is the customer's problem. Choosing what to put in the context is itself a non-trivial piece of engineering. Put too little and the model hallucinates about unseen code. Put too much and the model loses focus on the actual diff.
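One way to frame that engineering problem is as budgeted, prioritised packing: pull from the sources listed above in priority order and stop before the budget is blown. The sketch below is illustrative only; token_count, the retrieve helpers, and the budget figure are hypothetical stand-ins.

```python
# A sketch of budgeted context assembly for a PR review prompt.
# The retrieval helpers and budget are hypothetical stand-ins.

def token_count(text: str) -> int:
    # Crude approximation; a real system would use the model's tokenizer.
    return len(text) // 4


def build_review_context(pr, retrieve, budget_tokens=48_000):
    # Highest priority first: the diff itself must always fit.
    sources = [
        pr.diff_files(),                    # the files in the diff
        retrieve.callers(pr, max_depth=2),  # callers of modified functions
        retrieve.sink_callees(pr),          # callees reaching security sinks
        retrieve.related_tests(pr),
        retrieve.security_policies(pr.repo),
    ]

    context, used = [], 0
    for chunk_group in sources:
        for chunk in chunk_group:
            cost = token_count(chunk)
            if used + cost > budget_tokens:
                return context  # stop before the model loses focus
            context.append(chunk)
            used += cost
    return context
```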
The suggestion-versus-decision gap
The most important thing a security engine produces is not a list of suggestions. It is a decision: block this PR, approve with comments, approve cleanly, or escalate to a human reviewer.
Making that decision requires calibration. A suggestion that looks good in isolation might, across thousands of PRs, have a 40 percent false-positive rate. Knowing that requires measurement, and measurement requires an eval harness against labelled data.
Griffin AI runs a continuously updated eval set of real security review scenarios. New model versions are gated on not regressing the calibration. When a customer adopts a new policy, the calibration is re-run against their historical repo data before the policy goes live.
DeepSeek Coder, as a raw model, has no such calibration for your environment. You get a suggestion engine, not a decision engine. Turning it into a decision engine means building the eval harness, labelling data, running calibration, and keeping it fresh as the model and the repository both evolve.
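The measurement itself is not conceptually hard. The sketch below shows the shape of the calibration check: estimate a false-positive rate per finding category from labelled outcomes, and only let categories with enough data and an acceptable rate gate merges. The record fields and thresholds are illustrative.

```python
# A minimal calibration sketch over labelled historical findings.
# Each record is a dict with "category" and "was_real_issue"; the
# field names and thresholds are illustrative.
from collections import defaultdict


def false_positive_rates(labelled_findings: list[dict]) -> dict[str, float]:
    counts = defaultdict(lambda: {"total": 0, "false": 0})
    for f in labelled_findings:
        c = counts[f["category"]]
        c["total"] += 1
        c["false"] += 0 if f["was_real_issue"] else 1
    return {cat: c["false"] / c["total"] for cat, c in counts.items()}


def blocking_categories(labelled_findings: list[dict],
                        max_fp_rate=0.10, min_samples=50) -> set[str]:
    rates = false_positive_rates(labelled_findings)
    samples = defaultdict(int)
    for f in labelled_findings:
        samples[f["category"]] += 1
    # A category may gate merges only once there is enough labelled data
    # and its measured false-positive rate is acceptable.
    return {cat for cat, rate in rates.items()
            if samples[cat] >= min_samples and rate <= max_fp_rate}
```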
Security review as a memory-heavy workflow
Every security review benefits from knowing what happened last time. If last week's reviewer suppressed a finding in this file with a specific rationale, this week's reviewer should see that rationale before commenting on the same issue again. If a pattern of findings has been consistently marked as false positive across a repository, the reviewer should learn that.
Griffin AI has explicit memory: suppression records, review history, policy exceptions, and an audit trail that links each decision to the evidence that supported it.
DeepSeek Coder has no memory. Each call is independent. Building memory on top of a stateless model means building a retrieval layer that pulls the right history into the prompt at the right moment. That is doable and also non-trivial.
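As a sketch of what that layer has to hold, the following keeps suppression records keyed by repository, file, and finding category, so prior rationales can be pulled into the prompt before the model comments on the same issue again. The storage and field names are illustrative; a real system would persist them and attach the audit trail.

```python
# An illustrative in-memory version of the suppression memory described
# above. A production system would persist this with an audit trail.
from dataclasses import dataclass
from datetime import datetime


@dataclass
class Suppression:
    repo: str
    file: str
    category: str        # e.g. "sql-injection", "missing-authz-check"
    rationale: str       # why the previous reviewer suppressed it
    suppressed_at: datetime


class ReviewMemory:
    def __init__(self):
        self._suppressions: list[Suppression] = []

    def record(self, suppression: Suppression) -> None:
        self._suppressions.append(suppression)

    def prior_rationales(self, repo: str, file: str, category: str) -> list[str]:
        # Everything the prompt needs before re-reporting the same
        # issue in the same place.
        return [s.rationale for s in self._suppressions
                if s.repo == repo and s.file == file and s.category == category]
```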
When DeepSeek is a good fit
DeepSeek Coder is a strong choice for specific use cases:
- Offline security analysis where latency is not a constraint and the reviewer has time to read long reasoning traces
- Research on new review techniques, where the cost of iteration is low and the correctness bar is set by the researcher
- Environments with strict data constraints where the model must run on-premises
- As a component inside a larger engine, where the surrounding scaffolding turns it into a decision system
For a general-purpose security review workflow at the scale of a modern engineering organisation, Griffin AI is solving the broader problem. The model is maybe 20 percent of the work. The engine is the other 80 percent.
Closing the loop
A good security reviewer does not just produce comments. They watch what happens to those comments. Did the developer fix the issue or override the review? Did the fix introduce new issues? Did the override correlate with later incidents? Griffin AI tracks all of this and feeds it back into future reviews.
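A minimal sketch of the outcome record such a loop requires might look like the following; the field names and the derived signal are illustrative, not Griffin AI's schema.

```python
# An illustrative outcome record: what happened to each review comment
# after it was posted, and one signal derived from it.
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class Outcome(Enum):
    FIXED = "fixed"            # developer addressed the finding
    OVERRIDDEN = "overridden"  # developer dismissed it and merged anyway
    EXPIRED = "expired"        # PR closed without merging


@dataclass
class CommentOutcome:
    comment_id: str
    outcome: Outcome
    follow_up_findings: int = 0            # new issues introduced by the fix
    linked_incident: Optional[str] = None  # later incident tied to an override


def override_incident_rate(outcomes: list[CommentOutcome]) -> float:
    overrides = [o for o in outcomes if o.outcome is Outcome.OVERRIDDEN]
    if not overrides:
        return 0.0
    # The signal fed back into future reviews: how often overrides went bad.
    return sum(1 for o in overrides if o.linked_incident) / len(overrides)
```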
DeepSeek Coder has no way to close that loop on its own. The loop has to be built. Whether you build it or consume it is, in the end, the whole question. The model is a good model. The engine is the product.