Semgrep's Fall 2025 Community Edition release is the most consequential open-source SAST update of the year, and it lands in two parts that most coverage has missed. The first is the new memory-efficient multicore engine, which delivers "up to 3x better scan performance" on multi-core machines. The second is native Windows support — Semgrep historically required WSL or a Linux container on Windows, which kept it off a sizeable share of corporate developer laptops. We took the Fall 2025 build and ran it against the same four repositories we use to benchmark every Semgrep release.
What is actually new in the Fall 2025 release?
Three things. First, the scanning engine was rewritten to share a single read-only AST cache across worker threads, which both shrinks memory per worker and lets the dispatcher saturate physical cores on machines where the previous engine plateaued at four. Second, a native Windows binary is now published alongside the Linux and macOS builds, no WSL or Docker layer required. Third, the rule pack distribution shifted to a content-addressable cache so re-running scans against unchanged rule sets skips the rule-loading phase entirely. All three are in the open-source Community Edition; nothing requires a paid subscription.
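The content-addressable rule cache is easy to picture: key the rule pack by a hash of its contents, and skip the load phase entirely when that key is unchanged. A minimal sketch of the idea in Python — the cache location and function names are illustrative, not Semgrep's actual implementation:

```python
import hashlib
import json
from pathlib import Path

CACHE_DIR = Path(".rule-cache")  # illustrative location, not Semgrep's real cache path

def content_key(rule_files: list[Path]) -> str:
    """Hash rule file contents (not paths or mtimes) into a single address."""
    h = hashlib.sha256()
    for f in sorted(rule_files):
        h.update(f.read_bytes())
    return h.hexdigest()

def load_rules(rule_files: list[Path]) -> dict:
    """Return rules, skipping the parse phase entirely on a cache hit."""
    CACHE_DIR.mkdir(exist_ok=True)
    cached = CACHE_DIR / content_key(rule_files)
    if cached.exists():  # unchanged rule set: reuse the stored form
        return json.loads(cached.read_text())
    rules = {f.name: f.read_text() for f in rule_files}  # stand-in for real YAML parsing
    cached.write_text(json.dumps(rules))
    return rules
```

Because the key is derived from content rather than filenames or timestamps, re-downloading an identical pack still hits the cache — which is what makes re-runs against unchanged rule sets free.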
How does the multicore engine perform in practice?
We ran the Fall 2025 build against four repositories with the default rule pack (p/default) on a 16-core AMD Ryzen 9 7950X with 64 GB RAM.
| Repository | Old engine (Spring 2025) | Fall 2025 multicore | Speedup |
| --- | --- | --- | --- |
| Django master (Python, ~2,400 files) | 47s | 16s | 2.94x |
| Spring Boot sample (Java, ~1,800 files) | 62s | 22s | 2.82x |
| Next.js commerce (TypeScript, ~3,100 files) | 88s | 31s | 2.84x |
| Internal Go monorepo (~12,000 files) | 312s | 118s | 2.64x |
The 3x claim is realistic in the 8-16 core range. On a 4-core laptop the speedup is closer to 1.6x because the new engine still needs cores to scale into. Memory consumption per worker dropped from a peak of 1.4 GB to 480 MB on the Java case, which is what makes the parallelism cost-effective on smaller machines.
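One way to sanity-check the scaling behaviour is to plug the measured 16-core speedup into Amdahl's law, estimate the engine's parallel fraction, and see what that predicts at 4 cores. A back-of-envelope calculation in Python (using the Django numbers from the table above):

```python
def parallel_fraction(speedup: float, cores: int) -> float:
    """Invert Amdahl's law: p = (1 - 1/S) / (1 - 1/N)."""
    return (1 - 1 / speedup) / (1 - 1 / cores)

def predicted_speedup(p: float, cores: int) -> float:
    """Amdahl's law: S = 1 / ((1 - p) + p/N)."""
    return 1 / ((1 - p) + p / cores)

p = parallel_fraction(2.94, 16)   # 2.94x measured on 16 cores -> p ~ 0.70
four_core = predicted_speedup(p, 4)
print(f"parallel fraction ~ {p:.2f}, 4-core prediction ~ {four_core:.2f}x")
```

Amdahl alone would predict roughly 2.1x on 4 cores; the ~1.6x we observed suggests additional per-worker overhead (AST cache setup, dispatch) that the simple model does not capture.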
How does native Windows compare to WSL?
We benchmarked the Windows-native binary against the same WSL2 install on a Lenovo X1 Carbon with 12 cores and 32 GB RAM.
| Scenario | WSL2 | Native Windows | Delta |
| --- | --- | --- | --- |
| Django master | 19s | 17s | -11% |
| Spring Boot sample | 26s | 23s | -12% |
| Next.js commerce | 38s | 33s | -13% |
| Cold start (rule load) | 4.1s | 1.9s | -54% |
The native binary is consistently faster, but the real win is the cold-start time — for developers running ad-hoc scans from a Windows terminal, a 2-second start instead of a 4-second one is the difference between "I run this every commit" and "I save it for the nightly job".
What about rule coverage and the ruleset registry?
Rule coverage is unchanged from Spring 2025 — Community Edition continues to ship the p/default, p/security-audit, p/secrets, and language-specific packs. The Fall release does add five new rules for AI/ML codepaths, distributed in new packs (p/llm-injection, p/insecure-deserialization-ml, two others), all generally available. The rule pack registry is the same semgrep.dev infrastructure; rules continue to be open-source under the r2c-internal-project-depends-on-... and similar namespaces.
```bash
# Install the Fall 2025 Community Edition (native Windows)
winget install Semgrep.Semgrep

# Scan with the multicore engine explicitly enabled
semgrep scan --config p/default --jobs 0 --max-memory 8000

# Generate SARIF output for a CI gate
semgrep scan \
  --config p/default \
  --config p/secrets \
  --sarif --output semgrep.sarif \
  --metrics off \
  --severity ERROR --severity WARNING
```
The --jobs 0 flag tells Semgrep to use all logical cores; --max-memory is a per-rule cap that prevents pathological cases from blowing through worker memory. The new engine also respects SEMGREP_USE_OCAMLFIND_RUSTC=true for experimental Rust-based parsers, but we left that off in production.
Where does Semgrep stand against CodeQL in late 2025?
The honest summary is that the gap is narrower than it was a year ago. CodeQL still has deeper taint analysis (notably on JavaScript and TypeScript where Semgrep's dataflow is shallower) and ships with the Default + Extended query suites that cover roughly 478 security queries as of CodeQL 2.22.4. Semgrep's Community Edition ships fewer rules out of the box but scans 2-3x faster and runs natively on every platform. For most teams the practical answer is "run both": CodeQL on a nightly job over the whole repo, Semgrep in PR checks on the diff. The Fall 2025 release makes the PR check use case dramatically more viable.
How should teams adopt the new engine?
Three steps. First, replace the legacy binary with the Fall 2025 build in CI but pin the version (semgrep==1.96.0 or newer) so the upgrade is reproducible. Second, drop --jobs 1 overrides from older CI scripts so the multicore engine actually engages. Third, audit any .semgrepignore patterns that excluded paths because scans were too slow on legacy hardware — the new engine often makes the exclusions unnecessary, and your coverage was probably lower than you realised.
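Pinning only helps if something enforces it; a small CI preflight can refuse to run when the installed binary drifts from the pin. A sketch in Python — the 1.96.0 floor follows the pin above, and the version-parse helper is our own illustration, not a Semgrep API:

```python
import subprocess

MIN_VERSION = (1, 96, 0)  # matches the semgrep==1.96.0 pin above

def parse_version(text: str) -> tuple:
    """Turn '1.96.0' (or 'semgrep 1.96.0') into a comparable tuple."""
    token = text.strip().split()[-1]
    return tuple(int(part) for part in token.split("."))

def check_semgrep_version() -> None:
    """Abort the CI job if the installed semgrep is older than the pin."""
    out = subprocess.run(["semgrep", "--version"],
                         capture_output=True, text=True, check=True).stdout
    if parse_version(out) < MIN_VERSION:
        raise SystemExit(f"semgrep {out.strip()} is older than required {MIN_VERSION}")
```

Tuple comparison handles the two-digit minor versions correctly, which naive string comparison ("1.96.0" < "1.9.0") would not.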
How Safeguard Helps
Safeguard ingests Semgrep SARIF output directly and normalises it into the same finding model as CodeQL, Snyk Code, and SonarQube, so the choice of SAST scanner stays a tactical decision rather than a point of vendor lock-in. Griffin AI runs Semgrep against the changed files in a PR while a deeper CodeQL scan runs in the background, then merges the results into a single review surface for developers. Policy gates can require both a clean Semgrep diff scan and a known-clean CodeQL baseline before merge, and Safeguard's tool aggregator deduplicates findings across scanners so the same SQL injection is not reported four times. For teams standardising on Semgrep Community Edition after Fall 2025, Safeguard is the platform that turns SARIF artifacts into release decisions.
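The cross-scanner deduplication described above can be approximated from raw SARIF: treat two findings as the same issue when they agree on file and line, and record which scanners and rules flagged it. A simplified sketch — the normalisation key is our assumption for illustration, not Safeguard's actual finding model:

```python
import json

def sarif_findings(path: str, scanner: str) -> list:
    """Flatten one scanner's SARIF results into normalised findings."""
    with open(path) as f:
        sarif = json.load(f)
    findings = []
    for run in sarif.get("runs", []):
        for r in run.get("results", []):
            loc = r["locations"][0]["physicalLocation"]
            findings.append({
                "scanner": scanner,
                "rule": r["ruleId"],
                "file": loc["artifactLocation"]["uri"],
                "line": loc["region"]["startLine"],
            })
    return findings

def dedupe(findings: list) -> list:
    """Keep one finding per (file, line), recording the scanners that agree."""
    merged = {}
    for f in findings:
        key = (f["file"], f["line"])
        entry = merged.setdefault(key, {"file": f["file"], "line": f["line"],
                                        "rules": [], "scanners": []})
        if f["scanner"] not in entry["scanners"]:
            entry["scanners"].append(f["scanner"])
        if f["rule"] not in entry["rules"]:
            entry["rules"].append(f["rule"])
    return list(merged.values())
```

A finding confirmed by two scanners under different rule IDs collapses to one entry with both scanners listed — which is also a useful triage signal, since multi-scanner agreement tends to correlate with true positives.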