Seven categories. Roughly fifty capabilities. Each one with a two-sentence description and a clear answer to “what powers it.” For the full line-item inventory of every model, scanner, and feed, see . For the per-model task matrix, see .
The platform reads your code, containers, manifests, and history and flags what is actually wrong. Eight different detection paths feed one normalised verdict, so a finding from a sink scan, a CVE feed, and a license rule all land in the same review queue.
Catches deserialization, SSRF, unsafe SQL, prototype pollution, and ReDoS as the developer types. Runs at sub-100 ms p95 because it does not phone home.
Finds API keys, OAuth tokens, signing keys, and database credentials across the working tree and commit history. Combined regex baseline with a learned classifier to suppress noise.
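The regex-plus-classifier split can be sketched as follows. This is a minimal illustration, not the shipped detector: the patterns are two hypothetical examples, and a Shannon-entropy gate stands in for the learned noise-suppression classifier.

```python
import math
import re

# Two illustrative patterns; a real baseline carries hundreds.
PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_token": re.compile(r"\b[A-Za-z0-9_\-]{32,64}\b"),
}

def shannon_entropy(s: str) -> float:
    """Bits per character; genuine secrets tend to score high."""
    counts = {c: s.count(c) for c in set(s)}
    return -sum(n / len(s) * math.log2(n / len(s)) for n in counts.values())

def scan(text: str, min_entropy: float = 3.5) -> list[tuple[str, str]]:
    """Regex candidates, then an entropy gate to suppress noise.
    (The platform uses a learned classifier where this uses entropy.)"""
    findings = []
    for name, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            if shannon_entropy(match.group()) >= min_entropy:
                findings.append((name, match.group()))
    return findings
```

The entropy gate is what keeps a 40-character run of repeated characters out of the queue even though it matches the token shape.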
Grades the strength of every sanitiser the codebase relies on, so a weak escape function does not get treated like a strong one. Each call site gets a confidence score, not a boolean.
OS-package and application-layer vulnerability matching against curated databases for every layer in an image. Works on registries, build tarballs, and running pods.
Purl-precise matching of every direct and transitive dependency against the public advisory ecosystem. Range-aware so a partially-patched version is still flagged correctly.
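Range-aware matching is the part that keeps a partially-patched version flagged. A minimal sketch, assuming a naive dotted-integer version scheme and a hypothetical advisory shape of (purl name, introduced, fixed); real matchers handle per-ecosystem version semantics.

```python
def parse(v: str) -> tuple[int, ...]:
    """Naive dotted-integer parse; real matchers know each ecosystem's scheme."""
    return tuple(int(p) for p in v.split("."))

# Hypothetical advisory records: (purl name, introduced, fixed).
ADVISORIES = [
    ("pkg:npm/lodash", "4.0.0", "4.17.21"),
]

def affected(purl_name: str, version: str) -> bool:
    """Range-aware: any version inside [introduced, fixed) is still vulnerable."""
    v = parse(version)
    return any(
        name == purl_name and parse(lo) <= v < parse(hi)
        for name, lo, hi in ADVISORIES
    )
```

The half-open interval is the point: 4.17.20 is flagged, 4.17.21 is clean, and nothing in between slips through on a string comparison.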
Spots typosquats, dependency confusion, and post-install hook anomalies before the package ever resolves into a lockfile. Score blends name distance, age, signal entropy, and behaviour heuristics.
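The blended score can be illustrated with two of the named signals, name distance and package age; the weights and thresholds below are invented for the sketch, and the real score also folds in signal entropy and behaviour heuristics.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[-1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def typosquat_score(candidate: str, popular: list[str], age_days: int) -> float:
    """Blend name distance to popular packages with package age.
    Weights (0.7 / 0.3) are illustrative, not the shipped model."""
    nearest = min(levenshtein(candidate, p) for p in popular)
    name_risk = 1.0 if nearest == 1 else 0.5 if nearest == 2 else 0.0
    age_risk = 1.0 if age_days < 30 else 0.0
    return 0.7 * name_risk + 0.3 * age_risk
```

A five-day-old package one edit away from a popular name scores near the top; an established exact-match package scores zero.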
SPDX-aware license detection with obligation classification and policy hooks. Surfaces strong-copyleft, attribution, and patent-grant terms separately so legal review is targeted.
Project-wide size, language mix, and rate-of-change measurement so risk weighting reflects where the team is actually moving. Used as a prior by every downstream model.
Detection finds candidate issues; reasoning decides whether they are real, how they connect, and what they would let an attacker do. This is the model layer doing the work an analyst would do, with a structured trace you can audit.
Every candidate finding is mapped to the appropriate CWE class with a justification snippet. Lets the rest of the pipeline reason about category, not just symptom.
Generates plausible exploit shapes for a finding given its sinks, reachable sources, and surrounding sanitisers. Outputs are hypotheses, not assertions, and travel into the disproof pass.
Tracks data flow from untrusted source to dangerous sink across module boundaries up to twelve hops deep. Cites the file, function, and line at every step in the chain.
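The hop-limited search can be sketched as a breadth-first walk over a pre-built data-flow graph. Everything here is illustrative: in practice each node would carry its (file, function, line) citation, and the graph construction itself is the hard part.

```python
from collections import deque

def taint_paths(graph: dict, sources: list, sinks: set, max_hops: int = 12) -> list:
    """Walk from untrusted sources toward dangerous sinks, capped at max_hops.
    `graph` maps a node (e.g. a (file, function, line) tuple) to its
    data-flow successors."""
    paths = []
    for src in sources:
        queue = deque([[src]])
        while queue:
            path = queue.popleft()
            node = path[-1]
            if node in sinks:
                paths.append(path)  # complete source-to-sink chain
                continue
            if len(path) > max_hops:
                continue  # beyond the cap: route to the deeper reasoner
            for nxt in graph.get(node, []):
                if nxt not in path:  # avoid cycles
                    queue.append(path + [nxt])
    return paths
```

Paths that exceed the cap are not discarded silently; they are exactly the candidates the next capability hands to a deeper reasoner.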
When the path is longer than twelve hops or threads through framework internals, this routes to a deeper reasoner that holds the full graph in context. For the hardest chains in the codebase.
Treats the entire finding set as a single context window so it can spot duplicates, dependency clusters, and root causes that single-finding reasoning misses. Cuts queue volume sharply.
Each hypothesis is sent through a second pass that actively argues the finding is wrong. Only hypotheses that survive this attack become the verdict you see.
Every verdict ships with a HYPOTHESIS, CITED PATH, DISPROOF, and PROPOSED PATCH block. Auditable, replayable, and stable across model upgrades.
Eagle takes thousands of candidate paths and ranks them by reachability and exploitability, then clusters near-duplicates so the queue is in priority order before a human looks at it.
Verdicts are useful only if a fix lands. This category covers everything from a one-line suggestion in the IDE to a multi-service patch campaign that opens PRs, runs tests, and tracks the merge.
Inline patch suggestion at the cursor for the cheapest class of fixes. Designed for IDE latency, with a reasoning trace one keystroke away.
Generates a pull request that fixes the issue and links the full structured reasoning trace in the description. The reviewer sees the why, not just the diff.
When a fix requires inserting or strengthening a sanitiser, the model picks the right one for the framework, not a generic placeholder. Reads the project's existing patterns first.
Drives the same fix across dozens of services in coordinated waves with per-repo PRs, rollback hooks, and a campaign-level dashboard. For the upgrades that used to take a quarter.
When the root cause is in an upstream open-source dependency, the platform drafts the upstream patch and the disclosure note in parallel with your local mitigation.
Every auto-fix is applied inside an ephemeral sandbox that runs the project's test suite before the PR is opened. A failing patch never reaches a reviewer.
Major-version dependency bumps are validated against your real test suite with breaking-change analysis attached. You get a green PR or a clear blocker, not a guess.
Policy is the difference between a finding and a decision. This category covers authoring, enforcing, and exception-managing the rules that decide what merges, what deploys, and what an agent is allowed to do.
Write policies in a familiar, declarative DSL with type checking, dry-run replay against historical findings, and versioned releases. The policy is the source of truth.
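Dry-run replay is the piece worth seeing concretely. The sketch below uses an invented dict-shaped policy in place of the real DSL, purely to show the shape of replaying a rule against historical findings before releasing it.

```python
# Hypothetical policy shape; the real DSL is richer and type-checked.
POLICY = {
    "block_severity_at_least": "high",
    "allow_cwe_exceptions": {"CWE-1004"},
}

SEVERITY_ORDER = ["low", "medium", "high", "critical"]

def violates(finding: dict, policy: dict) -> bool:
    """A finding violates policy if it meets the severity bar and is not exempt."""
    if finding["cwe"] in policy["allow_cwe_exceptions"]:
        return False
    return (SEVERITY_ORDER.index(finding["severity"])
            >= SEVERITY_ORDER.index(policy["block_severity_at_least"]))

def dry_run(findings: list[dict], policy: dict) -> list[dict]:
    """Replay a candidate policy against historical findings: what WOULD it block?"""
    return [f for f in findings if violates(f, policy)]
```

Running the candidate policy against last quarter's findings tells you what it would have blocked before it blocks anything for real.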
Runs in the CI pipeline and blocks merges that violate policy, with the violating finding and its trace surfaced inline on the PR. No back-channel approvals.
Kubernetes admission webhook that rejects workloads whose SBOM or runtime posture violates policy. Stops drift between what passed CI and what is actually running.
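The webhook's verdict travels back to the API server as an `admission.k8s.io/v1` AdmissionReview response. A minimal sketch of that response shape, with the policy decision itself stubbed out:

```python
def admission_response(uid: str, allowed: bool, message: str = "") -> dict:
    """Build a Kubernetes admission.k8s.io/v1 AdmissionReview response.
    The `uid` must echo the one from the incoming review request."""
    response = {"uid": uid, "allowed": allowed}
    if not allowed:
        # The message surfaces in kubectl output when the workload is rejected.
        response["status"] = {"message": message}
    return {
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": response,
    }
```

A rejected workload fails with the policy message inline, so the drift between what passed CI and what tried to deploy is visible at the point of rejection.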
Temporary policy waivers with a hard expiry, a named approver, a justification field, and a full audit-log entry. Closes on its own; cannot be quietly extended.
Each agent identity gets a narrowly scoped set of allowed tools, files, and network reaches. Out-of-scope calls are denied at the server, not the agent.
Outgoing prompts and tool calls are inspected by an on-device classifier for secrets, PII, and intellectual property before they leave the boundary. Block, redact, or quarantine.
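The redact path can be sketched as a rewrite pass over the outbound text. The two rules below are invented examples; the shipped classifier is learned, not a regex list.

```python
import re

# Illustrative rules only; a production classifier is learned, not regex-only.
REDACTIONS = [
    (re.compile(r"\bAKIA[0-9A-Z]{16}\b"), "[REDACTED:aws-key]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED:ssn]"),
]

def redact_outbound(prompt: str) -> str:
    """Scrub secrets and PII from a prompt before it crosses the boundary.
    The redaction tokens preserve enough context for the model to proceed."""
    for pattern, token in REDACTIONS:
        prompt = pattern.sub(token, prompt)
    return prompt
```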
Every prompt, tool call, and model response is recorded into an append-only signed log. Reproducible runs, regulator-grade trail.
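One common construction for an append-only signed log is a hash chain, sketched below with HMAC standing in for the signing scheme; the actual signature algorithm and entry schema are not specified here.

```python
import hashlib
import hmac
import json

def append(log: list[dict], event: dict, key: bytes) -> None:
    """Each entry signs the event plus the previous entry's signature,
    so editing any earlier entry breaks every signature after it."""
    prev = log[-1]["sig"] if log else ""
    payload = json.dumps(event, sort_keys=True) + prev
    sig = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
    log.append({"event": event, "sig": sig})

def verify(log: list[dict], key: bytes) -> bool:
    """Replay the chain from the start; any tampering fails verification."""
    prev = ""
    for entry in log:
        payload = json.dumps(entry["event"], sort_keys=True) + prev
        if hmac.new(key, payload.encode(), hashlib.sha256).hexdigest() != entry["sig"]:
            return False
        prev = entry["sig"]
    return True
```

Chaining is what makes the log append-only in practice: deleting or editing an entry is detectable by anyone holding the verification key.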
Different agents (CI, IDE, on-call, vendor) see different toolsets, file roots, and rate limits, all driven by signed identity. No shared god-mode token.
Auditors do not want screenshots; they want signed artefacts. This category produces the documents that map your platform output to whichever framework the regulator is asking about this week.
Every build emits a signed bill of materials in both CycloneDX 1.6 and SPDX 2.3, including dependency relationships, license metadata, and vulnerability annotations.
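For orientation, the skeleton of a CycloneDX 1.6 document looks like this; a real emitter adds the dependency graph, license metadata, vulnerability annotations, and a signature, none of which are shown.

```python
def minimal_sbom(components: list[tuple[str, str, str]]) -> dict:
    """Skeleton CycloneDX 1.6 document from (name, version, purl) triples."""
    return {
        "bomFormat": "CycloneDX",
        "specVersion": "1.6",
        "version": 1,
        "components": [
            {"type": "library", "name": n, "version": v, "purl": p}
            for n, v, p in components
        ],
    }
```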
Signed attestations for every build step, scan run, and patch application. Verifiable chain-of-custody from source commit to deployed image.
Author and publish CycloneDX or CSAF VEX statements that record exploitability decisions per component. Downstream consumers stop firing on irrelevant CVEs.
Control-by-control bindings for SOC 2, ISO 27001, FedRAMP High, CMMC, NIST SP 800-218, EO 14028, NIS2, DORA, DPDP, and HIPAA. Refreshed when the frameworks revise.
Maps your evidence store to the questions on a customer security questionnaire and drafts the answers, with citations. A two-day exercise becomes an hour.
Append-only audit log exported in JSON plus CycloneDX evidence form, signed and verifiable. Hand it to the regulator without a phone call.
Bundles SBOMs, attestations, audit logs, compliance mappings, and exception records into a single signed archive ready for submission. Pre-formatted per framework.
An inventory of every model, prompt, and tool an agent touched in your SDLC, with versions and access scopes. The bill of materials for the AI side of the supply chain.
The platform runs wherever your trust boundary requires. Five deployment tiers and two specialised inference modes cover everything from a hosted multi-tenant trial to a sovereign air-gapped install.
Tier 1 deployment. Fastest onboarding with per-tenant prompt and KV-cache isolation. Best fit for teams that need the full surface without operating the cluster.
Tier 2 deployment. Isolated hardware, SHA-pinned weight attestation, deterministic latency. Same platform, your own inference plane.
Tier 3 deployment. The full platform runs inside your VPC with bring-your-own KMS and customer-controlled network boundaries. No data leaves your perimeter.
Tier 4 deployment. No outbound network egress, on-prem GPU, offline vulnerability database bundles, and signed export pipelines for the audit log.
Tier 5 deployment. In-country residency, regulator-attested key custody, and the full reasoning ladder including the 671B mixture-of-experts variant on-prem.
Eagle and the larger Griffin variants run on a managed batched-inference pool that scales to demand. You pay for throughput, not idle GPUs.
Lino runs entirely on developer hardware with no network egress. Inline IDE checks, secret detection, and egress classification stay on the machine.
The platform comes with operating commitments, not just software. SLAs, support tiers, audit streaming, training, and a published roadmap turn the engagement into a relationship you can run a programme against.
Availability targets for the API, portal, and MCP surfaces scale with the tier you choose. Credits are attached to misses; reports are published monthly.
Direct access to the engineers who wrote the code, on a real on-call rotation. No vendor support tier sits between you and the source.
Raw uptime, latency, and incident timelines monthly. A QBR every quarter covers posture, roadmap, and what to plan for next.
Append-only audit events streamed to your SIEM in real time, in schemas it ingests natively. You own the storage; we own the delivery.
A contractual commitment to notify within 24 hours of confirmed compromise affecting your tenant. With facts, not a placeholder email.
Quarterly roadmap brief covering upcoming model variants, scanner additions, compliance packs, and deployment shapes. Under NDA so we can be specific.
Enablement curriculum for security, platform, and engineering teams. Refreshed at each model release and credentialed for procurement.
RSS, JSON, and STIX feeds of malicious-package finds, novel exploit classes, and remediation guidance. Free to read; useful even without a contract.
The platform is single-purpose by design. Here is what is deliberately out of scope so you do not have to ask.
This page is the capability ledger. The pages below cut the same surface from a different angle.
Detection, reasoning, remediation, governance, compliance, deployment, operations. Pick the slice you need; we ship the rest under the same contract. Book a walk-through and we will demo the capabilities most relevant to your stack.