January is usually a quiet month in product, but this one wasn't. We closed the signed chain from source to running workload with Lino attestations, turned on cache sharing between Griffin and Eagle, opened the self-healing workflow engine to general availability, and shipped fleet mode for the runner. Here is the full release log for the month.
What shipped in January 2026?
Shipped
- Lino runtime attestations — Lino now produces signed statements about running workloads: which image digest is actually running, which processes it started, which egress destinations it contacted. The attestation is bound to the pod UID on Kubernetes and to the invocation ID on serverless. Paired with Eagle's image attestation and Griffin's source attestation, this closes the full signed chain.
- Self-healing workflows (GA) — the self-healing workflow engine is out of beta. A self-healing workflow is a policy-bound automation that receives a finding, picks a remediation from Gold, applies it, runs the repo's own tests, and merges on green — all without a human in the loop. Out of the gate we support four classes of finding, covering about 60 percent of the typical queue.
- Griffin + Eagle cache sharing — the two engines now share a content-addressable cache keyed on file and layer digests. A Griffin scan and an Eagle scan of the same repo no longer re-do overlapping work. In our own monorepo this cut end-to-end scan time by about 35 percent.
- Runner fleet mode — the self-hosted runner can now be deployed as a fleet of workers coordinated by a single control node. The fleet shares cache, load-balances scan jobs, and auto-scales. This is the release enterprise customers were waiting on before moving off GitHub-hosted runners.
- Desktop: side-by-side verdicts — the desktop app now shows Griffin, Eagle, and Lino verdicts side by side for a given service, including any disagreement between them.
- MCP server: streaming results — the MCP server now supports streaming partial results from long-running scans. IDE agents can act on findings as they arrive instead of waiting for the full scan to finish.
Improved
- 100-level scan — the 100-level scan now includes a Lino attestation check. If a deployed service is expected to have a Lino attestation and doesn't, the scan surfaces it.
- Gold — Gold's PR annotations now include a "why" paragraph that explains the selected remediation in plain language, including which policy rules it satisfied. This is aimed at teams where security reviewers and engineers are different people.
- IDE extensions — JetBrains extension picked up support for the 2024.3 platform API. VS Code extension picked up workspace-trust integration.
- Workflows — a new on.attestation trigger lets workflows fire when an attestation is produced, not just when a scan completes. This is the hook point for downstream compliance tooling.
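For downstream compliance tooling, a handler behind that trigger might look like the minimal sketch below. The event fields and the archiving helper are illustrative assumptions, not the actual workflow payload schema.

```python
# Minimal sketch of a downstream compliance hook fired by an attestation event.
# The event fields and the forwarding target are illustrative assumptions,
# not the actual workflow payload schema.
from typing import Any


def on_attestation(event: dict[str, Any]) -> None:
    """Handle a workflow run triggered by a newly produced attestation."""
    att = event.get("attestation", {})
    # Only act on runtime (Lino) attestations; ignore source and image ones here.
    if att.get("type") != "runtime":
        return
    record = {
        "service": att.get("service"),
        "image_digest": att.get("image_digest"),
        "signed_at": att.get("signed_at"),
    }
    archive_for_compliance(record)


def archive_for_compliance(record: dict[str, Any]) -> None:
    # Placeholder: write to whatever evidence store or audit log the team runs.
    print("compliance record:", record)
```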
Deprecated
- The legacy SARIF exporter has been removed, as announced in Q4.
- The pre-3.0 runner is entering EOL. It stops receiving detector updates at the end of Q1 2026.
- sg remediate as a top-level CLI command is deprecated in favor of sg gold plan and sg gold apply, aligning CLI verbs with product names.
How do Lino runtime attestations complete the signed chain?
Answer-first: Lino attestations tell a verifier what actually ran, which is the one thing the source and image attestations couldn't tell them. Griffin proves a commit was scanned. Eagle proves an image was scanned. Lino proves an image was run, by a specific pod or function, and observed doing specific things. With the three together, a verifier can confirm that the thing running right now came from a scanned source, was packaged into a scanned image, and behaved within expected bounds.
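As a rough illustration of what "the three together" lets a verifier conclude, here is a minimal sketch of the chain check. The field names are assumptions for illustration, and cryptographic verification of each attestation (with in-toto tooling) is assumed to have happened first.

```python
# Minimal sketch of the chain check a verifier can now perform.
# Field names are illustrative assumptions; signature verification of each
# attestation is assumed to have already succeeded.
def chain_is_closed(source_att: dict, image_att: dict, runtime_att: dict) -> bool:
    # Griffin: this commit was scanned.
    # Eagle: an image built from that commit was scanned, yielding a digest.
    # Lino: a workload is running that exact digest right now.
    built_from_scanned_commit = image_att["commit"] == source_att["commit"]
    running_scanned_image = runtime_att["image_digest"] == image_att["image_digest"]
    return built_from_scanned_commit and running_scanned_image
```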
Two things to know about the attestations:
- They are produced continuously, not once. Lino re-signs every 60 seconds by default, so an attestation staleness check is meaningful — if the last attestation is older than 5 minutes, the workload is probably no longer observed.
- They bind to a specific observer instance, so a tampered Lino agent can't produce attestations that impersonate a real one. The observer identity is rotated via Sigstore.
The attestations are verifiable with the standard in-toto tooling and with Safeguard's own admission controller, which can fail-closed on missing or stale Lino attestations.
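A staleness check built on those defaults could look like the sketch below. The signed_at field name is an assumption for illustration; the 5-minute window mirrors the guidance above.

```python
# Minimal sketch of an attestation staleness check, using the defaults above:
# Lino re-signs every 60 seconds, so anything older than 5 minutes is suspect.
# The "signed_at" field name is an assumption; it is taken to be an ISO-8601
# timestamp with a timezone offset.
from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(minutes=5)


def is_stale(attestation: dict) -> bool:
    signed_at = datetime.fromisoformat(attestation["signed_at"])
    return datetime.now(timezone.utc) - signed_at > MAX_AGE
```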
What is a self-healing workflow, exactly?
A self-healing workflow is a closed loop that starts with a finding and ends with a merged fix. The loop is bounded by policy, so it only runs where the team has explicitly allowed it.
A canonical self-healing workflow for a patch-level transitive bump looks like this:
- Griffin finds a CVE in a transitive dependency.
- The workflow checks that the finding is reachability-confirmed.
- It calls Gold to plan a remediation, constrained to patch-level bumps.
- Gold returns a PR branch. The workflow runs the repo's test suite.
- If tests pass, the workflow merges the PR and closes the finding.
- If tests fail, the workflow leaves the PR open for human review and pages the on-call if the finding is above a threshold.
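Written out as code, the loop above reads roughly like the sketch below. The names (gold.plan, ci.run_tests, and so on) are illustrative stand-ins for the engine's internals, not a public API.

```python
# Sketch of the self-healing loop for a patch-level transitive bump.
# The collaborator names are illustrative stand-ins, not a public API.
def self_heal(finding, repo, policy, gold, ci) -> str:
    """Return what happened: 'skipped', 'merged', or 'needs-review'."""
    if not policy.allows_auto_remediation(finding):
        return "skipped"          # policy-bounded: only run where explicitly allowed
    if not finding.reachability_confirmed:
        return "skipped"          # only act on reachability-confirmed findings

    plan = gold.plan(finding, constraint="patch-level")  # plan the remediation
    pr = gold.apply(plan, repo)                          # Gold returns a PR branch

    if ci.run_tests(repo, pr.branch):                    # repo's own test suite
        ci.merge(pr)                                     # merge on green
        return "merged"

    if finding.severity >= policy.page_threshold:
        policy.page_on_call(finding)                     # page above threshold
    return "needs-review"                                # PR left open for a human
```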
The January GA supports four classes of finding out of the gate:
- Patch-level bumps for transitives.
- Base image digest updates for unchanged Dockerfiles.
- .npmrc and .pypirc hygiene fixes.
- Lockfile resync when the lockfile has drifted from the manifest.
More classes will land across Q1.
What does Griffin and Eagle cache sharing change?
The short answer is speed. Griffin and Eagle historically walked the same tree twice — Griffin for source, Eagle for the image built from the source — and shared nothing. They now share a content-addressable cache keyed on file and layer digests. A file that hasn't changed since the last scan isn't re-read; a layer that hasn't changed since the last scan isn't re-analyzed.
This matters most for monorepos and for teams that rebuild images on every commit. In our own monorepo, a combined Griffin + Eagle scan dropped from about 11 minutes to about 7 minutes. The improvement is larger for images with many cached layers and smaller for projects where most of the tree changes between scans.
Cache sharing is on by default and can be disabled per-workflow if a team needs cold-cache scans for audit purposes.
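To make the mechanism concrete, here is a minimal sketch of a content-addressable result cache keyed on a digest. The structure is a generic illustration, not Griffin's or Eagle's internals.

```python
# Minimal sketch of a content-addressable scan cache keyed on digests.
# Generic illustration only; not Griffin's or Eagle's implementation.
import hashlib


class ScanCache:
    def __init__(self):
        self._results = {}  # digest -> prior scan result

    @staticmethod
    def digest(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()

    def scan(self, data: bytes, analyze):
        """Return the cached result for unchanged content; analyze otherwise."""
        key = self.digest(data)
        if key not in self._results:   # only re-analyze content that changed
            self._results[key] = analyze(data)
        return self._results[key]


# Because the key is the content digest, a file or layer that has not changed
# hits the cache regardless of which engine saw it first.
```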
What is runner fleet mode?
Fleet mode is the answer to "we'd like to run the Safeguard runner ourselves at scale." Before January, self-hosting the runner meant running one process per worker, with no shared cache and no coordination. Fleet mode changes that.
A fleet is a set of runner workers registered with a single control node. The control node:
- Accepts scan jobs from Safeguard or from direct MCP calls.
- Load-balances them across workers based on current load and cache locality.
- Maintains a shared read-through cache so a finding resolved on one worker is cached for the fleet.
- Scales workers up or down based on queue depth.
Fleet mode runs on Kubernetes by default but has a plain-Linux mode for customers who don't run Kubernetes in their security infrastructure. Air-gapped installs are supported.
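A rough sketch of the scheduling decision the control node makes, weighing current load against cache locality. The scoring here is illustrative, not the actual algorithm.

```python
# Illustrative sketch of picking a worker for a scan job by balancing
# current load against cache locality. Not the actual fleet algorithm.
from dataclasses import dataclass, field


@dataclass
class Worker:
    name: str
    active_jobs: int = 0
    cached_digests: set[str] = field(default_factory=set)


def pick_worker(workers: list[Worker], job_digests: set[str]) -> Worker:
    def score(w: Worker) -> int:
        locality = len(w.cached_digests & job_digests)  # digests already local
        return locality - w.active_jobs                 # prefer warm, idle workers
    return max(workers, key=score)
```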
How Safeguard.sh Helps
January's net effect: Safeguard.sh now produces a signed, verifiable chain from source to running workload, acts on findings without human intervention where policy allows, and runs that whole loop at fleet scale inside your own infrastructure. Griffin and Eagle share cache and no longer duplicate work. Lino signs what actually ran. Gold turns findings into PRs. Self-healing workflows close the loop. The runner scales horizontally. The MCP server, the desktop app, and the IDE extensions surface the same verdicts across every tool your team uses. January was the month the platform's moving parts stopped being separate and started behaving like one thing.
What's next
February is expected to ship:
- Lino behavioral baselines across services — anomaly detection that compares a new version's runtime fingerprint to the baseline for that service.
- Eagle base image advisories with remediation suggestions from Gold.
- Griffin reachability for Rust (beta).
- A revamped workflow editor in the desktop app.
If any of those are blocking something for you, email us at contact@hsxtechnologies.com.