Code signing keys are among the highest-value secrets in any modern engineering organization. A stolen signing key turns adversarial software into trusted software in the eyes of every operating system, browser, and downstream consumer that has been told to trust the corresponding certificate. The 2025 to 2026 incident record shows that adversaries understand this clearly and are investing accordingly.
This post traces the recent trend in code signing key theft, walks through the operator tradecraft, and identifies the structural defenses that have actually held up against active campaigns.
The shape of the trend
Public incident reports from 2025 alone documented at least fourteen confirmed code signing key compromises affecting commercially significant software publishers. The first quarter of 2026 has already added five more. The actual number is certainly higher because many compromises go unreported, are detected too late for clean disclosure, or surface as a footnote in a broader breach narrative.
The trend has three components.
Volume is up. The 2025 confirmed count exceeds the combined total of the prior three years.
Targeting is broadening. Earlier incidents concentrated on a few highly visible software publishers. Recent incidents spread across mid-sized vendors, internal enterprise signing infrastructure used for endpoint software, and increasingly, container image signing keys.
Operator sophistication is rising. The dominant tradecraft has shifted from opportunistic theft of poorly stored keys to deliberate, multi-stage operations that target signing infrastructure as a primary objective.
How keys get stolen
The 2025 to 2026 incident record clusters around five recurring root causes.
The first is endpoint compromise of developers or release engineers who hold signing key access. This is still the single largest root cause. A workstation compromise via phishing, info-stealer malware, or malicious browser extension yields keys stored on disk, in software keystores, or in the memory of running signing tools.
The second is CI/CD pipeline compromise. Many organizations have moved signing operations from individual workstations into centralized CI/CD systems for both efficiency and security. The unintended consequence is that signing keys, or short-lived signing tokens, now live within reach of any pipeline compromise. Several 2025 incidents involved attackers who first compromised a CI/CD environment for unrelated reasons and then discovered signing capability as a high-value side effect.
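One mitigation for this class of compromise is to gate the signing step on explicit pipeline context rather than letting any job in the CI environment reach the key. The sketch below shows the idea; the environment variable names and allowlists are illustrative assumptions, not the interface of any specific CI system.

```python
# Hypothetical policy gate a CI signing step could run before requesting
# a signature. Variable names and allowlists are illustrative only.
ALLOWED_BRANCHES = {"release", "hotfix"}           # branches permitted to sign
ALLOWED_REQUESTERS = {"release-bot", "ci-signer"}  # identities permitted to sign

def may_request_signature(env: dict) -> bool:
    """Return True only when the pipeline context satisfies signing policy."""
    branch = env.get("PIPELINE_BRANCH", "")
    requester = env.get("PIPELINE_ACTOR", "")
    protected = env.get("PIPELINE_PROTECTED", "") == "true"
    return protected and branch in ALLOWED_BRANCHES and requester in ALLOWED_REQUESTERS
```

A check like this does not stop a full pipeline takeover, but it keeps signing out of reach of the common case where an attacker can only trigger jobs on unprotected branches.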
The third is third-party signing service abuse. Cloud-based signing services have become standard, and they reduce some risk by removing keys from organizational infrastructure. They also introduce a new risk: credential theft against the signing service yields the same outcome as direct key theft, with the added complication that the legitimate organization may have no visibility into how its signing capability is being used.
The fourth is HSM access path compromise. Hardware security modules are designed to make keys non-extractable, but a stolen credential or a compromised application server can still request signing operations through the HSM. The key never leaves the device, but the signing capability does.
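Because the access path is the attack surface, the practical defense is a front-end that authorizes and rate-limits every request before it reaches the HSM. The following is a minimal sketch of that pattern; the class, its limits, and the stubbed HSM call are all assumptions for illustration, and a real deployment would sit in front of PKCS#11 or a vendor SDK.

```python
# Illustrative HSM front-end: per-request authorization plus a sliding
# one-hour rate limit. The actual HSM operation is stubbed out.
import time
from collections import deque

MAX_REQUESTS_PER_HOUR = 20  # illustrative cap, tuned to release cadence

class SigningGateway:
    def __init__(self, authorized_clients: set):
        self.authorized = authorized_clients
        self.recent = deque()  # timestamps of recent signing requests

    def sign(self, client_id: str, digest: bytes, now: float = None) -> bytes:
        now = time.time() if now is None else now
        if client_id not in self.authorized:
            raise PermissionError(f"client {client_id!r} not authorized to sign")
        # Drop timestamps older than one hour, then enforce the cap.
        while self.recent and now - self.recent[0] > 3600:
            self.recent.popleft()
        if len(self.recent) >= MAX_REQUESTS_PER_HOUR:
            raise RuntimeError("signing rate limit exceeded; possible capability abuse")
        self.recent.append(now)
        return self._hsm_sign(digest)

    def _hsm_sign(self, digest: bytes) -> bytes:
        # Placeholder for the real HSM call; the key never leaves the device.
        return b"signature-over-" + digest
```

The rate limit matters because a stolen credential used through a gateway like this produces at most a bounded number of unauthorized signatures before tripping an alarm, rather than an unbounded signing capability.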
The fifth is insider threat. A non-trivial share of 2025 incidents involved current or former employees with legitimate access who used it to sign content for unauthorized purposes. The traditional response of revoking credentials at termination remains imperfect in practice.
Operator tradecraft
The operator tradecraft visible in 2026 incidents has matured along several axes.
Reconnaissance is more thorough. Before targeting a signing capability, operators study the organization's release cadence, the names of developers who appear in release commits, the public portions of build pipelines, and any disclosed information about signing infrastructure. This reconnaissance lets them time the eventual misuse to look unremarkable.
Use is patient. The naive expectation is that a stolen key gets used immediately for high-volume malware signing. The actual pattern is increasingly the opposite: operators sit on signing capability for weeks or months, waiting for a high-value targeted operation. The signed payload, when it appears, is typically deployed against a small number of carefully selected victims, which limits detection visibility.
Distribution channels are selective. A signed payload distributed through email is often caught by gateway controls regardless of signature. A signed payload delivered through a compromised legitimate update mechanism, an MDM platform, or a partner integration tends to get further before defenders notice.
Cleanup is increasingly thoughtful. Operators who realize their access is at risk of detection sometimes deliberately overuse the capability for low-value purposes to draw revocation. Once revocation occurs and the organization moves to new keys, the operator's existing footholds in other systems may persist while the signing-key-specific incident closes.
Detection lag and damage
Detection lag is the structural problem. In the fastest 2025 to 2026 incidents, detection came within days of the first malicious signature, typically because the signed content reached enough victims to register on antivirus telemetry. In the slowest, months passed between first malicious use and detection, almost always because use was low-volume and targeted.
Damage scales with detection lag. A short window means a focused remediation: revoke certificates, rotate keys, notify affected customers, accept some embarrassment. A long window means cascading remediation across every signed artifact produced during the window, customer impact across software updates that consumers had no reason to question, and lasting trust impact with downstream parties.
Several 2025 incidents have produced lasting trust impact that the affected organizations are still managing in 2026. The financial cost of these incidents is dominated not by the immediate response but by the sustained customer skepticism that follows.
Structural defenses
The defenses that have actually held up under adversary pressure cluster around a few structural patterns.
Hardware-backed signing where the key truly cannot leave the device. HSMs and hardware security keys eliminate the simple key theft case. They do not eliminate the signing-capability theft case, but they raise the cost.
Strong authentication on every signing request. WebAuthn-bound signing operations, where each signing event requires a fresh hardware authentication, prevent automated misuse of stolen credentials. The friction is real but the attack class it closes is significant.
Per-release ceremony for high-value signing. Organizations whose major releases require multiple humans to participate in a signing ceremony, with each participant authenticating separately, see meaningfully fewer single-actor compromise scenarios.
Signing identity scoping. Different signing identities for different use cases (release builds versus development builds, customer-facing artifacts versus internal artifacts) limit blast radius when any one is compromised.
Comprehensive signing telemetry. Logs of every signing event, with metadata about who requested it, what was signed, and where the request originated, allow detection of anomalous use patterns. Many 2025 incidents had signing telemetry available but underused.
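Turning that telemetry into detection can start very simply: compare each signing event against a recorded baseline of known hosts and requesters. The event fields and baselines below are assumptions; real signing logs differ per service.

```python
# Minimal anomaly pass over signing telemetry. Field names and
# baseline sets are illustrative assumptions.
KNOWN_HOSTS = {"build-01", "build-02"}
KNOWN_REQUESTERS = {"release-bot"}

def flag_anomalies(events: list) -> list:
    """Return signing events that fall outside the recorded baseline,
    annotated with the reasons they were flagged."""
    flagged = []
    for ev in events:
        reasons = []
        if ev["host"] not in KNOWN_HOSTS:
            reasons.append("unknown origin host")
        if ev["requester"] not in KNOWN_REQUESTERS:
            reasons.append("unknown requester")
        if reasons:
            flagged.append({**ev, "reasons": reasons})
    return flagged
```

Even this crude baseline comparison would have surfaced several of the documented incidents, where the malicious signing requests originated from hosts that had never appeared in the legitimate telemetry.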
Reproducible builds. When the relationship between source and signed binary is verifiable, an unauthorized signed artifact can be detected by comparison against the expected reproducible output.
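The core of that check is byte-level comparison: rebuild from the same source and confirm the distributed artifact matches the reproduced output. A minimal sketch, assuming the build is fully deterministic:

```python
# Reproducible-build check: hash both artifacts and treat any mismatch
# between the reproduced output and the signed artifact as a signal
# worth investigating.
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def matches_reproduced(signed_artifact: bytes, reproduced_artifact: bytes) -> bool:
    """True when the distributed signed artifact is byte-identical to
    what an independent rebuild of the same source produces."""
    return sha256_hex(signed_artifact) == sha256_hex(reproduced_artifact)
```

In practice the hard work is making the build deterministic at all; once it is, the verification step is this small.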
Certificate transparency for code signing
Certificate transparency for web certificates has become a meaningful defensive layer. Code signing has been slower to adopt similar transparency mechanisms, but the infrastructure is starting to emerge.
Several initiatives now publish append-only logs of signed artifacts, allowing organizations to detect signatures issued under their identity that they did not authorize. Adoption is uneven, and the mechanisms differ across platforms, but the direction is clear. By 2027 or 2028, code signing transparency will likely be common enough that operators of stolen keys can no longer assume covert use.
In the interim, organizations that proactively monitor for unauthorized signatures using their certificates, even through ad-hoc means like file scanning of public download mirrors, see materially better detection times.
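The ad-hoc version of that monitoring reduces to a set difference: compare the digests of artifacts observed in the wild under your certificate against the digests your own signing telemetry says you authorized. The digest values below are placeholders.

```python
# Ad-hoc unauthorized-signature check: anything observed under our
# identity that our internal signing logs never recorded is suspect.
def unauthorized_signatures(observed: set, authorized: set) -> set:
    """Digests signed under our identity that we never authorized."""
    return observed - authorized

authorized = {"a1" * 32, "b2" * 32}  # from internal signing telemetry
observed = {"a1" * 32, "c3" * 32}    # scraped from public download mirrors
suspect = unauthorized_signatures(observed, authorized)
```

The same logic applies unchanged once transparency logs mature: the observed set just comes from a log query instead of mirror scraping.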
Recommendations for 2026
For organizations operating significant code signing infrastructure, four practices should be in place by mid-2026 if they are not already.
Move all production signing operations to hardware-backed keys with strong authentication. The cost is real but bounded. The risk reduction is structural.
Implement signing ceremony requirements for high-value releases. The friction is recoverable. The audit trail is durable.
Scope signing identities tightly. Different identities for different use cases. Minimum privilege for each.
Monitor signing telemetry actively. Logs that no one reads catch nothing. Logs that feed an alerting pipeline catch the things that matter.
How Safeguard helps
Safeguard tracks code signing artifacts across the supply chain inventory, surfacing signing identity usage patterns, certificate validity status, and provenance attestation gaps for each signed component in customer SBOMs. When signing infrastructure compromise is publicly disclosed, the platform identifies every affected component currently in customer environments and routes alerts directly to the responsible teams. Policy gates can require provenance attestations for signed components above a defined criticality threshold, and the AI remediation engine generates structured response plans when a signing identity in the customer's supply chain is compromised. For organizations whose trust models depend on signed artifacts produced anywhere in the supply chain, this turns code signing risk from an opaque external dependency into a tracked, governable surface.