Incident Analysis

NVIDIA LAPSUS$ Breach: Stolen Code Signing Certificates Used to Sign Malware

When LAPSUS$ breached NVIDIA, they stole code signing certificates that were immediately weaponized to sign malware. The incident demonstrated how trust mechanisms become attack vectors.

Yukti Singhal
Security Researcher
7 min read

In late February 2022, the LAPSUS$ extortion group breached NVIDIA and stole approximately 1TB of data. The stolen data included employee credentials, proprietary GPU designs, driver source code, and, critically, two code signing certificates. Within days, those certificates were being used in the wild to sign malware, including Cobalt Strike beacons, Mimikatz variants, and backdoors. The NVIDIA breach demonstrated a particularly dangerous form of supply chain attack: not one where malicious code is injected into a legitimate product, but one where the trust mechanism itself, the code signing certificate, is stolen and repurposed.

The Breach Timeline

LAPSUS$ claimed to have first gained access to NVIDIA's systems in late February 2022. By February 26, they were leaking data on Telegram. Their initial demands were unusual: they wanted NVIDIA to open-source their GPU drivers and remove the LHR (Lite Hash Rate) limiter from their graphics cards, a restriction designed to make the cards less attractive to cryptocurrency miners.

When NVIDIA didn't comply, LAPSUS$ escalated their leaks. On March 1, they released portions of NVIDIA's source code along with the code signing certificates. Both certificates had already expired, one in 2014 and one in 2018, but Windows still accepted them for driver signing because of how Microsoft handles legacy driver signatures.

NVIDIA confirmed the breach and stated that they had no evidence of ransomware being deployed on their systems. They characterized the breach as a data theft and extortion attempt. The company began working with law enforcement and cybersecurity firms to investigate and remediate.

The Code Signing Certificate Problem

The two stolen certificates were NVIDIA Corporation certificates used to sign Windows drivers and software. Here's why their theft was so consequential.

Windows trusts NVIDIA's certificates. When Windows encounters a driver or kernel-mode code signed by NVIDIA, it allows the code to load. This is by design; NVIDIA is a legitimate hardware vendor, and their drivers need kernel access. But this trust relationship doesn't distinguish between code that NVIDIA actually authored and code that an attacker signed with a stolen NVIDIA certificate.

Expired certificates still work for driver signing. To avoid breaking drivers for older hardware, Microsoft's kernel-mode signing policy continues to accept drivers signed with end-entity certificates issued before July 29, 2015, provided they chain to a supported cross-signed certificate authority, even after those certificates have expired. The stolen NVIDIA certificates fell under this grandfathering rule, so attackers could sign new malicious drivers in 2022 that Windows would load without warning.
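The grandfathering behavior can be sketched as a simple decision function. This is an illustrative simplification, not the actual Windows implementation; the cutoff date reflects Microsoft's publicly documented policy for legacy cross-signed certificates, and the example dates are representative rather than the precise validity periods of the stolen certificates.

```python
from datetime import datetime, timezone

# Illustrative model of the legacy driver-signing rule -- NOT the
# actual Windows implementation. Certificates issued before the
# cutoff are grandfathered: their expiry is not enforced.
LEGACY_CUTOFF = datetime(2015, 7, 29, tzinfo=timezone.utc)

def driver_signature_accepted(cert_issued: datetime,
                              cert_expires: datetime,
                              now: datetime) -> bool:
    """Decide whether a kernel-mode driver signature is accepted."""
    if cert_issued < LEGACY_CUTOFF:
        return True                 # legacy path: expiry not enforced
    return now <= cert_expires      # modern path: expiry matters

# A certificate issued in 2011 and expired in 2014 still signs
# drivers that Windows loads in March 2022:
print(driver_signature_accepted(
    cert_issued=datetime(2011, 9, 1, tzinfo=timezone.utc),
    cert_expires=datetime(2014, 9, 1, tzinfo=timezone.utc),
    now=datetime(2022, 3, 2, tzinfo=timezone.utc)))  # → True
```

Under this rule, the expiry date that would normally bound a certificate's useful life provides no protection at all once the private key is stolen.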

Driver-level access is the highest privilege. Code that runs in the Windows kernel has unrestricted access to the system. A malicious driver can disable security software, hide processes and files, intercept network traffic, and persist through reboots. Kernel-level malware is extremely difficult to detect and remove.

Within 48 hours of the certificates becoming public, security researchers observed multiple malware samples signed with the stolen NVIDIA certificates:

  • Cobalt Strike beacons configured as kernel-mode drivers
  • Modified versions of Mimikatz, the credential theft tool
  • Custom backdoors designed for persistent access
  • Drivers designed to disable endpoint protection software

These signed malware samples were not created by LAPSUS$ themselves. The certificates were publicly available, so any attacker could use them. The threat expanded far beyond a single group.

The Trust Model Failure

Code signing exists to solve a fundamental problem: how does a computer know whether to trust a piece of software? The answer, in the current model, is that trusted authorities vouch for the software's authenticity. Certificate authorities vouch for the identity of the signer. The signer (NVIDIA, in this case) vouches for the software's legitimacy. The operating system (Windows) trusts the chain of authorities.

This model has a critical weakness: it assumes the signer's private key is never compromised. If the key is stolen, the entire trust chain collapses. Malware signed with a legitimate certificate is, from the operating system's perspective, indistinguishable from legitimate software.

The NVIDIA breach exposed this weakness in stark terms. Every Windows system in the world that trusted NVIDIA's certificates was now potentially vulnerable to malware signed with those certificates. The certificates couldn't be instantly revoked across every system. Even after Microsoft added the certificates to their revocation list, systems that didn't receive updates remained vulnerable.

Microsoft's Response

Microsoft took several steps to mitigate the risk from the stolen certificates. They added the compromised certificates to their revocation list, and Windows Defender was updated to detect known malware samples signed with the stolen certificates.

However, the response highlighted the limitations of certificate revocation. Not all systems check revocation lists in real-time. Offline systems, air-gapped environments, and systems with delayed updates remained vulnerable. And detecting "known malware samples" doesn't help when attackers create new, unique malware signed with the same certificates.
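The revocation lag can be made concrete with a back-of-the-envelope calculation. The dates and cache policy below are illustrative, not the actual incident timeline: a client that caches its revocation list only learns of a revocation at its next refresh, so its personal abuse window can extend past the upstream revocation date.

```python
from datetime import datetime, timedelta, timezone

# Illustrative dates -- not the actual incident timeline.
certs_leaked   = datetime(2022, 3, 1, tzinfo=timezone.utc)  # certs public
revocation_pub = datetime(2022, 3, 5, tzinfo=timezone.utc)  # revocation published

# This client last refreshed its cached revocation list before the
# leak and refreshes weekly, so it learns about the revocation late.
last_crl_fetch = datetime(2022, 2, 28, tzinfo=timezone.utc)
refresh_period = timedelta(days=7)

client_learns = max(revocation_pub, last_crl_fetch + refresh_period)
abuse_window  = client_learns - certs_leaked
print(abuse_window.days)  # → 6
```

Offline and air-gapped systems are the degenerate case of this calculation: their refresh never happens, so their abuse window is effectively unbounded.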

Microsoft also updated their driver signing policies to make it harder for expired certificates to be used for new driver signatures. But these policy changes took time to propagate across the Windows ecosystem.

Broader Implications for Code Signing Security

The NVIDIA certificate theft reinforced several uncomfortable truths about code signing:

Certificates are high-value targets. Any organization that holds code signing certificates, which in practice means every software vendor, needs to protect them with the same rigor it applies to its crown jewels. Hardware Security Modules (HSMs), strict access controls, and monitoring for unauthorized use are all essential.

Revocation is slow and incomplete. The infrastructure for revoking compromised certificates and propagating that revocation to every affected system is fundamentally slow. There is always a window between compromise and effective revocation during which the stolen certificate can be used freely.

Binary trust is brittle. The current model of "trust this certificate or don't" lacks nuance. A more robust model would consider context: is this driver consistent with what NVIDIA typically ships? Was it signed from NVIDIA's known signing infrastructure? Does it match any known NVIDIA product? These contextual checks could catch malicious use of legitimate certificates.
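The contextual checks described above could be layered on top of a plain signature check, returning per-check results rather than a single yes/no. Everything in this sketch is hypothetical: the signer name, the host allowlist, and the release-hash registry are invented for illustration.

```python
import hashlib

# Illustrative allowlists -- in practice these would come from the
# vendor's release manifest and signing-infrastructure inventory.
KNOWN_RELEASE_HASHES = {
    hashlib.sha256(b"nvidia-driver-511.79.sys").hexdigest(),
}
KNOWN_SIGNING_HOSTS = {"build-signer-01.internal.example"}

def contextual_trust(signer: str, artifact: bytes, signing_host: str):
    """Return (trusted, per-check detail) instead of a binary verdict."""
    checks = {
        "expected_signer": signer == "NVIDIA Corporation",
        "known_release": hashlib.sha256(artifact).hexdigest()
                         in KNOWN_RELEASE_HASHES,
        "known_infrastructure": signing_host in KNOWN_SIGNING_HOSTS,
    }
    return all(checks.values()), checks

# Malware signed with the stolen certificate passes the signer check
# but fails every contextual one:
ok, detail = contextual_trust("NVIDIA Corporation",
                              b"cobalt-strike-beacon.sys",
                              "attacker-laptop")
print(ok)  # → False
```

The per-check detail matters: a failed "known_release" check on an otherwise valid NVIDIA signature is exactly the signal that would have flagged the post-breach malware.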

Certificate theft enables persistent access. Unlike exploiting a vulnerability that can be patched, a stolen code signing certificate provides indefinite access until it's comprehensively revoked. This makes certificate theft one of the most valuable outcomes of a breach for an attacker.

Protecting Code Signing Infrastructure

Organizations that handle code signing certificates should implement several protective measures:

Use HSMs for key storage. Hardware Security Modules prevent key extraction. Even if an attacker compromises the signing server, they can't steal the private key if it's in an HSM. They can potentially use it to sign unauthorized code, but they can't distribute the key to others.

Implement signing approval workflows. Every signing operation should require approval from authorized personnel. Automated signing without human oversight creates risk if the signing pipeline is compromised.
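A minimal sketch of such a gate, assuming a two-person rule; the approver names and the required threshold are placeholders, and the real signing call (for example, an HSM-backed API) is stubbed out.

```python
# Hypothetical approver registry -- names are illustrative.
AUTHORIZED_APPROVERS = {"release-lead", "security-officer", "cto"}

def sign_artifact(artifact_id: str, approvals: set, required: int = 2) -> str:
    """Refuse to sign unless enough distinct authorized approvers agree."""
    valid = approvals & AUTHORIZED_APPROVERS
    if len(valid) < required:
        raise PermissionError(
            f"{artifact_id}: {len(valid)} valid approval(s), "
            f"{required} required")
    # Placeholder for the real signing operation (e.g. HSM-backed).
    return f"signed:{artifact_id}"

print(sign_artifact("driver-v1.2.3", {"release-lead", "security-officer"}))
# → signed:driver-v1.2.3
```

The point of the gate is that a compromised build pipeline alone cannot produce a signature; an attacker would also need to subvert the approval step.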

Monitor for unauthorized signatures. Maintain a log of every signing operation and alert on any signature that doesn't correspond to a known release or build.
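Such reconciliation can be as simple as diffing the signing log against a build registry. The log format and registry below are illustrative, not any particular tool's schema.

```python
# Hypothetical build registry and signing log -- formats are illustrative.
KNOWN_BUILDS = {"driver-v1.2.3", "tool-v2.0.0"}

signing_log = [
    {"artifact": "driver-v1.2.3", "signed_by": "build-signer-01"},
    {"artifact": "beacon.sys",    "signed_by": "build-signer-01"},
]

def unauthorized_signatures(log, known_builds):
    """Flag every signing event that matches no known release or build."""
    return [entry for entry in log if entry["artifact"] not in known_builds]

for alert in unauthorized_signatures(signing_log, KNOWN_BUILDS):
    print("ALERT:", alert["artifact"])  # → ALERT: beacon.sys
```

This is the detection that actually catches stolen-key abuse: the signature itself is cryptographically valid, so only the mismatch with expected releases reveals it.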

Use short-lived certificates. Certificates with shorter validity windows reduce the blast radius of theft. If a certificate expires in 30 days, the window for abuse is inherently limited.

How Safeguard.sh Helps

Safeguard.sh helps organizations maintain the integrity of their software artifacts through comprehensive SBOM tracking and build verification. Our platform monitors the signing status and certificate chains of your software components, alerting you when certificates in your dependency chain are compromised or revoked. Policy gates can enforce that only artifacts signed with approved, current certificates are accepted into your build pipeline. When a breach like NVIDIA's puts trusted certificates in attacker hands, Safeguard.sh's independent verification ensures that your organization's software supply chain doesn't inadvertently trust malware signed with stolen credentials.
