Why is the DPRK IT worker story a supply chain problem?
Because the attacker is not outside your perimeter. They are on your payroll, holding valid credentials, committing code to your repositories, and reviewing pull requests. Publicly documented since 2022 by the U.S. Treasury, State Department, and FBI, the pattern involves North Korean nationals, often working from locations such as Russia, China, and Southeast Asia, using stolen or fabricated identities to obtain remote engineering roles at Western companies. The U.S. Department of Justice has unsealed multiple indictments involving "laptop farms" in the United States, where facilitators in American cities receive and operate company-issued laptops on behalf of overseas DPRK workers.
From an engineering risk standpoint, this is a supply chain attack because the software supply chain includes the code your own developers write, and in this pattern some of those developers are adversaries operating as fully trusted insiders.
What does the typical pattern look like in public cases?
Answer first: fake identity, clean interview, solid tenure, eventual risk events.
Based on public DOJ indictments, the incident that KnowBe4 itself disclosed in 2024, and reporting by firms such as Mandiant, CrowdStrike, and Secureworks:
- A remote engineering role is posted (often Node.js, React, Python, or blockchain-related).
- A resume is submitted under a stolen or fabricated identity. The candidate interviews competently, often via video with face-altering or stand-in arrangements.
- Hiring, background check, and onboarding proceed. The company-issued laptop ships to a U.S. address controlled by a facilitator. The facilitator installs remote-access software; the actual worker logs in from overseas.
- The new hire performs adequate engineering work, often for months. Pay flows to accounts that route to DPRK-controlled channels.
- At some point: attempted data exfiltration, malware execution, or access brokering to external DPRK-nexus groups. Not every case involves overt malicious action; some are primarily a sanctions-evasion revenue play. But the access is there.
KnowBe4's public incident disclosure is a useful reference because the company was candid and specific: the hire was detected on day one, when endpoint security flagged an attempt to load malware onto the company-issued laptop, and KnowBe4 walked through which controls caught the activity and which did not.
How do these workers pass interviews?
From public reporting:
- Resumes are well-tailored. Several have used real GitHub activity either purchased or stolen from legitimate developers.
- Technical interviews are passed either by the actual operator (competent developers) or with live coaching from a team.
- Video interviews increasingly involve deepfake or face-altering technology. This has been publicly discussed by companies that caught attempts, though many cases still use simpler approaches.
The uncomfortable truth is that basic competence plus a strong narrative often beats unstructured interview rubrics.
What should hiring-side controls look like?
- Identity verification beyond paperwork. Use a vendor that validates government ID against liveness video, and cross-checks name, address, and SSN/tax ID data against authoritative sources. Standalone E-Verify does not catch this.
- Address verification for laptop delivery. If a U.S. hire wants a laptop sent to a warehouse-style address, that is a flag. Multiple indictments involved facilitator addresses that, on closer inspection, turned out to be drop points hosting racks of company-issued laptops.
- Video interview with identity artifact check. Ask the candidate to hold up their ID during one interview segment. Not definitive, but it raises the cost of running a stand-in.
- Background check scope. Many specialist background-check vendors have added a "DPRK risk" module that scans for indicators like stolen-identity signals, residency patterns, and correlations with known facilitator networks. The added cost is worth it for remote roles.
- Probationary period with fewer privileges. Ninety-day trust ramp. Access to production data, signing keys, and sensitive code should not be granted on day one regardless of the hire.
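The trust ramp in the last point can be expressed as a simple tenure-gated policy. A minimal sketch, assuming hypothetical tier names and ramp durations (any real implementation would live in your IdP or access-management system, not a script):

```python
from datetime import date

# Hypothetical access tiers, ordered least to most privileged.
TIERS = ["repo_read", "repo_write", "ci_secrets", "prod_data", "signing_keys"]

# Assumed policy: days of tenure required before each tier is granted.
RAMP = {
    "repo_read": 0,
    "repo_write": 14,
    "ci_secrets": 45,
    "prod_data": 90,
    "signing_keys": 180,
}

def allowed_tiers(hire_date: date, today: date) -> list[str]:
    """Return the access tiers a hire is eligible for, based on tenure alone."""
    tenure_days = (today - hire_date).days
    return [t for t in TIERS if tenure_days >= RAMP[t]]

# A hire 30 days in gets repository access but no secrets or production data.
print(allowed_tiers(date(2024, 1, 1), date(2024, 1, 31)))
# → ['repo_read', 'repo_write']
```

The point of encoding this as data rather than ad hoc approvals is that exceptions become visible: any grant ahead of the ramp requires an explicit, auditable override.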
What should post-hire controls look like?
- Workstation-based anomaly detection. Residential-proxy or VPN use, unusual keyboard-layout switching, and work patterns that are consistently off-hours for the employee's claimed timezone. Several public cases were caught when endpoint telemetry showed the laptop's actual source network did not match the employee's claimed location.
- Code-commit anomaly detection. Unusual PR patterns, commit times that do not match the claimed timezone, commits touching areas the worker was not assigned to, use of obscure dependencies or novel cryptographic primitives. GitHub and GitLab audit-log streaming, plus a per-contributor baseline, would have surfaced anomalies in several of the publicly documented 2024 cases, at least after the fact.
- Remote-access tool monitoring. Any unsanctioned RMM agent (AnyDesk, TeamViewer, Chrome Remote Desktop) installed on a corporate laptop should be a page, not a ticket. Several detected cases surfaced when EDR flagged a remote session tool the employee had no business installing.
- Secret scanning at pre-commit and in CI. Some insider cases have involved planted credentials or webhook endpoints in code. Pre-commit secret scanning catches some; post-commit audit catches more.
- Dependency review. A new dependency added to a project should be reviewed. DPRK-nexus actors have been tied to multiple npm-registry poisoning campaigns, and a malicious insider introducing such a package is a realistic scenario.
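The timezone-mismatch signal above is cheap to compute from commit metadata alone. A minimal sketch, assuming a simple working-hours window and a claimed UTC offset (both hypothetical parameters; a real detector would use a learned per-contributor baseline):

```python
from datetime import datetime, timedelta, timezone

def offhours_fraction(commit_times_utc, claimed_utc_offset_hours, start=7, end=20):
    """Fraction of commits landing outside the start-end local-hour window
    in the employee's claimed timezone (assumed working-hours window)."""
    tz = timezone(timedelta(hours=claimed_utc_offset_hours))
    off = sum(1 for t in commit_times_utc
              if not (start <= t.astimezone(tz).hour < end))
    return off / len(commit_times_utc)

# Commits at 02:00-04:00 UTC. For a claimed US Eastern worker (UTC-5) these
# are 21:00-23:00 local, all off-hours; for UTC+9 they are midday, all normal.
commits = [datetime(2024, 5, 1, h, 0, tzinfo=timezone.utc) for h in (2, 3, 4)]
print(offhours_fraction(commits, claimed_utc_offset_hours=-5))  # → 1.0
print(offhours_fraction(commits, claimed_utc_offset_hours=9))   # → 0.0
```

A sustained high fraction is not proof of anything by itself, but it is exactly the kind of cheap signal worth correlating with network-location telemetry.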
What does the risk look like at a codebase level?
You should expect an insider with commit access to be capable of any or all of:
- Introducing subtle vulnerabilities (logic flaws, off-by-one errors in access checks, missing authorization) that bypass automated scanning.
- Adding malicious dependencies through lockfile or version-pin changes.
- Staging credential theft through seemingly innocuous telemetry or logging additions.
- Exfiltrating source code and secrets gradually.
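To make the first point concrete, here is a purely illustrative (hypothetical) off-by-one in an access check: the kind of change that reads like a harmless refactor in a large diff, and that pattern-based scanners rarely flag because nothing about it is syntactically suspicious.

```python
def check_access_intended(user_level: int, required_level: int) -> bool:
    # The intended policy: the user must meet or exceed the required level.
    return user_level >= required_level

def check_access_planted(user_level: int, required_level: int) -> bool:
    # A planted off-by-one disguised as an equivalent rewrite. It quietly
    # grants access to users one privilege level short of the requirement.
    return user_level > required_level - 2

# A level-2 user against a level-3 requirement:
print(check_access_intended(2, 3))  # → False
print(check_access_planted(2, 3))   # → True
```

Catching this kind of change is a reviewer-diligence problem, which is why the mitigations below emphasize controls that do not depend on any one reviewer.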
The generic mitigation is mandatory code review by an established, trusted reviewer, combined with automated controls that do not depend on the reviewer's attention span. CodeQL, dependency-review bots, secret scanners, and commit-signature enforcement all contribute.
What is the supply chain angle for downstream customers?
If your vendor or OSS maintainer unknowingly employs a DPRK IT worker, you inherit the risk. Practical mitigations at your end:
- Prefer vendors who publish SBOMs and sign releases with verifiable provenance.
- Require vendors to have documented insider-threat programs for production-code contributors.
- Monitor releases of critical open-source dependencies for sudden introduction of new maintainers who are not known in the community; multiple recent package-compromise cases started with a friendly new contributor being granted maintainer rights.
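Watching for new maintainers can be automated against public registry metadata. A sketch for npm, which exposes a maintainers array in its package metadata endpoint; the snapshot-storage side is omitted, and the names in the example are hypothetical:

```python
import json
import urllib.request

def fetch_maintainers(package: str) -> set[str]:
    """Fetch current maintainer usernames for an npm package from the
    registry.npmjs.org metadata endpoint."""
    url = f"https://registry.npmjs.org/{package}"
    with urllib.request.urlopen(url) as resp:
        meta = json.load(resp)
    return {m["name"] for m in meta.get("maintainers", [])}

def new_maintainers(baseline: set[str], current: set[str]) -> set[str]:
    """Maintainers present now that were absent in the stored snapshot."""
    return current - baseline

# Alert if anyone appears who was not in yesterday's snapshot.
# (baseline would come from persisted state; names are hypothetical)
alerts = new_maintainers({"alice", "bob"}, {"alice", "bob", "mallory"})
print(alerts)  # → {'mallory'}
```

Run on a schedule against your critical dependencies, this turns "a friendly new contributor was granted maintainer rights" from something you read about after the compromise into an alert you can triage the same day.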
Is this problem going away?
No, and not anytime soon. Sanctions regimes against the DPRK incentivize exactly this kind of revenue-generating fraud. U.S. Treasury OFAC advisories and DOJ indictments unsealed in 2023 and 2024 have steadily raised the enforcement tempo, but the model is durable because remote work, fragmented hiring pipelines, and rich payment rails make it economical. Expect the problem to persist and the tradecraft to improve, particularly around identity deepfakes.
How Safeguard.sh Helps
Safeguard.sh extends supply chain visibility into the human supply chain of your software. We monitor your engineering-contributor graph for anomalies, watch open-source and vendor maintainer changes that align with DPRK-IT-worker-style tradecraft, correlate unusual commit and access patterns with external threat intelligence on facilitator networks, and flag vendor maintainers that the broader security community has tied to DPRK clusters. Your code supply chain is only as strong as the identities behind it; we help you see those identities with the same rigor you apply to third-party software.