Industry Analysis

MITRE ATT&CK Meets SSDF: A Mapping

ATT&CK describes how adversaries operate; SSDF describes how to build software that resists them. Here's how to map adversary techniques to secure-development tasks so your threat model drives real engineering change.

Nayan Dey
Senior Security Engineer
7 min read

Two Frameworks, Opposite Ends of the Kill Chain

MITRE ATT&CK and the NIST Secure Software Development Framework sit at opposite ends of the same problem. ATT&CK describes what attackers actually do — catalogued across 14 tactics, hundreds of techniques, and thousands of procedures observed in incident response. SSDF describes what engineers should do when building software — four practice groups, 19 practices, 42 tasks, each tied to an implementation example.

Neither framework crosswalks cleanly to the other. ATT&CK is observation-oriented (what was seen in the wild); SSDF is prescription-oriented (what good practice looks like). But if you're building a threat-informed AppSec program, you need to translate between them — specifically, between the techniques adversaries use to compromise software supply chains and the SSDF tasks that would prevent or detect those techniques.

This post builds that translation. It's not an official MITRE-NIST mapping; it's a working crosswalk based on engagement experience, using ATT&CK v14 technique IDs and NIST SP 800-218 v1.1 task IDs.

The ATT&CK Supply Chain Tactics

ATT&CK's Initial Access tactic (TA0001) includes T1195 Supply Chain Compromise, with three sub-techniques:

  • T1195.001 — Compromise Software Dependencies and Development Tools
  • T1195.002 — Compromise Software Supply Chain
  • T1195.003 — Compromise Hardware Supply Chain

Beyond the initial-access point, supply chain attacks span other tactics:

  • T1199 Trusted Relationship — adversary has legitimate access through a trusted third party.
  • T1078 Valid Accounts — compromised developer or CI/CD service accounts.
  • T1554 Compromise Host Software Binary — modification of client software binaries on the victim host (renamed from Compromise Client Software Binary in ATT&CK v14).
  • T1608.001 Stage Capabilities: Upload Malware — staging of malicious packages in public registries (typosquatting, dependency confusion).
  • T1588.001 Obtain Capabilities: Malware — acquisition of malicious capability often co-occurring with supply chain staging.

Each of these has SSDF-level prevention opportunities. The mapping below walks through the highest-value connections.

T1195.001 (Software Dependencies) ↔ SSDF PW.4

T1195.001 is where dependency-confusion attacks, typosquatting, and compromised maintainers live. The colors.js self-sabotage, the ua-parser-js credential-stealing compromise, and the XZ Utils backdoor all map here (with overlap into T1195.002 and T1078).

The SSDF practice that directly addresses this is PW.4 (Reuse Existing, Well-Secured Software When Feasible Instead of Duplicating Functionality). PW.4.1 covers acquiring and maintaining well-secured components from commercial, open-source, and other third-party developers; PW.4.4 requires verifying that acquired components comply with the organization's security requirements throughout their life cycles.

Further depth comes from PO.3 (Implement Supporting Toolchains): PO.3.1 specifies which security tools belong in each toolchain, and PO.3.2 covers deploying, operating, and maintaining them securely. SCA tooling, SBOM generation, and malicious-package detection all sit here.

Concrete control pairings:

  • T1195.001 ↔ PW.4.1 + PW.4.4 — registry pinning, signature verification, SCA scanning.
  • T1195.001 ↔ PO.3.2 — automated malicious-package detection, dependency confusion defenses (private registry, scoped namespaces).
  • T1195.001 ↔ RV.1.1 — intake of registry advisory feeds (npm, PyPI, rubygems security advisories).
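
As a concrete sketch of the registry-pinning pairing, here is a minimal lockfile audit in Python. It assumes the npm package-lock.json v2/v3 layout, and the registry allow-list is illustrative; a real gate would also verify the integrity hashes against the fetched artifacts.

```python
# Minimal PW.4.1-style lockfile gate. In npm lockfile v2/v3, each
# resolved dependency lives under "packages" and should carry an SRI
# "integrity" hash plus a "resolved" URL on an approved registry.
APPROVED_REGISTRIES = ("https://registry.npmjs.org/",)

def audit_lockfile(lock: dict) -> list[str]:
    """Return findings for entries missing integrity hashes or
    resolved from registries outside the allow-list."""
    findings = []
    for path, entry in lock.get("packages", {}).items():
        if path == "":  # the root project entry has no resolved URL
            continue
        if not entry.get("integrity"):
            findings.append(f"{path}: missing integrity hash")
        resolved = entry.get("resolved", "")
        if resolved and not resolved.startswith(APPROVED_REGISTRIES):
            findings.append(f"{path}: unapproved registry {resolved}")
    return findings
```

Run in CI, a policy gate could fail the build whenever the findings list is non-empty.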

T1195.002 (Compromise Software Supply Chain) ↔ SSDF PS + PW

T1195.002 covers the adversary compromising software before it reaches the victim — the SolarWinds/Sunburst pattern, where the adversary modifies build output. This is the hardest supply-chain scenario to prevent, because the compromise arrives already inside the victim's trust boundary, carried by the victim's own vendors.

The SSDF tasks that address it span PS (Protect the Software) and PW (Produce Well-Secured Software):

  • PS.1 (Protect All Forms of Code from Unauthorized Access) — source-code repository protection, branch protection, CODEOWNERS enforcement, required review. These controls raise the bar against adversary modification even when the adversary holds valid-account access.
  • PS.2 (Provide a Mechanism for Verifying Software Release Integrity) — code signing, reproducible builds, provenance attestation. This is the defense-in-depth layer that lets a downstream consumer verify that the artifact they received matches what was built.
  • PS.3 (Archive and Protect Each Software Release) — retention of exact release artifacts for forensic analysis, tied to the SBOM for that release.
  • PW.6 (Configure the Compilation, Interpreter, and Build Processes to Improve Executable Security) — hardened build infrastructure that itself resists tampering.

For a downstream consumer of third-party software — a customer of the compromised vendor — the relevant SSDF tasks are PW.4.4 (verify acquired components before incorporation) and RV.3 (identify root causes of incidents and adjust processes). This is where VEX documents and SLSA provenance earn their keep.
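
At its simplest, the PS.2 verification step reduces to recomputing a digest over the artifact actually received and comparing it to the value published out of band. A minimal sketch; signature and provenance checking (Sigstore, SLSA attestations) would layer on top of this:

```python
import hashlib
import hmac

def verify_release(artifact: bytes, expected_sha256: str) -> bool:
    """Recompute the artifact's SHA-256 and compare it to the checksum
    published out of band (release page, signed attestation)."""
    digest = hashlib.sha256(artifact).hexdigest()
    # compare_digest is constant-time; overkill for a public checksum,
    # but a harmless habit
    return hmac.compare_digest(digest, expected_sha256.lower())
```

The important property is that the expected value travels over a different channel than the artifact itself, so tampering with one does not tamper with both.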

T1199 (Trusted Relationship) ↔ SSDF PO

T1199 is the adversary pivoting through a trusted third party's access. The Target breach via an HVAC vendor is the canonical example; in a software context, it is a consultant, contractor, or MSP whose legitimate access has been compromised.

SSDF tasks that address trusted-relationship risk:

  • PO.1.1 — define security requirements for software development, including for third-party developers.
  • PO.2.1 — create roles and responsibilities for developers, including third-party developers.
  • PO.2.2 — provide training to personnel with roles in secure software development, including third parties.
  • PO.5 (Implement and Maintain Secure Environments for Software Development) — the development environment itself is hardened, so that a compromised trusted party can't pivot.

T1078 (Valid Accounts) in Developer Context ↔ SSDF PO.5 + PS.1

T1078 covers adversaries using legitimate credentials. In a software-development context, this is the compromised developer laptop, the stolen GitHub personal access token, or the phished CI/CD service account. The 2023 CircleCI incident, the Codecov Bash Uploader compromise, and several recent npm maintainer takeovers all fit here.

Mapping:

  • T1078.001 (Default Accounts) ↔ PO.5 — remove default credentials, enforce password policies.
  • T1078.003 (Local Accounts) ↔ PS.1 — developer laptop controls, disk encryption, MFA for local accounts.
  • T1078.004 (Cloud Accounts) ↔ PO.5 + PS.1 — MFA, short-lived credentials, OIDC for CI/CD in lieu of long-lived secrets.
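
One cheap guardrail for these pairings is scanning pipeline config for long-lived static credentials, a nudge toward the short-lived OIDC pattern. A rough sketch with two illustrative patterns; real secret scanners ship far richer rule sets plus entropy checks:

```python
import re

# Two illustrative long-lived credential formats: AWS access key IDs
# and classic GitHub personal access tokens. Not exhaustive.
STATIC_CREDENTIAL_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github_pat": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
}

def find_static_credentials(text: str) -> list[str]:
    """Return the names of credential patterns present in CI config
    text; any hit suggests migrating that secret to OIDC or a vault."""
    return [name for name, pattern in STATIC_CREDENTIAL_PATTERNS.items()
            if pattern.search(text)]
```

Wired into a pre-commit hook or pipeline lint step, this turns the "no long-lived secrets" policy into something enforceable rather than aspirational.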

T1608.001 (Upload Malware) ↔ SSDF PW.4 + PO.3

The staging half of typosquatting and dependency confusion fits here even more precisely than under T1195.001. The adversary uploads a package with a name similar to a legitimate one (typosquatting) or matching an internal package name to a public registry (dependency confusion), then waits for a victim to install it.

  • T1608.001 ↔ PW.4.1 — approved suppliers, private package registry, pinned versions.
  • T1608.001 ↔ PW.4.4 — SCA scanning with malicious-package detection.
  • T1608.001 ↔ PO.3.2 — automated tooling for dependency-confusion detection.

Operationalizing the Mapping

The mapping matters only if it drives engineering decisions. The pattern I use:

  1. Threat modeling references ATT&CK techniques. Every threat identified in design review carries one or more ATT&CK IDs.
  2. Controls reference SSDF tasks. Each preventive, detective, or responsive control carries an SSDF task ID.
  3. The design review document makes the connection explicit. The threat "compromise of direct npm dependency" (T1195.001) is mitigated by "SCA scanning on every PR with a malicious-package feed" (PW.4.4 + PO.3.2) and "dependency pinning with lockfile integrity verification" (PW.4.1).
  4. ATT&CK Navigator layers visualize coverage. Techniques with SSDF task coverage are highlighted; uncovered techniques become backlog items.
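
Step 4 can be automated by emitting a Navigator layer file straight from the crosswalk. The sketch below uses an abbreviated layer schema (name, domain, techniques); check the Navigator docs for the exact version fields your instance expects, and treat the covered-technique map as illustrative:

```python
import json

# Illustrative subset of the crosswalk above: technique ID -> score.
COVERED = {
    "T1195.001": 100,
    "T1195.002": 100,
    "T1199": 100,
    "T1078": 100,
    "T1608.001": 100,
}

def navigator_layer(covered: dict) -> str:
    """Emit a minimal ATT&CK Navigator layer that colors covered
    techniques green; uncovered techniques simply stay uncolored."""
    layer = {
        "name": "SSDF coverage",
        "domain": "enterprise-attack",
        "techniques": [
            {"techniqueID": tid, "score": score, "color": "#31a354"}
            for tid, score in sorted(covered.items())
        ],
    }
    return json.dumps(layer, indent=2)
```

Regenerating the layer from the same data that drives design reviews keeps the coverage picture honest: the visualization can never drift from the controls actually mapped.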

This gives the AppSec program a traceable line from adversary behavior to engineering practice, which is the definition of threat-informed defense.

Caveats

ATT&CK is descriptive, not exhaustive. New techniques get added every version. SSDF is prescriptive, not operational — tasks describe what to do, not how. The mapping between them captures a moment; it needs to be revisited when either framework updates.

Also, some ATT&CK techniques have no meaningful SSDF counterpart. T1078.002 (Domain Accounts) is an infrastructure concern, not a software-development concern. Don't force a mapping where none naturally exists.

How Safeguard Helps

Safeguard surfaces supply-chain ATT&CK patterns — dependency confusion, malicious-package publication, compromised maintainer behavior — and maps findings to SSDF tasks PW.4, PO.3, and PS.1 in the finding detail view. Each vulnerability carries technique context where available, so AppSec teams can trace from a specific finding back to adversary behavior and forward to preventive control. The policy gate language supports both ATT&CK technique IDs and SSDF task IDs, letting teams build threat-informed rules directly into their CI/CD. The result is threat modeling that emits real controls rather than documentation.
