Most incident retrospectives I have read in the last few years eventually circle back to the same place: an engineer's laptop. The ones where a production database got popped, the ones where cloud credentials ended up on GitHub, the ones where a CI pipeline suddenly started pushing a new binary that no one recognized. The laptop is, for a lot of organizations, the softest target with the widest blast radius. It holds secrets to everything, and it is protected by exactly as much security as the engineer and their configuration remember to apply.
The industry has spent the last decade building excellent vaults for production secrets. The same industry has been less thoughtful about the copies of those secrets that live in ~/.aws/credentials, in ~/.docker/config.json, in ~/.kube/config, in Git helper caches, in local .env files, and in a thousand other places that a well-resourced attacker will enumerate before they type a single command against production.
The real exfiltration paths
Start by being specific about what we are defending against. Generic endpoint malware is one threat model; a targeted supply chain attack is another. Both exfiltrate dev machine secrets, but through different paths. A comprehensive view has to cover:
Malicious dependency install. An engineer runs npm install or pip install against a repo, and a compromised or typosquatted package executes arbitrary code during install. The code reads ~/.aws, ~/.ssh, ~/.netrc, environment variables in the shell, and shell history, and posts them out. This is the category that has produced the biggest publicly known incidents.
Malicious IDE extension. VS Code, JetBrains, and other IDE extensions run with full user privileges, so a compromised extension has direct access to the same secret files the engineer does. The marketplace model, trusted once at install and then auto-updated silently, is a structural weakness.
Browser-based theft. Developers live in browsers, which hold cloud console session cookies, GitHub tokens in extensions, and sometimes secrets in the form of pasted strings in AI chat tools. A compromised browser, extension, or session-hijacking attack gets them all.
Physical theft. Laptops still get stolen. Disk encryption helps, but not if the machine is stolen unlocked, and not if the attacker has time to attempt cold-boot or evil-maid attacks.
Insider exfiltration. An engineer leaves, or is compromised via social engineering, and copies what they have. Most secrets sit on developer machines in human-readable form precisely because that readability was the convenience that put them there in the first place.
Each of these paths has different mitigations. The one mitigation that helps against all of them is keeping secret material out of readable files entirely, which is what a vault does; we will come back to it.
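The first path is worth making mechanical. npm runs lifecycle scripts declared by any package it installs, so the entire attack fits in one line of a manifest. The package name and script below are invented and harmless:

```shell
# A stand-in for a malicious package manifest: the "postinstall" hook is
# executed automatically by `npm install`, with the installing user's
# full privileges and environment.
pkg=$(mktemp -d)
cat > "$pkg/package.json" <<'EOF'
{
  "name": "innocuous-util",
  "version": "1.0.0",
  "scripts": {
    "postinstall": "echo would now read $HOME/.aws and POST it out"
  }
}
EOF
hooks=$(grep -c '"postinstall"' "$pkg/package.json")
echo "install-time hooks declared: $hooks"
rm -rf "$pkg"
```

One line of JSON is the whole delivery mechanism; everything after that is ordinary file reads as the user.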
The credential files attackers actually target
There is a well-known hit list that stealers query. It is not exhaustive, but it is predictable:
- AWS credentials in ~/.aws/credentials and in environment variables
- SSH private keys in ~/.ssh/
- Git credentials in ~/.git-credentials or the credential helper cache
- Kubernetes configs in ~/.kube/config, which often contain long-lived bearer tokens
- Cloud SDK caches (~/.config/gcloud, ~/.azure)
- Docker registry credentials in ~/.docker/config.json
- Browser profile databases holding session cookies
- Shell history, which reliably contains at least one secret exported inline
- .env files in project directories
Every one of these locations is readable by the logged-in user without additional privileges. An attacker who runs code as the user runs code with full access to all of them. Operating system sandboxing does almost nothing here.
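The enumeration itself is trivial. A harmless sketch against a throwaway directory, with two planted fakes standing in for real credential files:

```shell
# A commodity stealer's first move is a filesystem walk over a short,
# predictable hit list. Every read below succeeds as the plain logged-in
# user: no elevation, no exploit, just open().
demo_home=$(mktemp -d)
mkdir -p "$demo_home/.aws" "$demo_home/.ssh"
printf 'aws_access_key_id = AKIAFAKEFAKEFAKE\n' > "$demo_home/.aws/credentials"
printf 'not-a-real-key\n' > "$demo_home/.ssh/id_ed25519"

found=0
for f in "$demo_home/.aws/credentials" "$demo_home/.ssh/id_ed25519" \
         "$demo_home/.git-credentials" "$demo_home/.kube/config"; do
  if [ -r "$f" ]; then
    echo "readable: $f"
    found=$((found + 1))
  fi
done
echo "$found files readable with no privilege escalation"
rm -rf "$demo_home"
```

The loop is the whole program. Real stealers add exfiltration and a longer path list, but nothing structurally harder.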
Short-lived credentials are the structural fix
The only mitigation that materially changes this threat model is moving to credentials that are too short-lived to be worth stealing. An AWS credential that is valid for an hour and tied to a session started five minutes ago is a much smaller problem than a long-lived access key. A GitHub token that expires daily and can be regenerated from SSO is a much smaller problem than a personal access token with no expiry.
The practical moves that get you there:
- Enforce SSO for cloud console access and use short-lived STS credentials via aws sso login or equivalent for CLI work. Forbid long-lived IAM user access keys for humans.
- Use short-lived tokens for source control, issued via SSO or device auth, with refresh flows that require device reattestation.
- Move Kubernetes access to OIDC with short-lived tokens issued against the user's SSO identity. Long-lived service account tokens in kubeconfigs are a historical accident; treat them as one.
- Use OIDC-based workload identity federation (IAM role federation on AWS, Workload Identity Federation on GCP, the OIDC tokens GitHub Actions issues) for CI rather than provisioning static keys to pipelines, where they could be stolen during a repo compromise.
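For the first bullet, the shape of an SSO profile looks roughly like this. The start URL, account id, and role name are placeholders, and the snippet writes to a temp file rather than ~/.aws/config:

```shell
# Sketch of an AWS CLI v2 SSO profile. After `aws sso login --profile dev`,
# the CLI caches only short-lived STS credentials for this role; there is
# no aws_access_key_id sitting in a file for a stealer to grab.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[profile dev]
sso_session = corp
sso_account_id = 111111111111
sso_role_name = DeveloperAccess
region = us-east-1

[sso-session corp]
sso_start_url = https://example.awsapps.com/start
sso_region = us-east-1
sso_registration_scopes = sso:account:access
EOF
n=$(grep -c '^sso_' "$cfg")
echo "$n sso settings, zero long-lived keys"
```

Compare this to the classic ~/.aws/credentials file: same CLI ergonomics, but nothing in it is worth stealing after the session expires.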
None of this fully prevents exfiltration. It dramatically shortens the window during which a stolen credential is useful, which changes the economics of the attack.
Hardware keys and enclaves
The next lever is keeping the credential material outside the filesystem. Hardware security keys (YubiKey, Titan) protect SSH and Git signing operations by keeping the private key on the device itself. Platform enclaves (the Secure Enclave on macOS, the TPM on Windows, the Titan security chip on Chromebooks) serve the same purpose for other keys. 1Password, Bitwarden, and similar password managers can store SSH keys and serve them to agents without ever writing them to disk.
The goal is that a stealer reading ~/.ssh/id_ed25519 finds either nothing or a reference to a key that cannot be used without the hardware device. This is a meaningful upgrade in threat resistance and, once configured, is invisible in day-to-day work.
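For SSH this is a one-time setup. The commands below are illustrative, not meant to run unattended: they require OpenSSH 8.2 or newer and a FIDO2 security key plugged in, and the file names are the OpenSSH defaults:

```shell
# Generate a key pair whose private half lives on the security key.
# Requires a physical device present; each signature later needs a tap.
ssh-keygen -t ed25519-sk -C "laptop-security-key"

# What lands in ~/.ssh/id_ed25519_sk is a key *handle*, not the key.
# A stealer that reads it gets nothing usable without the hardware.
```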
The dependency install problem
Short-lived credentials and hardware keys do not fix the npm install problem. The mitigation here is different. Developer workstations should run builds and dependency installs in environments that do not have access to the engineer's personal secrets. Concretely: containers, devcontainers, remote development environments (Codespaces, Gitpod, JetBrains Space), or properly isolated user accounts. When npm install runs inside a container that has no access to ~/.aws, the worst case of a malicious package is dramatically bounded.
This is a workflow change, and it has friction. It is also the change that has reduced supply chain exposure most visibly in the organizations I have seen adopt it.
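The isolation principle can be sketched in a few lines. The container form (image tag and paths are illustrative) mounts only the project directory; the runnable part below demonstrates the same idea without Docker by handing the install step a scrubbed environment and an empty, throwaway HOME:

```shell
# Container form -- install-time scripts see /src and nothing else:
#
#   docker run --rm -v "$PWD":/src -w /src node:20-slim npm install
#
# Lighter-weight sketch of the same principle: run the step with a
# scrubbed environment and a throwaway HOME, so there is no ~/.aws,
# ~/.ssh, or inherited secret-bearing variable to read.
sandbox=$(mktemp -d)
secrets=$(env -i HOME="$sandbox" PATH="$PATH" sh -c 'ls -A "$HOME" | wc -l')
echo "files visible to install scripts: $secrets"
rm -rf "$sandbox"
```

The env -i variant is not a real sandbox (the process can still walk the filesystem by absolute path), which is why containers or remote environments are the durable answer; it only illustrates what "no access to the engineer's personal secrets" buys you.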
Detection for the hard cases
You cannot prevent everything. When prevention fails, detection is what limits damage. The signals worth monitoring:
- Unusual use of developer credentials from new IPs or unusual times. Cloud providers expose this as part of their audit logs; feed it into your SIEM.
- Git activity that deviates from developer norms, such as cloning large numbers of repositories in a short window, pushing branches with unusual patterns, or creating access tokens programmatically.
- CI pipeline activity from developer credentials, especially outside business hours or from residential IPs.
- Secret scanning on everything: repositories, public paste sites, public npm and PyPI packages. When a credential leaks, the window between leak and exploitation is often only hours.
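The first signal reduces to a join between principals and source addresses. A toy sketch, with an invented log format of "principal,source_ip", that alerts when a known principal shows up from an address not seen before:

```shell
# Minimal "credential used from a new IP" detector. Real pipelines read
# CloudTrail or equivalent audit logs and feed a SIEM; the awk program
# below just demonstrates the shape of the rule.
log=$(mktemp)
cat > "$log" <<'EOF'
alice,203.0.113.7
alice,203.0.113.7
bob,198.51.100.2
bob,192.0.2.99
EOF
alerts=$(awk -F, 'seen[$1] && !(($1,$2) in pair) { print "alert: " $1 " from new ip " $2 }
                  { seen[$1] = 1; pair[$1,$2] = 1 }' "$log")
echo "$alerts"
rm -f "$log"
```

Production versions add allowlists for VPN egress ranges and a time dimension, but the core logic is exactly this small.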
Detection is imperfect, but the difference between detecting a credential theft in an hour versus a week is typically the difference between a contained incident and a breach.
The cultural piece
The technical mitigations work when they are paired with developer buy-in. That requires acknowledging the friction cost honestly. Short-lived credentials do mean sessions that expire at inconvenient moments and another round of aws sso login mid-task. Hardware keys do mean tapping a YubiKey several times a day. Remote development environments do mean a slower inner loop for certain workflows. These are real costs, and a security program that pretends otherwise will be routed around.
The programs that win are the ones that measure the friction, reduce it where possible, and show the tradeoff honestly. Developers are generally willing to accept well-scoped friction when the reasoning is concrete and the alternative is being the engineer whose laptop caused the next major incident.
How Safeguard Helps
Safeguard gives security teams visibility into where developer credentials are exposed across repositories, pipelines, and package ecosystems, and correlates those exposures against known supply chain threats targeting developer machines. It tracks dependency risk in the repos engineers pull, flags malicious or compromised packages that put install-time secrets at risk, and maps the blast radius of a compromised laptop against the actual systems those credentials can reach. That turns developer machine risk from an abstract anxiety into a managed part of the software supply chain posture.