The call that keeps me up at night is the one where a developer says "I installed that package this morning and I just saw the advisory." In the next thirty minutes, that workstation goes from a productive developer machine to a potential pivot point into source code, cloud credentials, signing keys, and production. This post walks through the forensic sequence my team runs on developer workstations when we suspect a malicious package executed on them.
Decide Fast: Live Triage or Clean Shutdown
The first decision is whether to capture live state or shut the machine down. Shutting down loses memory-resident artifacts — running processes, open sockets, unencrypted key material loaded in agents — but it stops further damage. Capturing live state preserves evidence but extends the exposure window.
My default is live triage. Most supply chain payloads are lightweight information stealers that have already exfiltrated what they wanted by the time you get the call. Keeping the machine running for fifteen more minutes while you capture state rarely makes things worse and usually makes the investigation tractable.
Tell the developer in plain English what to do: do not type passwords, do not touch the keyboard, hand me the laptop. Then start the triage.
Live Triage: The First Fifteen Minutes
The goal of live triage is to answer three questions with evidence you can defend later: what ran, what did it talk to, and what did it touch. I use a USB stick with a scripted toolkit so that the commands are identical every time.
On macOS, the triage script starts with:
/ir-toolkit/bin/sudo-run << 'EOF'
EVD="/ir-evidence/$(hostname)-$(date +%s)"
mkdir -p "$EVD" && cd "$EVD"
ps -axwwo pid,ppid,user,etime,command > processes.txt
lsof -nP -i > network.txt
# Capture the tracer's PID; job-control specs like %1 are not
# available in a non-interactive script
fs_usage -w -f pathname > fs_usage.log &
FS_PID=$!
sleep 60
kill "$FS_PID"
log show --last 1h --style compact > unified.log
# Shell history lives in the console user's home, not root's
DEV_HOME=$(dscl . -read "/Users/$(stat -f %Su /dev/console)" NFSHomeDirectory | awk '{print $2}')
cp "$DEV_HOME/.npmrc" . 2>/dev/null || true
cp "$DEV_HOME/.bash_history" . 2>/dev/null || true
cp "$DEV_HOME/.zsh_history" . 2>/dev/null || true
EOF
On Linux, substitute ss -tuanp for lsof -i and journalctl --since '1 hour ago' for the unified log query. On Windows, use Sysinternals' handle.exe, tcpvcon.exe, and Get-WinEvent through a PowerShell script with the same structure.
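A minimal sketch of the Linux variant, keeping the same evidence layout as the macOS script:
sudo bash << 'EOF'
EVD="/ir-evidence/$(hostname)-$(date +%s)"
mkdir -p "$EVD" && cd "$EVD"
ps -ewwo pid,ppid,user,etime,cmd > processes.txt
ss -tuanp > network.txt
journalctl --since '1 hour ago' --no-pager > journal.log
EOF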
While the fs_usage trace is running, check for active network connections to unknown destinations. A developer laptop talking to an IP in a cloud region they have no reason to use is a strong signal. Record every remote IP, port, and process that holds the socket.
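A quick way to pull that summary out of the capture, assuming the network.txt written by the triage script:
# Established connections only: owning process and local->remote pair
# (field positions assume command names without embedded spaces)
awk '/ESTABLISHED/ {print $1, $9}' network.txt | sort -u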
Memory Capture
Once the quick triage is done, take a memory image. On macOS with SIP this is painful; I use OSXPmem when possible or fall back to capturing swap and hibernation files. On Linux, LiME is reliable:
sudo insmod /ir-toolkit/lime.ko \
    "path=/ir-evidence/$(hostname)/memory.lime format=lime"
sha256sum /ir-evidence/$(hostname)/memory.lime > memory.lime.sha256
On Windows, winpmem.exe or Belkasoft's RAM Capturer produce usable images. Hash everything immediately after capture.
Memory images are huge and you will not analyze them in the live triage window. Get them into evidence storage and move on.
Disk Imaging
With memory captured, decide whether to image the disk. For a confirmed malicious package execution on a machine with access to production credentials, yes, image the disk. For a speculative investigation where the developer tab-completed the wrong package name but never ran install, maybe not. Disk imaging is expensive in time and storage; make the call deliberately.
For FileVault or BitLocker encrypted disks, capture the disk while the machine is still unlocked. Once it shuts down, you are imaging ciphertext and will need the recovery key to reconstruct a decrypted view, which is slower and harder to defend forensically. dd through a USB enclosure works for most cases:
# GNU dd flags shown; BSD dd on macOS differs slightly (bs=4m, and older
# releases may lack status=progress). conv=sync,noerror zero-pads read
# errors instead of aborting. Note that on APFS with FileVault, the raw
# /dev/disk0 yields encrypted blocks; the decrypted logical view is the
# synthesized device (check diskutil list).
sudo dd if=/dev/disk0 of=/mnt/evidence/disk0.dd bs=4M status=progress conv=sync,noerror
sha256sum /mnt/evidence/disk0.dd | tee disk0.dd.sha256
For corporate MDM-managed machines, your MDM vendor may offer a supervised triage mode that streamlines this; check before you start.
Artifact Analysis
With evidence captured, the analysis starts. I work through a checklist in priority order.
First, the npm/pip/gem cache. The cached artifact that the developer ran is the payload you need to analyze. Compare its hash against the known-bad hash from the advisory. If they match, you have confirmed execution. If they do not match, you either have the wrong version or a republished artifact — both matter.
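A sketch of that hash comparison, with a placeholder package name and digest; cache paths assume the defaults for each tool:
ADVISORY_SHA="<known-bad-sha256>"
# npm: content-addressed blobs live under _cacache; find index entries
# naming the package, then hash the cached content
grep -rl "evil-package" ~/.npm/_cacache/index-v5/
find ~/.npm/_cacache/content-v2 -type f -exec shasum -a 256 {} + | grep "$ADVISORY_SHA"
# pip 20.1+ lists its cached wheels directly
pip cache list evil-package
# gem keeps fetched .gem archives under the active gem directory
shasum -a 256 "$(gem env gemdir)"/cache/evil-package-*.gem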
Second, the shell history. Every command the developer ran since the package was installed is a candidate for further investigation. I look for git push, kubectl, aws, terraform apply, and any command that touched secrets. Each of those is a potential pivot.
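A quick pass over the copied history files (names as written by the triage script above; the pattern list is a starting point, not exhaustive):
grep -nE 'git push|kubectl |aws |terraform (apply|destroy)|gcloud |vault ' \
    .bash_history .zsh_history 2>/dev/null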
Third, the credential files. ~/.npmrc, ~/.aws/credentials, ~/.config/gcloud/, ~/.ssh/, ~/.docker/config.json, ~/.kube/config. Record the modification times. If the file was modified after the install, treat the credentials as compromised. If the file was only read, treat them as exfiltrated. Either way, rotate.
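A one-pass sketch of that timestamp capture, using BSD stat as shipped on macOS (GNU stat takes -c with different format codes):
for f in ~/.npmrc ~/.aws/credentials ~/.config/gcloud/* \
         ~/.ssh/* ~/.docker/config.json ~/.kube/config; do
    stat -f 'mod=%Sm acc=%Sa %N' "$f" 2>/dev/null
done > credential_times.txt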
Fourth, the browser state. Session cookies for your source code host, cloud consoles, and IdP are often the real target of workstation compromise. Copy the browser's cookie database (Chrome keeps the live file locked) and check for active sessions:
# Chrome cookies on macOS -- work from a copy, not the locked live DB
cp ~/Library/Application\ Support/Google/Chrome/Default/Cookies cookies-copy.db
sqlite3 cookies-copy.db \
  "SELECT host_key, name, datetime(expires_utc/1000000-11644473600, 'unixepoch') \
   FROM cookies WHERE host_key LIKE '%github.com%' OR host_key LIKE '%okta.com%';"
Terminate active sessions for the developer's accounts on everything that matters. Session invalidation is the single most important action after credential rotation.
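As one concrete example, Okta exposes a bulk session-revocation endpoint; the org URL, API token, and user ID here are placeholders:
# Revokes every active Okta session for the user
curl -X DELETE \
    -H "Authorization: SSWS ${OKTA_API_TOKEN}" \
    "https://your-org.okta.com/api/v1/users/${USER_ID}/sessions"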
Deciding the Fate of the Workstation
After the forensic capture, you have a choice: reimage or keep. Reimage is the default for confirmed malicious execution. Keep with cleanup is acceptable only for low-confidence cases and only with enhanced monitoring on the machine for at least thirty days.
If you reimage, do it from known-good media, not from a cached OS image that the developer built locally. The whole point is to get back to a provable baseline.
Communicating with the Developer
The developer is a witness, not a suspect, in almost every case I have worked. Treat them accordingly. Tell them what happened, what you are doing with their machine, and what they need to do next (usually: rotate personal passwords that were used in the browser, watch for phishing attempts in the next few weeks). Keep them in the loop on findings. The next time this happens they will tell you faster if they know they were treated well the first time.
How Safeguard Helps
Safeguard's endpoint integration flags a developer workstation the moment it pulls a package known to the platform as malicious, which often gives you a fifteen-minute head start over waiting for public advisory propagation. When an investigation begins, Safeguard tells you exactly which packages a workstation has resolved across every ecosystem, which versions are still in the local caches, and which credential scopes that workstation has touched through connected cloud and SCM integrations. That context turns workstation forensics from a blank-page exercise into a guided sequence where the highest-value evidence is identified before the IR team even logs into the laptop.