Cloud Security

GuardDuty Extended Threat Detection: What Defenders Actually Get

GuardDuty's extended threat detection correlates findings across signals into attack sequences. We dig into where it helps, where it misses, and how to wire it into supply chain incident response.

Yukti Singhal
Security Engineer
7 min read

For most of GuardDuty's history, the service emitted point findings — a Recon:IAMUser/MaliciousIPCaller, an UnauthorizedAccess:EC2/SSHBruteForce, a Backdoor:EC2/C&CActivity.B!DNS — and left it to the SOC to stitch them together. That model worked when the average breach was a single noisy intrusion. It stopped working when the average breach became a patient credential-harvesting campaign that spent weeks moving laterally through cross-account roles, exfiltrating piecemeal through legitimate-looking S3 reads, and never tripping any individual finding hard enough to wake an on-call engineer. GuardDuty's extended threat detection, which expanded substantially through 2025 and matured into 2026, is the response. It correlates findings across resource types and time windows into named "attack sequences" with their own severity, lifecycle, and remediation context. For defenders, this is the largest change to AWS-native detection in years. It is also a feature that rewards careful adoption and punishes a check-the-box rollout.

What is an attack sequence and how does it differ from a finding?

A traditional GuardDuty finding is a single anomalous event: this IAM principal called ListBuckets from a Tor exit node, that EC2 instance resolved a known command-and-control domain. An attack sequence is a higher-order object that groups multiple findings into a narrative: this principal authenticated from an unusual location, then created a new access key, then used that key to assume a role in another account, then read several sensitive S3 prefixes, then wrote to a new bucket. The sequence has its own ID, severity score, and timeline view in the GuardDuty console. Crucially, the severity is calibrated against the sequence shape, not just the worst individual finding — a low-severity recon plus a low-severity unusual API call plus a low-severity unusual data access can compose into a critical sequence that an alert rule keyed only on per-finding severity would have missed entirely.
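To make that last point concrete, here is a toy illustration of the alerting gap. The JSON is a deliberately simplified stand-in, not GuardDuty's real sequence schema, and jq is assumed to be available: three constituent findings that each sit below a per-finding paging threshold compose into a near-critical sequence.

```shell
# Illustrative only: a simplified sequence summary, NOT GuardDuty's real schema.
# Each constituent finding sits below a per-finding paging threshold of 7.0.
cat > sequence.json <<'EOF'
{
  "sequence_severity": 8.8,
  "findings": [
    {"label": "recon-from-unusual-location", "severity": 2.0},
    {"label": "unusual-api-discovery",       "severity": 3.5},
    {"label": "unusual-s3-read",             "severity": 3.0}
  ]
}
EOF

# An alert rule keyed on per-finding severity sees only the worst constituent:
jq '[.findings[].severity] | max' sequence.json    # prints 3.5, below threshold

# A sequence-aware rule sees the composed severity:
jq '.sequence_severity' sequence.json              # prints 8.8
```

The numbers are invented, but the shape is the point: no single finding clears the bar, while the sequence does.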

Which sequences are most relevant to supply chain defenders?

Three sequence shapes deserve special attention. The first is the credential-compromise-to-data-access sequence: a principal token used from a new IP, followed by unusual API discovery, followed by reads against artifact stores like ECR or S3 buckets containing build outputs. This is the shape of a stolen build credential being used to study an environment before extraction. The second is the cross-account lateral movement sequence: assume-role calls that traverse a chain of accounts an attacker has identified through GetCallerIdentity and ListAccounts, eventually landing in an account with production privileges. This is the shape of an attacker using AWS Organizations structure against you. The third is the developer-credential abuse sequence: a long-lived access key associated with a CI/CD identity used from a residential IP, followed by writes to deployment buckets or pushes to artifact repositories. This is the shape of an exfiltrated GitHub secret being used for supply chain injection.
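For the third shape, it helps to know your exposure before a sequence ever fires. A minimal inventory sketch, assuming GNU date, jq, and a hypothetical CI user named ci-deploy:

```shell
# Hypothetical CI user name; adjust to your environment.
# GNU date syntax; on BSD/macOS use: date -u -v-90d +%Y-%m-%dT%H:%M:%SZ
cutoff=$(date -u -d '90 days ago' +%Y-%m-%dT%H:%M:%SZ)

# List access keys on the CI identity older than the cutoff. These long-lived
# keys are the raw material of the developer-credential abuse sequence if they
# ever leak, so rotate or replace them with short-lived role credentials.
aws iam list-access-keys --user-name ci-deploy --output json \
  | jq --arg cutoff "$cutoff" \
      '.AccessKeyMetadata[] | select(.CreateDate < $cutoff) | {AccessKeyId, CreateDate}'
```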

What does GuardDuty extended detection not see?

It does not see what does not generate VPC flow logs, DNS logs, CloudTrail events, or EKS audit logs. Activity inside a Lambda function — what code paths ran, what packages were imported at runtime, what files were touched on the ephemeral filesystem — is opaque to GuardDuty unless that activity makes an API call or a network connection that crosses a logged boundary. Activity inside a container before it reaches the network is similarly opaque unless GuardDuty's runtime monitoring agent is deployed. Build pipelines that run on AWS-managed infrastructure (CodeBuild, AWS-hosted Actions runners) generate CloudTrail events that GuardDuty can reason over; pipelines that run on self-hosted runners on EC2 require runtime monitoring or external instrumentation. This is the gap defenders fill with build-provenance tooling, container scanning, and SBOM-derived telemetry. GuardDuty tells you the attack is happening; the supply chain layer tells you what it touched.

# Check whether this account's detector has runtime monitoring enabled
aws guardduty list-detectors --query 'DetectorIds[0]' --output text | \
  xargs -I {} aws guardduty get-detector --detector-id {} \
    --query 'Features[?Name==`RUNTIME_MONITORING`]'

How should incident response wire into sequences?

The mistake is to treat a sequence the way you treated individual findings: as a ticket to investigate. Sequences encode a hypothesis about adversary intent across hours or days. Responding to one finding in the sequence — for example, rotating the implicated credential — without considering the rest of the chain leaves the adversary with the artifacts they collected in the earlier steps. The response playbook for a critical sequence should be: contain the principal across all involved accounts (not just where the latest finding fired), enumerate every resource the sequence touched (the GuardDuty console surfaces this), audit the resources for tampering rather than just deletion, and only then begin the credential-rotation workflow. For supply chain incidents in particular, treat any sequence that touched ECR, CodeArtifact, S3 deployment buckets, or Lambda code as an artifact-integrity event: verify signatures, rebuild from known-good source, and assume any artifact pushed during the sequence window is potentially tampered until proven otherwise.
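The containment step can be sketched for a single implicated role. The role name here is hypothetical, and the same deny must be applied in every account the sequence touched, not only the one where the latest finding fired:

```shell
# Hypothetical role name implicated by the sequence.
role=ci-artifact-publisher

# Attach an explicit deny-all inline policy. An explicit Deny overrides every
# Allow the role otherwise carries, cutting off the permissions of any active
# session without deleting the role (deletion would destroy forensic context).
aws iam put-role-policy \
  --role-name "$role" \
  --policy-name incident-contain-deny-all \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Deny", "Action": "*", "Resource": "*"}]
  }'
```

Removing the inline policy after eviction restores the role unchanged, which keeps the containment reversible and auditable.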

What about cost and signal-to-noise?

Extended threat detection costs more than baseline GuardDuty because the correlation engine consumes additional event volume and runs heavier analytics. The cost is meaningful but not catastrophic — typically a percentage uplift on top of existing GuardDuty spend, not a multiplier. The bigger lever is what you do with the output. A sequence-aware SOC fires far fewer pages because the noisy mid-confidence findings that previously each generated a ticket now compose into a single high-confidence sequence event. Conversely, organizations that route every individual finding within a sequence to the queue alongside the sequence itself produce duplicate work and burn analyst time. Configure your SIEM ingestion so that sequence events take precedence and the constituent findings are linked but suppressed from the alert path. EventBridge rules that match on the finding type in the event detail make this routing concise.
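Assuming the sequence finding types share the AttackSequence: prefix, as the types documented at launch do, the precedence routing can be sketched with two EventBridge rules. The rule names are placeholders; verify the prefix against the events your environment actually emits:

```shell
# Route only sequence-level events to the paging path.
aws events put-rule \
  --name guardduty-sequences-to-pager \
  --event-pattern '{
    "source": ["aws.guardduty"],
    "detail-type": ["GuardDuty Finding"],
    "detail": {"type": [{"prefix": "AttackSequence:"}]}
  }'

# Constituent findings (everything else) go to a non-paging archive rule,
# where they stay linked to the sequence but out of the alert path.
aws events put-rule \
  --name guardduty-findings-to-archive \
  --event-pattern '{
    "source": ["aws.guardduty"],
    "detail-type": ["GuardDuty Finding"],
    "detail": {"type": [{"anything-but": {"prefix": "AttackSequence:"}}]}
  }'
```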

What should defenders do in the first thirty days?

Five steps. Enable extended detection in all accounts under the same Organizations management account where you run GuardDuty, not just production. The credential compromise that becomes a production incident often started in a sandbox account where someone left an admin key in a public repo. Deploy GuardDuty runtime monitoring on every cluster and Lambda environment that holds artifact build or deployment workloads; without runtime telemetry, the sequence engine cannot see post-execution behavior. Tune severity thresholds against your environment by running a thirty-day baseline before connecting sequences to paging; the first weeks will reveal sequences your normal operations generate. Wire sequence events to a dedicated EventBridge bus and from there into your incident response platform with the full finding chain attached. Finally, rehearse: pick a recent sequence, walk through the timeline as if it were real, and verify that your runbook produces the right containment, eviction, and artifact-integrity response. The first time you do this for real is not when you want to find the gaps.
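The first two steps can be sketched from the delegated administrator account. This is a starting point under stated assumptions (single-region detector lookup, feature names as documented), not a complete rollout:

```shell
# Step 1: auto-enable GuardDuty for all current and future member accounts.
detector_id=$(aws guardduty list-detectors --query 'DetectorIds[0]' --output text)
aws guardduty update-organization-configuration \
  --detector-id "$detector_id" \
  --auto-enable-organization-members ALL

# Step 2: turn on runtime monitoring so the sequence engine can see
# post-execution behavior on clusters and Lambda environments.
aws guardduty update-detector \
  --detector-id "$detector_id" \
  --features '[{"Name": "RUNTIME_MONITORING", "Status": "ENABLED"}]'
```

Repeat per region: GuardDuty detectors are regional, so the lookup and both calls must run in every region you operate in.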

How Safeguard Helps

Safeguard ingests GuardDuty extended detection events alongside SBOM, build provenance, and artifact signature telemetry, so a sequence that touches your ECR repositories or CodeArtifact domains is automatically correlated with the affected artifacts, the downstream services that consume them, and the customers exposed to those services. Policy gates block deployments of artifacts that were pushed during an active sequence window, preventing tampered images from reaching production while incident response is in flight. Griffin AI translates sequence narratives into supply chain impact statements — "this attack sequence accessed the build outputs for three projects, two of which are in production, with the most recent affected deployment two hours ago" — so engineering leadership has the context to decide on customer notifications and rollback windows. TPRM workflows extend the same analysis to dependencies your suppliers ship, flagging vendor incidents that may have similar attack-sequence shapes against your shared cloud footprint.
