Incident Analysis

Blue Yonder Termite Ransomware: SaaS Supply-Chain Outage in Retail

In November 2024 the Termite ransomware group hit Blue Yonder, taking workforce-management and logistics SaaS offline for Starbucks, Sainsbury's, and Morrisons. We unpack the SaaS supply-chain blast radius.

Alex
Security Engineer
7 min read

On November 21, 2024, Blue Yonder, the Arizona-based supply-chain SaaS provider whose workforce-management, inventory, and logistics platforms serve Starbucks, Sainsbury's, Morrisons, DHL, Bayer, Intel, Renault, and roughly 3,000 other enterprise customers, disclosed that its managed-services hosted environment had been disrupted by a ransomware attack. The Termite ransomware group, a Babuk-derived brand that surfaced in mid-2024, subsequently claimed responsibility and posted approximately 680 GB of data on its leak site.

The downstream impact was immediate and visible. Starbucks store managers in North America spent two weeks manually calculating barista wages and scheduling shifts; Sainsbury's and Morrisons in the UK activated contingency plans to keep shelves stocked while inventory-management feeds degraded. Blue Yonder's recovery extended into mid-December 2024 for many customers, and full restoration of the last affected managed-service tenancies stretched into early 2025.

That cascade across multiple top-tier customers within a single news cycle made this one of the most consequential SaaS-supply-chain outages of 2024 and a defining event for SaaS-dependency risk management in retail and logistics, prompting public statements from several CIOs about restructuring vendor portfolios. The lesson for every retailer whose operational backbone is a single SaaS dependency is twofold: even enterprises with sophisticated cyber-resilience programmes can be brought to a partial standstill by a vendor outage outside their direct control, and recovery timelines are governed by the vendor's incident-response posture, not the customer's.

Who is Termite and how did they get in?

Termite emerged in mid-2024 as a financially motivated ransomware-as-a-service brand using a Babuk-derived encryptor and a Tor-hosted leak site. The group has claimed approximately a dozen victims since launch, with Blue Yonder the largest publicly disclosed. Blue Yonder has not publicly disclosed the initial-access vector, citing an active investigation with Microsoft, Mandiant, and external counsel. Public reporting from Recorded Future, BleepingComputer, and CyberScoop indicates that Termite affiliates, like Akira and Black Basta affiliates, typically buy initial access from brokers selling stolen credentials for Citrix, Cisco ASA, and Fortinet appliances, or rely on infostealer-harvested SaaS credentials. The pattern fits Blue Yonder's environment: a large managed-services hosting footprint with thousands of customer-facing tenancies and a substantial perimeter access surface.

What did the attackers actually access?

The Termite leak-site listing claimed 680 GB of exfiltrated data, including approximately 16,000 customer-contact email lists, 200,000 insurance and contract documents, and substantial internal communications and operational data. Blue Yonder publicly stated that its managed-services hosted environment was affected and that some customer files were exfiltrated, while its Azure public-cloud environment was not directly affected. Downstream operational impact ran deepest where Blue Yonder hosted workforce-management and inventory data on the managed-services tier: Starbucks workforce management for North American stores went offline, Sainsbury's and Morrisons inventory feeds degraded enough to require manual stocking, and several smaller customers reported logistics-planning interruptions through early December.

How long were they inside?

Blue Yonder has not publicly disclosed dwell time. The encryption event was detected on November 21, 2024, and the company moved immediately into incident-response mode. Babuk-derived affiliates typically dwell for one to three weeks before encrypting, with the bulk of the work (credential harvesting, AD enumeration, exfiltration staging) done in the first 72 hours after initial access, followed by a deliberate pause to time encryption against a peak business period. November 21 sits four days before the US Thanksgiving holiday and at the front edge of the Black Friday and Cyber Monday retail surge, consistent with affiliate targeting designed to maximise pressure to pay: every additional hour of operational disruption during peak retail trading increases a victim's willingness to negotiate, and Termite's affiliates clearly understood their target's operational calendar.

What did existing controls miss?

Three failures define the cascade. First, the customer-tenancy isolation in Blue Yonder's managed-services hosted environment was insufficient to prevent one foothold from forcing the platform-wide outage that disrupted Starbucks and Sainsbury's simultaneously. Second, Blue Yonder's recovery posture relied on a serial restoration of customer environments rather than parallel recovery, which extended the downstream outage by weeks. Third, the customer-side preparedness for a multi-week SaaS-dependency outage was almost universally absent: Starbucks did not have a documented manual workforce-management procedure, Sainsbury's inventory-feed contingency was thin, and other Blue Yonder customers learned during the incident that their entire weekly planning cycle relied on a vendor with no agreed RTO for a ransomware scenario.
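The baseline below distils those three failures into a customer-side policy checklist. The YAML schema and key names are illustrative rather than any published standard; treat the thresholds (72-hour notification, 30-day export, 14-day outage horizon) as starting points to tune against your own risk appetite.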

# Customer-side SaaS dependency resilience baseline
saas_dependency_baseline:
  inventory:
    every_saas_categorised_tier_zero_one_two: required
    tier_zero_max_rto_days_documented: required
    tier_zero_max_rpo_hours_documented: required
  contractual:
    breach_notification_hours: 72
    forensic_report_sharing_after_resolution: required
    customer_tenancy_isolation_attestation: required
    data_portability_export_on_request_days: 30
  operational_continuity:
    manual_fallback_procedure_documented: required
    quarterly_saas_outage_tabletop: required
    14_day_outage_cash_flow_modelled: required
  monitoring:
    saas_provider_health_status_in_siem: required
    customer_side_egress_dlp_active: required
  data_minimisation:
    saas_holds_only_attributes_needed: enforced
    annual_review_of_shared_fields: required
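
One of the cheapest controls in the baseline is the saas_provider_health_status_in_siem line: most major SaaS providers publish a public status page, and polling it into the SIEM turns a vendor incident into an alert within minutes instead of a morning-after discovery. Here is a minimal sketch in Python, assuming a Statuspage-style JSON endpoint; the URL, dependency names, and send_to_siem forwarder are placeholders to replace with your own, not real Blue Yonder APIs.

# Poll tier-zero SaaS status pages and forward state changes to the SIEM.
# Assumes a Statuspage-style JSON API; all endpoints here are hypothetical.
import json
import time
import urllib.request

TIER_ZERO_STATUS_PAGES = {
    # dependency name -> hypothetical public status endpoint
    "workforce-management": "https://status.example-saas.com/api/v2/status.json",
    "inventory-feeds": "https://status.example-saas.com/api/v2/status.json",
}

POLL_INTERVAL_SECONDS = 300  # five-minute cadence

def fetch_indicator(url: str) -> str:
    """Return the provider's overall indicator ('none', 'minor', 'major',
    'critical' under Statuspage conventions), or 'unknown' if absent."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        body = json.load(resp)
    return body.get("status", {}).get("indicator", "unknown")

def send_to_siem(event: dict) -> None:
    """Placeholder: ship the event to your SIEM (syslog, HTTP collector, etc.)."""
    print(json.dumps(event))

def main() -> None:
    last_seen: dict[str, str] = {}
    while True:
        for name, url in TIER_ZERO_STATUS_PAGES.items():
            try:
                indicator = fetch_indicator(url)
            except Exception as exc:
                # An unreachable status page is itself worth alerting on.
                indicator = f"unreachable: {exc}"
            if indicator != last_seen.get(name):
                send_to_siem({
                    "source": "saas-status-poller",
                    "dependency": name,
                    "tier": 0,
                    "indicator": indicator,
                    "previous": last_seen.get(name),
                    "observed_at": time.strftime(
                        "%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
                })
                last_seen[name] = indicator
        time.sleep(POLL_INTERVAL_SECONDS)

if __name__ == "__main__":
    main()

Pair the poller with customer-side egress DLP, the baseline's other monitoring requirement, so that you see both the vendor's declared state and any anomalous data movement across the integration.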

What should retail and logistics defenders do now?

Six steps. First, categorise every operational SaaS dependency as tier zero, tier one, or tier two, with documented RTO and RPO for each; workforce management, inventory, claims, payroll, and customer-data platforms typically belong in tier zero. Second, require contractual breach-notification, customer-tenancy isolation, and data-portability commitments from every tier-zero vendor, and verify the attestations during procurement. Third, build and exercise a manual fallback procedure for every tier-zero SaaS (paper timesheets, manual inventory counts, paper claims), with quarterly drills that score time-to-fallback. Fourth, cash-flow-model a 14-day outage of each tier-zero SaaS to understand the financial exposure and align cyber-insurance coverage accordingly; a minimal model is sketched below. Fifth, press tier-zero vendors for demonstrable customer-tenancy isolation: a Termite-class incident at Blue Yonder should not force every customer into the same recovery window. Sixth, share TTPs and IOCs through the National Retail Federation cyber working group and the Retail and Hospitality ISAC so that successor brands replacing Termite are recognised faster.
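Step four sounds like finance work, but it is mechanical and worth automating so the model can be re-run per vendor every quarter. A minimal sketch follows; every figure is an illustrative placeholder to replace with your own finance team's estimates, not Blue Yonder or Starbucks data.

# Cash-flow model of a 14-day outage of one tier-zero SaaS dependency.
# All figures below are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class OutageAssumptions:
    daily_revenue_at_risk: float       # revenue the dependent process touches
    revenue_loss_fraction: float       # share of that revenue lost per down day
    manual_fallback_daily_cost: float  # overtime, paper processes, rework
    fallback_ready_day: int            # first day the manual procedure operates
    spoilage_per_day: float            # perishables lost while feeds are down

def outage_cost(a: OutageAssumptions, days: int = 14) -> float:
    """Total exposure over `days` days of outage.

    Assumption: once the manual fallback is running, revenue loss halves
    but fallback labour costs begin. Calibrate both against your drills."""
    total = 0.0
    for day in range(1, days + 1):
        fallback_running = day >= a.fallback_ready_day
        loss_fraction = a.revenue_loss_fraction * (0.5 if fallback_running else 1.0)
        total += a.daily_revenue_at_risk * loss_fraction
        total += a.spoilage_per_day
        if fallback_running:
            total += a.manual_fallback_daily_cost
    return total

# Hypothetical workforce-management dependency for a 500-store region.
wfm = OutageAssumptions(
    daily_revenue_at_risk=2_000_000.0,
    revenue_loss_fraction=0.05,           # scheduling chaos, not a full stop
    manual_fallback_daily_cost=40_000.0,  # manager overtime on paper timesheets
    fallback_ready_day=3,
    spoilage_per_day=0.0,
)
print(f"14-day exposure: ${outage_cost(wfm):,.0f}")  # -> 14-day exposure: $1,280,000

Running the same model across every tier-zero vendor usually reshuffles the tiering and gives the cyber-insurance conversation a concrete number to anchor on.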

How does this compare to other 2024-2025 SaaS-dependency outages?

Blue Yonder sits in a cluster of consequential SaaS supplier incidents. CDK Global in June 2024 took North American auto-dealership operations offline for two weeks across roughly 15,000 dealerships; Change Healthcare in February 2024 disabled claims processing for the bulk of US healthcare for weeks; Snowflake customer compromises through 2024 swept up AT&T, Ticketmaster, Santander Bank, and others. Each shared the same systemic feature: a small number of SaaS providers had become operational backbones for entire industry segments, and a single vendor compromise produced sector-wide impact. The Blue Yonder case prompted specific responses from major customers: Starbucks announced a workforce-management redundancy programme, and Sainsbury's expanded its inventory-management contingency to include offline tooling. The US Federal Trade Commission and the UK Competition and Markets Authority both raised concerns about supplier concentration in retail-tech SaaS through 2025, though neither has moved to formal regulatory action. Cyber-insurance underwriters have begun pricing SaaS-dependency concentration explicitly, requiring customers to evidence manual fallback procedures and per-vendor RTO commitments before binding coverage.

How Safeguard Helps

Safeguard inventories every SaaS supply-chain dependency in your environment and continuously scores each against tier-criticality requirements, contractual breach-notification SLAs, and recovery-time commitments. Griffin AI reachability analysis surfaces which tier-zero SaaS platforms lack documented customer-tenancy isolation, which integrations hold long-lived credentials, and which data flows exceed the data-minimisation baseline. TPRM workflows score Blue Yonder-class vendors against the CISA Secure by Design pledge and verify that attestations match contractual commitments. Policy gates block new tier-zero SaaS integrations that lack a documented 14-day-outage manual fallback. Finally, Safeguard ingests ransomware leak-site feeds from CISA, Recorded Future, and Mandiant, so a Termite-class incident at one customer surfaces a prioritised remediation queue for every retail and logistics peer within minutes rather than the two-week scramble that defined November 2024.
