Vulnerability Analysis

VMware ESXi CVE-2024-37085 Auth Bypass by Ransomware

CVE-2024-37085 abuses ESXi's AD domain join to grant admin via a specially named group. Exploitation by Akira and Black Basta, detection, and fix.

Shadab Khan
Security Engineer
8 min read

VMware ESXi CVE-2024-37085 is a CVSS 6.8 authentication bypass Broadcom patched on June 25, 2024, after Microsoft Threat Intelligence reported active exploitation by multiple ransomware groups. The flaw: an ESXi host joined to Active Directory automatically grants full administrative privileges to every member of a domain group named "ESX Admins," without verifying that the group was created by an administrator. Any attacker with the domain rights to create groups can create "ESX Admins," add themselves, and immediately hold root-equivalent access on every domain-joined ESXi host. Microsoft attributed exploitation to Storm-0506 (Black Basta), Storm-1175, Octo Tempest, and Manatee Tempest.

What is the default behavior that creates the vulnerability?

The default behavior creating the vulnerability is that ESXi versions 7.0 through 8.0 automatically map the "ESX Admins" AD group to the local Admin role when the host joins a domain, and ESXi does not validate whether the group exists or was created legitimately. The mapping is hardcoded behavior from an era when VMware's opinionated default was "make domain integration easy for customers." Any user with the AD right to create groups (which is significantly broader than domain admin in many environments) can create "ESX Admins," join it, and own every ESXi host in the domain within minutes.

The bug is not a memory corruption or parsing flaw. It is a logic flaw in the default privilege mapping combined with a delegation model that lets low-privilege AD users create groups. This makes it particularly durable: there is no gadget chain to mitigate, only configuration and policy.

Broadcom's patch (ESXi 8.0 Update 3) changed the default so that "ESX Admins" is no longer automatically granted the admin role; administrators must now opt in explicitly through the host's advanced settings. Customers must apply the patch to close the behavior. For hosts that cannot be patched, the workarounds are to point the admin-group setting at a non-default group, tighten the AD delegation model so arbitrary users cannot create groups, or remove the direct AD join entirely.
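The before-and-after logic can be sketched in a few lines. This is a minimal model, not ESXi's actual implementation (which lives in hostd); the function names are hypothetical, but the two advanced-setting keys are the real ESXi options.

```python
# Sketch of ESXi's AD privilege mapping, pre- and post-patch.
# Function names are hypothetical; the setting keys are real ESXi
# advanced options, but the lookup semantics here are simplified.

DEFAULT_ADMIN_GROUP = "ESX Admins"

def effective_role_prepatch(user_groups):
    # Vulnerable default: membership in the hardcoded group name is
    # sufficient -- no check of who created the group, or when.
    if DEFAULT_ADMIN_GROUP in user_groups:
        return "Admin"
    return "NoAccess"

def effective_role_patched(user_groups, settings):
    # Post-patch: auto-mapping is opt-in and the group name is
    # configurable, so a freshly created "ESX Admins" grants nothing
    # unless an administrator deliberately enabled the mapping.
    auto_add = settings.get(
        "Config.HostAgent.plugins.hostsvc.esxAdminsGroupAutoAdd", False)
    admin_group = settings.get(
        "Config.HostAgent.plugins.hostsvc.esxAdminsGroup", DEFAULT_ADMIN_GROUP)
    if auto_add and admin_group in user_groups:
        return "Admin"
    return "NoAccess"
```

The key design change is that an attacker-created group name no longer carries implicit meaning: the mapping only fires when an administrator has explicitly turned it on.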

How does the exploitation chain work in observed incidents?

The exploitation chain in observed incidents works by the attacker gaining any foothold in AD that can create security groups, creating "ESX Admins," adding themselves, and logging into ESXi hosts directly. The full chain as Microsoft documented it:

  1. Attacker compromises any domain user account, typically through phishing or a purchased credential from an infostealer log
  2. If the account does not already have group-creation rights, attacker escalates through a secondary technique (misconfigured OU delegation, nested group memberships, service account abuse)
  3. Attacker runs New-ADGroup -Name "ESX Admins" -GroupScope Global -GroupCategory Security
  4. Attacker adds their controlled account as a member
  5. Attacker authenticates to the ESXi host via SSH or the vSphere API with their domain credentials
  6. ESXi evaluates the login, sees the user is a member of "ESX Admins," grants full admin privileges
  7. Attacker powers off VMs, mounts their datastores, and encrypts VMDK files directly

Akira and Black Basta affiliates preferred direct VMDK encryption over in-guest encryption because it is faster (one target per datastore instead of one per VM) and bypasses EDR in the guests entirely. An encrypted datastore takes down every VM on it, which for consolidated vSphere clusters means the entire virtual infrastructure in minutes.

Why is this so effective against mature organizations?

This is effective against mature organizations because the prerequisite (creating an AD group) is a capability that many environments delegate widely without realizing it grants ESXi admin. The Active Directory permission model is notoriously complex, and shops that delegated "manage groups in the Corp OU" to help desk staff or automation accounts inadvertently created a path to ESXi root.

Additional factors amplifying impact:

  • ESXi hosts are often less closely monitored than Windows servers (sparser EDR coverage, less log detail)
  • Direct VMDK encryption bypasses guest-level EDR because the encryption happens at the hypervisor
  • Ransomware operators established ESXi encryption as a standard playbook, with tailored binaries (Linux-based encryptors that run on ESXi's restricted shell) in their toolkits
  • Recovery from VMDK encryption is slower than from guest-level encryption because every VM on a datastore must be restored simultaneously

Sophos and Mandiant both noted that dwell time in ESXi-compromise cases typically ran 7 to 14 days, which gave attackers time to locate, exfiltrate, and stage encryption without detection.

Which environments are most exposed?

Environments most exposed are VMware ESXi clusters joined to Active Directory for centralized authentication, running builds that predate the June 25, 2024 patch, and with broad AD group-creation delegation. Specific risk factors:

  • ESXi 8.0.x before 8.0 Update 3 (build 24022510)
  • ESXi 7.0.x, any build (Broadcom shipped no 7.0 patch, so only the configuration workarounds apply)
  • ESXi hosts where AD domain join is enabled for SSH/console access
  • AD domains where any non-tier-0 account can create groups in any OU

Environments that use vSphere SSO without joining ESXi hosts directly to AD are not exposed to this specific CVE because the "ESX Admins" lookup does not happen. Many mature organizations run in this mode and were unaffected.

Inventory steps:

  • Get-VMHost | Select Name, Version, Build in PowerCLI to find hosts below the patched builds
  • Audit ESXi host authentication configuration for direct AD join
  • Query AD for any existing "ESX Admins" group and verify its membership and creation date
  • Check AD group-creation delegation across all OUs
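The PowerCLI inventory output can be screened with a short script. A sketch, assuming the hosts have been exported as records with Name, Version, and Build fields; the DomainJoined flag is an assumption you would populate from your own audit of each host's authentication configuration, since Get-VMHost does not emit it directly.

```python
PATCHED_BUILD = 24022510  # ESXi 8.0 Update 3

def exposed_hosts(hosts):
    """Flag domain-joined hosts running a build below the patched one.

    `hosts` is a list of dicts shaped like the output of
    Get-VMHost | Select Name, Version, Build, plus a DomainJoined
    flag collected separately (hypothetical field, defaults to True
    so unknown hosts are flagged rather than silently skipped).
    """
    flagged = []
    for h in hosts:
        if int(h["Build"]) < PATCHED_BUILD and h.get("DomainJoined", True):
            flagged.append(h["Name"])
    return flagged
```

Defaulting the unknown-join case to "exposed" errs on the side of flagging, which is the right bias for this CVE.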

The existence of an "ESX Admins" group your IT team did not create is a five-alarm alert. If you find one during inventory, treat it as an active intrusion until proven otherwise.

What detection works in AD and on the hypervisor?

Detection works by monitoring AD group lifecycle events and ESXi authentication logs. AD signals:

  • Event ID 4727 (global security group created) where the group name matches "ESX Admins"
  • Event ID 4728 (member added to security group) for the "ESX Admins" group from non-administrative accounts
  • Event ID 4732 / 4733 (member added/removed from local group) correlating with remote ESXi activity

Honeypot detection is highly effective here: create an empty "ESX Admins" group in AD, alert loudly on any member-add event for it, and treat any add as incident response. Several organizations that did this after the June 2024 disclosure caught intrusion attempts weeks before they escalated.
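The honeypot monitor is only a few lines of correlation logic. A sketch: the event dicts below are simplified stand-ins for Windows Security log records (field names chosen for readability, not the literal 4727/4728 XML schema).

```python
WATCHED_GROUP = "ESX Admins"

def alerts(events):
    """Yield alert strings for group-creation (4727) and member-add
    (4728) events touching the watched honeypot group.

    Event dicts are simplified stand-ins for Windows Security log
    records; in production you would map TargetUserName/MemberName
    fields from the real events into this shape.
    """
    for e in events:
        if e["EventID"] == 4727 and e["Group"] == WATCHED_GROUP:
            yield f"group created: {WATCHED_GROUP} by {e['Actor']}"
        elif e["EventID"] == 4728 and e["Group"] == WATCHED_GROUP:
            # For a pre-created honeypot group, ANY member-add is
            # incident response, not triage.
            yield f"member added to {WATCHED_GROUP}: {e['Member']} by {e['Actor']}"
```

Because the decoy group should never legitimately change, this rule has effectively zero false-positive cost.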

ESXi host signals:

  • SSH logins from domain accounts outside approved administrator lists
  • New sessions against the vSphere API from unusual source IPs
  • Datastore operations (unmount, reformat, large sequential writes) outside change windows
  • ESXi hostd.log entries showing authentication from AD accounts that should not have privileges

Post-exploitation signals include ESXi service changes (SSH enablement on hosts that normally have it off, firewall rule modifications) and ransomware-staged files in /tmp or /scratch before encryption begins.
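The host-side checks above reduce to an allowlist comparison. A sketch: the DOMAIN\user extraction pattern is illustrative and the log lines in the usage example are invented, not a literal hostd.log or auth.log format, which varies by ESXi version.

```python
import re

# Approved vSphere administrator accounts (example values).
APPROVED = {"CORP\\vsphere-admin"}

# Matches DOMAIN\user tokens in a log line. The surrounding line
# format is illustrative, not a literal hostd.log schema.
ACCOUNT = re.compile(r"[A-Za-z0-9-]+\\[A-Za-z0-9._-]+")

def unapproved_logins(lines):
    """Return domain accounts seen in auth log lines that are not
    on the approved administrator list."""
    hits = []
    for line in lines:
        for acct in ACCOUNT.findall(line):
            if acct not in APPROVED:
                hits.append(acct)
    return hits
```

Run against forwarded ESXi auth logs, this surfaces exactly the signal that matters for this CVE: a domain account authenticating to a hypervisor that no vSphere administrator should be using.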

What does the full remediation and hardening plan look like?

The full plan is patch, reconfigure, and re-delegate. Patch every ESXi 8.0 host to 8.0 Update 3 or later; ESXi 7.0 received no patch, so 7.0 hosts need the configuration workarounds or an upgrade path. Reconfigure the ESXi-to-AD mapping to use a non-default group name that only your vSphere administrators can touch, and explicitly remove the "ESX Admins" mapping. Re-delegate AD group creation so that creating security groups in the OU that ESXi trusts requires explicit administrative authorization rather than broadly delegated rights.

Concrete steps:

  1. Apply ESXi patches through vSphere Lifecycle Manager
  2. Set the advanced setting Config.HostAgent.plugins.hostsvc.esxAdminsGroup to a non-default value, and set Config.HostAgent.plugins.hostsvc.esxAdminsGroupAutoAdd to false
  3. Audit AD group creation permissions across all OUs and tighten them to tier-0 administrators only
  4. Enable vSphere audit logging forwarded to a SIEM
  5. Migrate ESXi authentication to vCenter SSO and remove direct AD join where operationally feasible
  6. Enforce MFA for all vSphere administrator accounts
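Step 2 can be verified per host once the advanced settings are pulled down (for example via PowerCLI's Get-AdvancedSetting; retrieval is not shown here). A sketch, assuming the settings arrive as a key-to-value dict:

```python
def hardened(settings):
    """Return True when the ESX Admins auto-mapping is neutralized:
    the admin group is renamed away from the default AND auto-add
    is off. `settings` maps advanced-option keys to values; defaults
    model the pre-patch behavior so missing keys read as exposed."""
    group = settings.get(
        "Config.HostAgent.plugins.hostsvc.esxAdminsGroup", "ESX Admins")
    auto_add = settings.get(
        "Config.HostAgent.plugins.hostsvc.esxAdminsGroupAutoAdd", True)
    # Some exports serialize booleans as strings, so accept both.
    return group != "ESX Admins" and auto_add in (False, "false")
```

Requiring both conditions is deliberate: renaming the group without disabling auto-add leaves the mapping live under the new name, and disabling auto-add alone still leaves a confusing default in place.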

Broader hardening beyond this CVE:

  • Air-gap vSphere management networks from user VLANs
  • Use vCenter-only administration rather than direct ESXi logins
  • Enforce datastore snapshots and immutable backups
  • Monitor ESXi datastores for anomalous file write patterns

How Safeguard.sh Helps

Safeguard.sh identifies every ESXi host in your environment running below the patched June 2024 builds through 100-level dependency scanning against both vSphere inventory exports and direct hypervisor queries. Reachability analysis cuts 60 to 80 percent of noise by distinguishing hosts that are domain-joined and exposed to the "ESX Admins" logic from those using vSphere SSO, so patch and reconfiguration priority focuses on genuinely exposed instances. Griffin AI autonomously generates vSphere configuration patches, AD delegation audit reports, and SIEM detection rules tailored to your forest layout, while container self-healing rebuilds vCenter-adjacent container workloads against patched bases. SBOM generation and ingest provide unified visibility across ESXi firmware, vCenter appliances, and related management tooling, and the TPRM workflow surfaces managed service providers whose vSphere infrastructure affects your risk posture.
