On February 1, 2024, Cloudflare published one of the most detailed and transparent breach disclosures the industry has ever seen. The blog post, authored by CEO Matthew Prince and CTO John Graham-Cumming, revealed that a suspected nation-state threat actor had accessed Cloudflare's internal Atlassian server (Confluence and Jira) over the Thanksgiving 2023 holiday period using credentials stolen during the October 2023 Okta support system breach.
What makes this incident worth studying is not the breach itself, which was contained to Cloudflare's internal Atlassian environment (wiki pages, bug tracker issues, and a set of code repositories), but the extraordinary level of detail in Cloudflare's disclosure and the lessons it provides about credential rotation failures and cascading trust compromises.
The Okta Connection
In October 2023, Okta disclosed that attackers had accessed their customer support case management system. The attackers used this access to steal session tokens and credentials from Okta customers who had uploaded HAR files (HTTP Archive files used for troubleshooting) containing active session cookies.
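HAR files capture complete request and response headers, which is why a capture uploaded for troubleshooting can carry live session cookies and bearer tokens. One defensive habit is to scrub those values before a capture ever leaves your environment. The sketch below is a minimal illustration of that idea; the header list and file names are assumptions for the example, not anything from Okta's or Cloudflare's tooling.

```python
import json

# Headers whose values commonly carry live credentials or session material.
SENSITIVE_HEADERS = {"cookie", "set-cookie", "authorization", "x-csrf-token"}

def scrub_har(in_path: str, out_path: str) -> None:
    """Redact credential-bearing headers and cookies from a HAR capture."""
    with open(in_path) as f:
        har = json.load(f)

    for entry in har.get("log", {}).get("entries", []):
        for section in (entry.get("request", {}), entry.get("response", {})):
            # Redact header values such as Cookie, Set-Cookie, Authorization.
            for header in section.get("headers", []):
                if header.get("name", "").lower() in SENSITIVE_HEADERS:
                    header["value"] = "REDACTED"
            # HAR also stores parsed cookie objects alongside the raw headers.
            for cookie in section.get("cookies", []):
                cookie["value"] = "REDACTED"

    with open(out_path, "w") as f:
        json.dump(har, f, indent=2)

if __name__ == "__main__":
    scrub_har("support-capture.har", "support-capture.scrubbed.har")
```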
Cloudflare was one of the affected Okta customers. When Okta notified them, Cloudflare's security team rotated the credentials they knew were exposed. However, they missed a small number. Specifically, one service token and three service accounts exposed in the Okta breach were not rotated:
- A Moveworks service token that granted remote access to the Atlassian environment
- A Smartsheet service account with administrative access to the Atlassian Jira instance
- A Bitbucket service account used to access the source code management system
- An AWS environment credential with no access to the global network or customer data
These oversights were, by Cloudflare's own account, the result of an incorrect assumption that the credentials were unused or non-functional. The Moveworks token and the Smartsheet service account, in particular, were the ones the attacker would use to get into the Atlassian environment.
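One way to guard against exactly this failure mode is to treat a vendor's breach notification as a checklist that gets verified mechanically rather than from memory. The following is a minimal sketch of that check, assuming a simple internal credential inventory; the `Credential` fields, vendor names, and dates are hypothetical placeholders, not Cloudflare's actual tooling or data.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Credential:
    name: str
    vendor: str                 # third party that held or could have observed it
    last_rotated: datetime
    last_used: datetime | None  # None means no recorded use

# Hypothetical inventory; in practice this would come from a secrets manager
# or an internal asset database, not a hard-coded list.
INVENTORY = [
    Credential("wiki-svc-token", "identity-vendor",
               datetime(2023, 9, 1, tzinfo=timezone.utc), None),
    Credential("ci-deploy-key", "identity-vendor",
               datetime(2023, 11, 2, tzinfo=timezone.utc),
               datetime(2023, 11, 10, tzinfo=timezone.utc)),
]

def unrotated_since(vendor: str, incident: datetime) -> list[Credential]:
    """Credentials exposed via `vendor` that have not been rotated since `incident`."""
    return [c for c in INVENTORY if c.vendor == vendor and c.last_rotated < incident]

if __name__ == "__main__":
    incident = datetime(2023, 10, 18, tzinfo=timezone.utc)  # vendor disclosure date (illustrative)
    for cred in unrotated_since("identity-vendor", incident):
        # "Assumed unused" is not a safe reason to skip rotation: flag it regardless.
        usage = cred.last_used.isoformat() if cred.last_used else "no recorded use"
        print(f"ROTATE: {cred.name} ({usage})")
```

The point of the `last_used` field is that "we think it's unused" becomes a recorded observation to review, not a reason to skip rotation.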
The Attack Timeline
November 14, 2023: The attacker began probing Cloudflare's systems using the stolen credentials, testing access to various services and confirming that the unrotated credentials still worked.
November 15-16: The attacker accessed Cloudflare's Confluence wiki and Jira bug tracker using the compromised Smartsheet service account. They searched for topics related to remote access, secrets management, and Cloudflare's network architecture.
November 20-21: The attacker returned and created their own Atlassian user account, establishing persistent access that would survive a credential rotation of the original service account.
November 22: The attacker installed the Sliver adversary emulation framework on the Atlassian server, establishing a more robust command-and-control channel.
November 23 (Thanksgiving Day): Cloudflare's security team detected the intrusion through its Security Information and Event Management (SIEM) system and began incident response; the compromised service account was deactivated within roughly 35 minutes of detection. (A hedged sketch of one common detection heuristic of this kind follows this timeline.)
November 24 onward: Cloudflare cut off all remaining attacker connectivity and began a remediation effort that ultimately rotated more than 5,000 production credentials, physically segmented test and staging systems, and performed forensic triage on nearly 5,000 systems.
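Cloudflare has not published the specific detection logic that fired on November 23, but a common heuristic for catching service-account misuse is alerting when an account authenticates from a source it has never been seen using before. The sketch below illustrates that pattern over simplified auth events; the log schema, account naming convention, and hard-coded baseline are assumptions for the example, not Cloudflare's implementation.

```python
from collections import defaultdict

# Baseline of source IPs each service account has historically authenticated from.
# In practice this would be built from weeks of auth logs, not hard-coded.
BASELINE = {
    "jira-svc": {"10.12.0.4", "10.12.0.5"},
}

def detect_anomalies(events: list[dict]) -> list[dict]:
    """Flag service-account logins from sources outside their historical baseline."""
    alerts = []
    seen = defaultdict(set, {k: set(v) for k, v in BASELINE.items()})
    for ev in events:
        account, src = ev["account"], ev["source_ip"]
        if not account.endswith("-svc"):
            continue  # only service accounts are baselined in this sketch
        if src not in seen[account]:
            alerts.append({"account": account, "source_ip": src, "ts": ev["ts"]})
            seen[account].add(src)  # alert once per newly observed source
    return alerts

if __name__ == "__main__":
    events = [
        {"account": "jira-svc", "source_ip": "10.12.0.4", "ts": "2023-11-13T08:00:00Z"},
        {"account": "jira-svc", "source_ip": "203.0.113.77", "ts": "2023-11-14T02:13:00Z"},
    ]
    for alert in detect_anomalies(events):
        print(f"ALERT: {alert['account']} from new source {alert['source_ip']} at {alert['ts']}")
```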
What the Attacker Accessed
Cloudflare was specific about what the attacker did and did not access. The attacker viewed approximately 120 code repositories in Cloudflare's Atlassian environment, of which an estimated 76 were exfiltrated. These repositories related primarily to backup operations, network configuration and management, identity management, remote access tooling, and Cloudflare's use of Terraform and Kubernetes.
Critically, the attacker did not access Cloudflare's customer data, customer-facing systems, or the core infrastructure that operates the Cloudflare network. They were contained to the Atlassian environment and the single server where they installed Sliver.
The attacker also attempted to access a data center in São Paulo, Brazil, that Cloudflare had not yet put into production. Cloudflare returned all equipment from that data center to the manufacturers for replacement, treating the hardware as potentially compromised despite finding no evidence of successful access.
Why the Disclosure Stands Out
Cloudflare's post-mortem was remarkable for several reasons. They named the exact credentials that were not rotated and explained why they were missed. They provided a detailed timeline with specific dates and actions. They described the attacker's movements through their environment, including what was searched for in Confluence. And they acknowledged the failure directly, attributing the missed rotations to a mistaken belief that the credentials were unused.
This level of transparency is rare. Most breach disclosures are carefully lawyered statements that minimize detail and avoid specifics. Cloudflare's approach gave the security community actionable information and set a standard for how organizations should communicate about incidents.
The Cascading Trust Problem
The deeper lesson here is about cascading trust failures. Okta was breached, which compromised Cloudflare's credentials, which gave the attacker access to Cloudflare's internal systems. Each layer of trust, from Okta's support system to the service account credentials to the Atlassian deployment, was a link in a chain.
This cascading effect is increasingly common. Organizations rely on dozens or hundreds of SaaS providers, each of which has their own security posture. A breach at any one of them can produce credentials or tokens that unlock access elsewhere. The attack surface is not just your own systems; it is the union of every system that holds credentials to your environment.
Managing this requires a systematic approach to credential inventory and rotation. When Okta notified affected customers, Cloudflare needed to identify every credential that could have been exposed and rotate all of them. The fact that they still missed four shows how difficult this is in practice, even for a security-mature organization like Cloudflare.
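It also helps to be able to answer, for any single vendor, the blast-radius question: if that vendor is compromised, what inside your environment becomes reachable? Below is a minimal sketch of that query over a hypothetical vendor-to-credential mapping; the vendor names, credentials, and systems are illustrative, not drawn from the Cloudflare incident.

```python
# Hypothetical mapping of third-party vendors to the credentials they hold
# (or could have observed) and the internal systems those credentials unlock.
VENDOR_CREDENTIALS = {
    "identity-provider": [
        {"credential": "wiki-svc-token", "unlocks": ["confluence", "jira"]},
        {"credential": "scm-svc-account", "unlocks": ["bitbucket"]},
    ],
    "ticketing-saas": [
        {"credential": "ops-api-key", "unlocks": ["on-call paging"]},
    ],
}

def blast_radius(vendor: str) -> set[str]:
    """Return every internal system reachable if this vendor is compromised."""
    reachable: set[str] = set()
    for entry in VENDOR_CREDENTIALS.get(vendor, []):
        reachable.update(entry["unlocks"])
    return reachable

if __name__ == "__main__":
    print(sorted(blast_radius("identity-provider")))  # -> ['bitbucket', 'confluence', 'jira']
```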
How Safeguard.sh Helps
Safeguard.sh supports organizations in managing the kind of cascading supply chain risk that the Cloudflare-Okta incident exemplifies. By maintaining a comprehensive inventory of your software components and their dependencies, Safeguard.sh helps ensure that when a vendor discloses a breach, you can quickly assess your exposure. Our policy gate system enforces security standards across your deployment pipeline, ensuring that known-vulnerable components and compromised integrations are identified before they become attack vectors. The continuous monitoring approach means you are not relying on manual audits to catch the credential or component that slipped through the cracks.