Cloud Security

Oracle Cloud Tenancy Disclosure Practices: A Defender's Read

Oracle Cloud's disclosure cadence and tenancy isolation story have been pressure-tested across multiple incidents. We unpack what defenders should ask of their provider regardless of vendor.

Alex
Security Engineer
7 min read

The conversation around Oracle Cloud Infrastructure's incident disclosure practices over the last eighteen months has produced something more useful than the immediate news cycles around any specific event: a working list of questions every defender should be able to answer about their cloud provider, regardless of which provider that is. The questions are not new. The pressure to answer them clearly is. After the broad cloud incidents of 2024 and into 2026 — across multiple hyperscalers and the long-tail providers — security teams are no longer satisfied with "we are a trusted provider, please trust us." Public incident retrospectives, status-page archives, and the contractual fine print are getting read carefully. This is what a defender should actually look at.

What does tenancy isolation buy you, and where are the seams?

Cloud tenancy isolation is a property the provider asserts and the customer cannot directly verify. The provider says "your data is logically separated from every other customer's data, and our internal access controls prevent cross-tenant access." The customer's options are to trust the assertion, verify it through external attestation (SOC 2, ISO 27001, FedRAMP, vendor-specific audits), and examine the seams where multi-tenant components meet single-tenant ones. The seams that have produced the loudest historical incidents across providers are:

- Shared identity infrastructure: a compromise of the IdP layer touches every tenant.
- Shared metadata services: the IMDS class of attacks has analogues across providers.
- Shared scheduling and orchestration: a control-plane vulnerability can affect all hosted workloads.
- Shared support tooling: a compromise of the support-engineer console gives broad cross-tenant visibility.
- Shared logging and telemetry pipelines.

Defenders should know which of these their provider relies on, which the provider's audit reports cover, and which the provider's contract makes explicit.

What should you expect in an incident disclosure?

A useful incident disclosure has six things: a clear statement of what happened, a timeline that distinguishes detection from disclosure, an inventory of which customers and which data categories were affected, the root cause at a level of detail that lets affected customers reason about their exposure, the remediation that has been applied, and the longer-term changes the provider is making to prevent recurrence. Disclosures that punt on the affected-customer inventory ("a small number of customers were affected") are not useful for defenders who need to know whether they were in that number. Disclosures that punt on root cause ("an authentication issue was identified") are not useful for defenders who need to know whether the issue affects their architecture more broadly. The pressure to push providers toward fuller disclosure has come from regulators, large enterprise customers, and — increasingly — public security research that reconstructs incidents from indirect signals.
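The six elements above lend themselves to a mechanical first pass. A minimal sketch, assuming a hypothetical key-value export of a disclosure; the field names and contents are placeholders, not any provider's real schema:

```shell
# Hypothetical sketch: score a disclosure against the six elements.
cat > /tmp/disclosure.txt <<'EOF'
summary: credential validation flaw in a shared login service
detected: 2026-01-04T09:12:00Z
disclosed: 2026-01-06T17:00:00Z
affected: tenancies using federated login in two regions
root_cause: pending
remediation: patched login service, rotated service credentials
EOF

score=0
for field in summary detected disclosed affected root_cause remediation; do
  # A field that is present but says "pending" or "unknown" does not count.
  value=$(grep "^${field}:" /tmp/disclosure.txt | cut -d: -f2-)
  case "$value" in
    ""|*pending*|*unknown*) echo "MISSING  $field" ;;
    *) echo "present  $field"; score=$((score + 1)) ;;
  esac
done
echo "disclosure completeness: $score/6"
```

The useful part is not the score itself but the habit: a disclosure that punts on root cause or affected scope fails the check even though it technically "has" the field.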

# Pull OCI Audit events for the tenancy window and filter to authentication
# activity; extend the JMESPath query to flag regions you do not operate in
oci audit event list \
  --compartment-id "$TENANCY_OCID" \
  --start-time "$START_TIMESTAMP" \
  --end-time "$END_TIMESTAMP" \
  --query "data[?contains(\"event-name\",'Authenticate')]"

How should you read your provider's status page archive?

Status pages are designed to communicate during incidents, not to be archived for review. Reading the archive carefully reveals patterns that the marketing-facing summary obscures: the cadence of degradations in specific regions, the recurrence of certain error classes, the gap between when an issue starts to bite customers and when it appears on the status page. For supply chain reasoning, the patterns that matter most are: identity outages (because they break credential rotation and incident response when you need it most), control-plane outages (because they prevent you from scaling response actions), and dependency outages where the cloud provider's own dependency on another provider's service caused the impact. The June 2025 Workers KV incident, for example, surfaced a dependency that many Cloudflare customers had not modeled — Cloudflare's own use of a GCP-hosted service for parts of the KV control plane. Reading status archives across providers reveals these dependencies before they bite.
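Reading an archive at scale means extracting that impact-to-posting gap mechanically. A minimal sketch over a hypothetical CSV export of the archive; real status pages require scraping or an API, and the timestamps here are illustrative:

```shell
# Sketch: mine a status-page archive for posting lag.
# Columns: region, when customer impact started, when the first post appeared.
cat > /tmp/status_archive.csv <<'EOF'
region,impact_start,first_post
us-east,2026-02-03T14:05Z,2026-02-03T14:52Z
eu-central,2026-02-19T08:10Z,2026-02-19T09:40Z
us-east,2026-03-07T22:30Z,2026-03-07T22:41Z
EOF

# Lag in minutes between customer impact and the first public post.
awk -F, 'NR > 1 {
  split(substr($2, 12, 5), s, ":");  # HH:MM of impact start
  split(substr($3, 12, 5), p, ":");  # HH:MM of first post
  lag = (p[1]*60 + p[2]) - (s[1]*60 + s[2])
  printf "%-12s lag %d min\n", $1, lag
}' /tmp/status_archive.csv
```

A trend of widening lag in one region, or recurring identity-class entries, is exactly the pattern a single incident's status page will never show you.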

What does a meaningful contract look like?

The cloud-provider contracts most enterprises sign are weighted toward the provider. There are nonetheless terms that matter for supply chain defense. Look for:

- A defined data classification scheme with corresponding handling commitments.
- Security incident notification windows that match your regulatory obligations (most enterprises in 2026 need notification within 24 to 72 hours, not "as soon as practicable").
- The right to audit: even if you never exercise it, the right itself reshapes how the provider treats your account.
- Data residency commitments enforceable through technical means rather than policy alone.
- Deletion guarantees with verification.
- Indemnification for breaches caused by the provider's own infrastructure or personnel.

The asymmetry of negotiating power means many of these terms appear only for very large customers or in regulated-industry SKUs. The defender's job is to know which terms are in your specific contract, which are missing, and what compensating controls cover the gaps.
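The notification-window term only matters if you can state the deadline precisely during an incident. A small sketch of the clock arithmetic, assuming GNU date; the detection timestamp and window are illustrative, and your contract defines what actually starts the clock:

```shell
# Sketch: compute the provider's notification deadline for a detection.
DETECTED="2026-01-04T09:12:00Z"
WINDOW_HOURS=72   # substitute the window from your own contract

deadline=$(date -u -d "$DETECTED + $WINDOW_HOURS hours" +%Y-%m-%dT%H:%M:%SZ)
echo "detected:  $DETECTED"
echo "notify by: $deadline"
```

Wiring this into your incident tooling turns a contract clause into an alarm rather than a post-incident argument.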

How do you actually verify what is in your account?

Cloud providers offer audit logging for the control plane, but defenders consistently underestimate how much they need to retain. The minimum is: every authentication event, every authorization decision, every administrative action against the tenancy, every API call to security-relevant services (KMS, IAM, network ACLs), and every data-plane access that can be logged at acceptable cost. Retention should be at least one year for compliance-relevant logs, ideally three years for forensic value. The logs should live somewhere the cloud provider cannot tamper with — a different provider, an on-prem write-once store, or a customer-managed encrypted archive. The reason for this is straightforward: in the worst-case scenario where the provider's own infrastructure is compromised, you need to be able to reconstruct what happened in your tenancy without depending on the same compromised infrastructure for the evidence.
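The write-once requirement can be approximated on the customer side with a hash chain over exported logs, so any later rewrite of history is detectable. A sketch under illustrative paths; a real deployment would anchor the chain head in a store outside the provider:

```shell
# Sketch: tamper-evidence for exported audit logs. Each day's export is
# hashed together with the previous hash, so editing or replacing any file
# changes every subsequent chain entry.
mkdir -p /tmp/audit_archive && cd /tmp/audit_archive
printf 'day1 events\n' > 2026-03-01.log
printf 'day2 events\n' > 2026-03-02.log

rm -f chain.txt
prev="genesis"
for f in 2026-03-01.log 2026-03-02.log; do
  # Chain: hash(previous hash || file contents)
  prev=$( { printf '%s' "$prev"; cat "$f"; } | sha256sum | cut -d' ' -f1)
  echo "$prev  $f" >> chain.txt
done
cat chain.txt

# Verification replays the chain from "genesis"; a mismatch anywhere means
# the archive was altered after the fact.
```

This is the property you are buying when the logs live in a write-once store the provider cannot touch: the evidence survives even the worst-case scenario the paragraph above describes.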

What does cross-provider portability buy you?

The strongest defense against any single provider's incident is to not be wholly dependent on that provider. This is not the same as multi-cloud-for-its-own-sake — running every workload in two clouds doubles the operational complexity and is rarely justified. The portability that actually pays off is at the data layer (backups in a different provider, with the keys held by you), at the identity layer (a federation source that is not the same provider as the workloads), and at the incident response layer (the ability to spin up minimal services in a different provider during a prolonged outage of your primary). Designing for this from the start is much cheaper than retrofitting it during an incident, and the design discipline has the side effect of forcing clearer thinking about what your dependencies actually are.
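The data-layer pattern (backups in a different provider, keys held by you) reduces to encrypt-before-ship plus a regular restore drill. A sketch using openssl with placeholder paths; the upload step is illustrative only:

```shell
# Sketch: customer-held-key backup for cross-provider portability.
printf 'tenancy export\n' > /tmp/backup.tar   # stand-in for a real export

# Key generated and stored outside both providers (e.g. an offline KMS).
openssl rand -hex 32 > /tmp/backup.key

openssl enc -aes-256-cbc -pbkdf2 -salt \
  -in /tmp/backup.tar -out /tmp/backup.tar.enc \
  -pass file:/tmp/backup.key

# Ship only the ciphertext to the secondary provider, e.g.:
#   aws s3 cp /tmp/backup.tar.enc s3://example-secondary-bucket/  # placeholder

# Restore drill: decryption must work with the primary provider offline.
openssl enc -d -aes-256-cbc -pbkdf2 \
  -in /tmp/backup.tar.enc -out /tmp/backup.restored \
  -pass file:/tmp/backup.key
cmp /tmp/backup.tar /tmp/backup.restored && echo "restore verified"
```

The drill is the part teams skip: an encrypted backup you have never restored without the primary provider's help is not portability, it is hope.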

What should a defender ask their provider this quarter?

Five questions worth bringing to your account team:

1. Where does the provider's identity service depend on infrastructure that is not the provider's own?
2. What is the notification SLA for security incidents, and what triggers the clock?
3. What customer-managed key options exist for the services you use, and which require provider-managed keys regardless?
4. What is the provider's last published incident retrospective, and what changes did it produce in their architecture?
5. What is the formal escalation path for security concerns that surface outside an active incident? The time to find out is not when you need it.

The answers form a baseline; the willingness or unwillingness to provide them is itself a signal.

How Safeguard Helps

Safeguard treats your cloud providers as third parties in the supply chain TPRM model, scoring them continuously against incident disclosure cadence, audit report freshness, contract terms, and customer-managed-key coverage of the services you actually use. Griffin AI maps your workload dependencies across providers, surfacing the indirect cross-provider couplings (your Workers using GCP-hosted dependencies, your Lambda calling Oracle-hosted services, your Azure functions relying on AWS Secrets Manager) that incident retrospectives reveal you should know about. Policy gates block infrastructure-as-code changes that introduce single-provider dependencies for tier-zero capabilities like identity, signing, and audit log custody. Continuous monitoring of provider status pages and disclosure feeds means an incident that affects you is correlated with your actual deployed inventory and the responsible teams in minutes rather than after the postmortem is published.
