The Google and Google-Beta Terraform providers are, collectively, the most-installed infrastructure-as-code providers for GCP. If you manage any GCP environment at scale, you are almost certainly using them. They are also a non-trivial supply-chain trust anchor: a compromised provider version could rewrite IAM policies, drop KMS key versions, or exfiltrate state contents on execution.
I spent the last two months of 2024 doing a structured security review of both providers, the tooling around them, and the deployment patterns my clients use. This post is the write-up. It is neither a takedown nor a rubber stamp; it is a working assessment of where the trust is well placed, where it is thinner than people assume, and where the configuration you control can compensate.
Provenance: what you actually get
The hashicorp/google and hashicorp/google-beta providers are published to the Terraform Registry. Providers distributed through the registry must be signed with the publisher's GPG key, and terraform init verifies the signature before installing anything; for the 6.x releases current at the time of this review, that publisher is HashiCorp. The release process runs from the hashicorp/terraform-provider-google repository on GitHub: a workflow builds the binaries, signs the checksums, and publishes the result.
The supply chain for a provider install is therefore: your lockfile pins a version, terraform init downloads the provider archive, the signature is verified against HashiCorp's key, and the archive is extracted into the .terraform/providers tree. Each link in the chain is reasonable, but each has a failure mode.
The lockfile is the link you control. The .terraform.lock.hcl file in your repository should pin each provider to a specific version and include the expected hashes (the h1: hashes, which cover the extracted provider contents, are the ones that matter). Commit it, treat changes to it the way you treat changes to dependency lockfiles, and require review on any update. If someone bumps the provider and the hashes change, the review is what catches the unusual case. I have not seen a malicious provider release in the wild, but I have seen lockfile bumps that went through because nobody was paying attention.
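A minimal pin looks like this (version numbers are illustrative, not a recommendation):

```hcl
terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "6.14.1" # exact pin; bumped only via reviewed PRs
    }
    google-beta = {
      source  = "hashicorp/google-beta"
      version = "6.14.1"
    }
  }
}
```

After a bump, terraform providers lock -platform=linux_amd64 -platform=darwin_arm64 (list every platform your team runs init on) refreshes the lockfile with h1: hashes for each platform, so verification holds regardless of who runs the pipeline.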
Authentication paths and their failure modes
The providers authenticate to GCP through one of several paths. Listed in order of how often I recommend them:
- Workload Identity Federation, typically from the CI system that runs Terraform. This is the right answer for most automated pipelines.
- Application Default Credentials from a metadata server when running on GCE or GKE with a bound service account. Appropriate for human-less automation.
- Impersonation via gcloud auth application-default login and --impersonate-service-account for interactive workflows. Short-lived, least-privilege, and reviewable.
- Service account key files referenced by GOOGLE_APPLICATION_CREDENTIALS. Historical and still common; plan to migrate.
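For the interactive path, the provider-level impersonation is a one-line addition; project and service-account names below are hypothetical, and the operator needs roles/iam.serviceAccountTokenCreator on the target account:

```hcl
provider "google" {
  project = "example-app-prod" # hypothetical project
  region  = "us-central1"

  # The base identity comes from ADC (gcloud auth application-default login);
  # the provider exchanges it for short-lived tokens as this service account.
  impersonate_service_account = "terraform@example-app-prod.iam.gserviceaccount.com"
}
```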
The authentication is handled by the providers through the standard GCP Go auth libraries, which I have no criticisms of. The misconfigurations are in the layer above. A few patterns I consistently see:
Long-lived service account keys stored as CI secrets with no rotation schedule. The blast radius when these leak is the union of every role the service account holds, which for a Terraform runner is often broad. Migrate to Workload Identity Federation; it is strictly better.
The impersonate_service_account provider block configured to impersonate a single service account with project-wide admin roles. This collapses the separation between modules; every module running under the same root impersonates the same privileged identity. Split the configuration so that each module-category impersonates a narrower identity (network, IAM, application).
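One way to sketch the split is provider aliases per category, each impersonating a narrower identity and passed explicitly to the modules that need it (service-account names and module path are hypothetical):

```hcl
provider "google" {
  alias                       = "network"
  impersonate_service_account = "tf-network@example-prod.iam.gserviceaccount.com"
}

provider "google" {
  alias                       = "iam"
  impersonate_service_account = "tf-iam@example-prod.iam.gserviceaccount.com"
}

module "vpc" {
  source = "./modules/network" # hypothetical module path
  providers = {
    google = google.network # this module never sees the IAM-admin identity
  }
}
```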
GOOGLE_OAUTH_ACCESS_TOKEN exported with a manually generated token, used to avoid the impersonation rigmarole. These tokens end up in shell history, CI logs, and the occasional Pastebin. Do not do this.
State handling is half the security story
Terraform state contains, quite literally, the current inventory of your infrastructure. For GCP, that means resource names, IAM policies, KMS key references, VPC CIDRs, and occasionally literal secret values that leak into state through resources that include sensitive attributes. Protecting state is as important as protecting the provider itself.
The default state backend for GCP workloads is the gcs backend. Configure it with a dedicated bucket per environment, with object versioning enabled, customer-managed encryption keys (CMEK) through Cloud KMS, uniform bucket-level access, and Public Access Prevention enabled. IAM on the bucket is restricted to the service account that runs Terraform (with roles/storage.objectAdmin scoped to the bucket) and a narrow human operator group for emergency access.
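A sketch of that configuration, with hypothetical bucket and key names; note the bootstrapping wrinkle that the bucket itself is usually created from a separate, minimal configuration before any other state can land in it:

```hcl
terraform {
  backend "gcs" {
    bucket = "example-tf-state-prod" # hypothetical, one bucket per environment
    prefix = "envs/prod"
  }
}

resource "google_storage_bucket" "tf_state" {
  name                        = "example-tf-state-prod"
  location                    = "US"
  uniform_bucket_level_access = true
  public_access_prevention    = "enforced"

  versioning {
    enabled = true
  }

  encryption {
    # CMEK; the crypto key resource is assumed to exist elsewhere.
    default_kms_key_name = google_kms_crypto_key.state.id
  }
}
```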
Set prevent_destroy on the state bucket itself, and route Cloud Audit Logs for the bucket to a security sink. A storage.objects.delete event on a state object is the signal for a state-manipulation attempt: normal applies overwrite the state object rather than deleting it, and versioning retains the prior generations. (The transient .tflock lock object is deleted on every run, so scope any alert to the state objects themselves.)
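The prevent_destroy setting is a lifecycle block on the bucket resource; the audit-log routing can be sketched as a project log sink (names and destination are hypothetical, and object-level delete events only appear if Data Access audit logging is enabled for Cloud Storage):

```hcl
resource "google_logging_project_sink" "state_deletes" {
  name        = "tf-state-delete-events"
  destination = "pubsub.googleapis.com/projects/example-sec/topics/state-alerts"

  # Fires on object deletion in the state bucket. Requires Data Access
  # (DATA_WRITE) audit logs enabled for Cloud Storage in this project.
  filter = <<-EOT
    resource.type="gcs_bucket"
    resource.labels.bucket_name="example-tf-state-prod"
    protoPayload.methodName="storage.objects.delete"
  EOT
}
```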
State locking uses a separate GCS lock object created next to the state, and locks orphaned by a crashed pipeline are a routine cause of blocked runs. Clearing one with terraform force-unlock requires permission to delete the lock object, which roles/storage.objectAdmin grants; grant it sparingly and audit any time it is used.
The provider surface itself
The Google providers export several hundred resource types. The ones most relevant to security review are:
google_iam_policy, google_project_iam_policy, google_*_iam_binding, and google_*_iam_member. The policy and binding resources are authoritative and replace the entire policy; the member resource adds a single binding. Mixing authoritative and non-authoritative resources on the same policy is a common source of IAM drift, and it has appeared in incidents where a member was silently dropped by a policy resource that did not know about it. Use one pattern per policy, document it, and lint it.
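The distinction in practice, with hypothetical project and principal names; the rule is that a given role on a given resource is managed either authoritatively or additively, never both:

```hcl
# Authoritative: this binding owns the complete member list for the role.
# Any member added out-of-band, or by a _member resource, will be removed
# on the next apply.
resource "google_project_iam_binding" "log_viewers" {
  project = "example-app-prod"
  role    = "roles/logging.viewer"
  members = [
    "group:platform-oncall@example.com",
  ]
}

# Additive: appends one member and leaves the rest of the policy alone.
# Safe to use from multiple configurations, but never alongside a
# binding or policy resource targeting the same role.
resource "google_project_iam_member" "ci_deployer" {
  project = "example-app-prod"
  role    = "roles/run.developer"
  member  = "serviceAccount:ci@example-app-prod.iam.gserviceaccount.com"
}
```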
google_kms_crypto_key and google_kms_key_ring. Keys cannot be deleted, only scheduled for destruction, which is a feature. However, a Terraform destroy on a crypto key schedules destruction with the default 30-day delay, and during that window the key is unusable. Set destroy_scheduled_duration explicitly and consider setting prevent_destroy = true on production keys.
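A sketch of a hardened production key under those recommendations, with hypothetical names and an explicitly chosen destruction window:

```hcl
resource "google_kms_key_ring" "security" {
  name     = "security"
  location = "us-central1"
}

resource "google_kms_crypto_key" "prod" {
  name                       = "app-data"
  key_ring                   = google_kms_key_ring.security.id
  rotation_period            = "7776000s" # rotate every 90 days
  destroy_scheduled_duration = "604800s"  # explicit 7-day window, not the default

  # Terraform refuses to plan a destroy of this resource at all.
  lifecycle {
    prevent_destroy = true
  }
}
```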
google_service_account_key. This resource creates a JSON key file and stores its private key in state. That is a secret in state, plain and simple. Avoid this resource entirely; use Workload Identity Federation or impersonation.
google_binary_authorization_policy. The policy is a project-wide resource. Changes to it should go through a separate approval flow; I put the file that holds it under CODEOWNERS ownership by the platform-security team.
Beta provider considerations
The hashicorp/google-beta provider mirrors the google provider but exposes resources that target the beta GCP APIs. The resources in beta are less stable; API contracts can change, and fields may be removed. From a security standpoint, the beta provider is fine, but it is a commitment to tracking the deprecation churn.
Isolate beta resources in their own modules, and review the upstream GitHub repository changelogs on each provider bump. Breaking changes are flagged, and the review should evaluate whether a beta resource has graduated to general availability and should be migrated to the stable provider.
Static analysis and policy as code
Run static analysis on the Terraform configuration before it reaches state. Checkov, tfsec, and the Google-supported Terraform Validator all catch classes of misconfiguration that the providers will happily accept. The rules worth enforcing: no public Cloud Storage buckets, no service account keys as resources, no allUsers or allAuthenticatedUsers IAM bindings on sensitive resources, KMS encryption on GCS and BigQuery, and Binary Authorization enabled on production projects.
Layer on organization policies through google_org_policy_policy or the legacy google_organization_policy resource. The organization policy constraints are enforced by GCP regardless of what Terraform says, so they are the durable fallback if Terraform drift creeps in.
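For example, the constraint that forbids service account key creation, which backstops the advice against google_service_account_key above, can be pinned at the project level (project name hypothetical):

```hcl
resource "google_org_policy_policy" "no_sa_keys" {
  name   = "projects/example-app-prod/policies/iam.disableServiceAccountKeyCreation"
  parent = "projects/example-app-prod"

  spec {
    rules {
      # Enforced by GCP itself; key creation fails even if a Terraform
      # configuration or a human with the right roles attempts it.
      enforce = "TRUE"
    }
  }
}
```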
How Safeguard Helps
Safeguard inventories the Terraform provider versions and module sources in use across your organization, checks lockfile integrity against known-good releases, and correlates the deployed GCP resources with the Terraform configurations that produced them. Authentication patterns, state bucket configuration, and use of high-risk resources like service account keys surface as findings against the specific project and module they affect. When a provider release or module dependency is disclosed to be compromised, Safeguard identifies the exact pipelines and state resources at risk and provides a remediation path. The result is a Terraform program where the supply chain of your infrastructure-as-code is as well understood as the supply chain of your application code.