Cloud Functions is often treated as a peer to Cloud Run in security conversations. The two services share plenty of infrastructure, but their supply-chain risk profiles are genuinely different, and treating them as interchangeable misses the risks that matter most on Functions specifically. This post walks through the supply-chain surface of Cloud Functions as it stands in 2024, focusing on the second-generation service that went generally available in August 2022, and on the places where the defaults either help you or quietly work against you.
The hidden build step
When you deploy a Cloud Function from source, Google does not just copy your code to a VM. It runs a Cloud Native Buildpacks build, using the Google Cloud Buildpacks project's language-specific builders, and packages the result as a container image stored in Artifact Registry. For a Python function, that means a Paketo-style buildpack detects the runtime, pulls dependencies with pip, and produces an image. For Node.js, it runs npm ci. For Go, it runs go build.
This is convenient, but it is also the supply-chain step that most teams fail to account for. The buildpack pulls from whatever package repositories the builder is configured to use, which for the default Google Cloud Buildpacks builder is the public Python Package Index, the public npm registry, and the public Go module proxy. If an attacker can push a compromised package to one of those and your function's dependencies resolve to it, your next deploy pulls the compromise.
The mitigation is to pin your dependencies with lockfiles. For Python, commit a requirements.txt with exact versions and hashes (--require-hashes). For Node.js, commit package-lock.json and ensure npm ci is what runs, which is the buildpack default but worth verifying. For Go, modules are hash-locked via go.sum, and Go 1.16 and later build with -mod=readonly by default; setting GOFLAGS=-mod=readonly explicitly still guards older toolchains against implicit go.mod updates.
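The pinning steps above can be sketched per runtime. This assumes pip-tools is installed for the Python case; the file names are the conventional ones, not anything Cloud Functions mandates.

```shell
# Python: compile a fully pinned, hash-locked requirements.txt from
# requirements.in (pip-tools). The buildpack's pip install then verifies
# every hash.
pip-compile --generate-hashes --output-file=requirements.txt requirements.in

# Node.js: refresh the lockfile without touching node_modules; the
# buildpack's npm ci installs exactly what package-lock.json records and
# fails on any mismatch.
npm install --package-lock-only

# Go: confirm go.sum covers every module in the build before deploying.
go mod verify
```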
Even with pinning, the buildpack fetches. If you want to isolate from the public registries entirely, deploy Cloud Functions through Cloud Build with a custom build configuration that uses a private package mirror. This is more work, but for regulated workloads it is the honest answer.
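One way to realize that custom build is to run the Google Cloud buildpacks builder yourself in Cloud Build, pointing pip at a private Artifact Registry mirror. This is a sketch under assumptions: the project, repository, and function names are placeholders, and the pack step runs the same builder the managed path uses. Deploying the resulting image is a separate step.

```shell
# Build the function image in your own Cloud Build pipeline, resolving
# Python packages against a private mirror instead of the public index.
cat > cloudbuild.yaml <<'EOF'
steps:
  - name: 'gcr.io/k8s-skaffold/pack'
    entrypoint: pack
    args:
      - build
      - 'us-docker.pkg.dev/my-project/functions/my-fn:$SHORT_SHA'
      - '--builder=gcr.io/buildpacks/builder'
      # Entry point the Functions Framework should invoke (placeholder).
      - '--env=GOOGLE_FUNCTION_TARGET=handler'
      # pip honors this at build time, isolating the build from PyPI.
      - '--env=PIP_INDEX_URL=https://us-python.pkg.dev/my-project/pypi-mirror/simple/'
images:
  - 'us-docker.pkg.dev/my-project/functions/my-fn:$SHORT_SHA'
EOF
gcloud builds submit --config cloudbuild.yaml .
```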
The runtime identity and the same traps as Cloud Run
Like Cloud Run, Cloud Functions second generation uses the Compute Engine default service account by default, which historically carries broad roles. The same remediation applies: create a dedicated service account per function, grant it only the roles it needs, and attach it to the function via --service-account.
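Wiring that up looks roughly like the following; the account, project, and role names are placeholders, and the role grant should match what the function actually touches.

```shell
# Create a dedicated runtime identity for one function.
gcloud iam service-accounts create fn-orders-sa \
  --display-name="orders function runtime"

# Grant only the roles the function needs (placeholder role shown).
gcloud projects add-iam-policy-binding my-project \
  --member="serviceAccount:fn-orders-sa@my-project.iam.gserviceaccount.com" \
  --role="roles/pubsub.publisher"

# Attach the account at deploy time so the default is never used.
gcloud functions deploy orders \
  --gen2 --runtime=python312 --region=us-central1 \
  --trigger-http --no-allow-unauthenticated \
  --service-account=fn-orders-sa@my-project.iam.gserviceaccount.com
```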
The trap specific to Cloud Functions is the first-generation service. First-gen functions, which are still deployable as of this writing, use a different service account (PROJECT_ID@appspot.gserviceaccount.com, the App Engine default), and migrating between the two generations is not transparent from an identity perspective. If you have a mix, audit both defaults, and schedule the first-gen migration to happen before the deprecation window closes.
Organization policy helps here. Set constraints/iam.automaticIamGrantsForDefaultServiceAccounts to enforced to stop new projects from inheriting the broad default, and audit existing grants with a Cloud Asset Inventory query that filters for bindings involving either default service account.
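Both steps can be sketched as follows; the organization ID is a placeholder, and the search-query syntax should be checked against the Cloud Asset Inventory query reference for your gcloud version.

```shell
# Enforce the constraint so new projects stop auto-granting Editor to
# default service accounts.
gcloud resource-manager org-policies enable-enforce \
  iam.automaticIamGrantsForDefaultServiceAccounts \
  --organization=123456789

# Find existing bindings that involve either default service account.
gcloud asset search-all-iam-policies \
  --scope=organizations/123456789 \
  --query='policy:"compute@developer.gserviceaccount.com" OR policy:"appspot.gserviceaccount.com"'
```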
Secrets, environment variables, and the wrong defaults
Cloud Functions supports two mechanisms for sensitive configuration: environment variables set at deploy time (--set-env-vars) and Secret Manager integration (--set-secrets). The first is convenient and wrong for secrets. Environment variables set this way are visible in the function configuration, which is accessible to anyone with roles/cloudfunctions.viewer or roles/viewer.
Use Secret Manager. The --set-secrets flag wires a Secret Manager secret version into the function at runtime, and the function's service account needs roles/secretmanager.secretAccessor scoped to the specific secret. On rotation, make a deliberate choice: pin an explicit version so the new value only ships with a redeploy, or reference the latest version and accept that running instances keep the old value until their next cold start.
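In gcloud terms, that is roughly the following; the secret, function, and service account names are placeholders.

```shell
# Scope the accessor role to the one secret, not the project.
gcloud secrets add-iam-policy-binding db-password \
  --member="serviceAccount:fn-orders-sa@my-project.iam.gserviceaccount.com" \
  --role="roles/secretmanager.secretAccessor"

# Pin an explicit version (here, 3) so rotation is a deliberate
# redeploy; use ":latest" instead if you accept the cold-start window.
gcloud functions deploy orders \
  --gen2 --region=us-central1 \
  --set-secrets="DB_PASSWORD=db-password:3"
```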
The audit trail for environment variable configuration is the Cloud Functions API call UpdateFunction, and the audit log records the new environment variables verbatim. I once reviewed a project where a database password had been visible in the Cloud Audit Logs for six months because someone set it via --set-env-vars and no one had thought about the log exposure. The secret value in logs was the reason we rotated the entire database credential, not just the function's.
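To check whether your own logs carry this exposure, a Logging query along these lines can surface UpdateFunction calls that included environment variables. Treat the request field path as an assumption to verify against a real log entry for your generation of the service.

```shell
# List recent function updates whose audit record includes environment
# variables, so any secret set via --set-env-vars shows up for review.
gcloud logging read '
  protoPayload.methodName=~"UpdateFunction"
  AND protoPayload.request.function.serviceConfig.environmentVariables:*
' --limit=50 --format=json
```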
Vulnerability scanning is not automatic
Artifact Registry vulnerability scanning runs on container images, and because Cloud Functions produces a container image under the hood, you can scan it. But the scanning is not on by default, and the Cloud Functions console does not surface scan results.
Enable Container Analysis on the Artifact Registry repository that Cloud Functions uses (typically gcf-artifacts in the project), and wire the vulnerability findings into your normal triage flow. The findings contain the function name in the image metadata, so you can map from CVE to deployed function with a reasonable query.
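Concretely, that means enabling the scanning API and querying occurrences on the images in the repository; the project name is a placeholder.

```shell
# Turn on scanning for images pushed to Artifact Registry.
gcloud services enable containerscanning.googleapis.com

# Pull vulnerability occurrences for the images Cloud Functions built.
gcloud artifacts docker images list \
  us-docker.pkg.dev/my-project/gcf-artifacts \
  --show-occurrences \
  --occurrence-filter='kind="VULNERABILITY"'
```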
Continuous scanning, which re-evaluates images as new CVE data arrives, was enabled by default on newly scanned images from November 2022 onward. For older functions, you may need to trigger a rescan by redeploying, which is awkward but the only supported path for images that were built before continuous scanning existed.
Deploy-time attestation and Binary Authorization
Binary Authorization for Cloud Functions entered preview in late 2023 and is generally available for second-generation functions in most regions as of mid-2024. The policy model mirrors the Cloud Run model: attestors signed by Cloud KMS keys, admission rules per project, and continuous validation.
For supply-chain protection, the useful attestors are the same as for Cloud Run: built-by-approved-builder, vulnerability-scanned, SBOM-generated. The wrinkle for Cloud Functions is that the image is built by Google's buildpack infrastructure, not your own Cloud Build pipeline. The built-by attestation has to come from Google's build service, which means the policy has to allowlist Google's builder identity. This is a genuine trust decision: you are trusting Google's build environment in a way you do not have to trust it for a custom Cloud Build pipeline.
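A minimal policy expressing that trust decision might look like this; the project and attestor names are placeholders, and the exact enforcement surface for Cloud Functions should be confirmed against the Binary Authorization documentation for your region.

```shell
# Require an attestation before any revision is admitted; block and
# audit-log anything unattested.
cat > policy.yaml <<'EOF'
defaultAdmissionRule:
  evaluationMode: REQUIRE_ATTESTATION
  enforcementMode: ENFORCED_BLOCK_AND_AUDIT_LOG
  requireAttestationsBy:
    - projects/my-project/attestors/built-by-approved-builder
globalPolicyEvaluationMode: ENABLE
EOF
gcloud container binauthz policy import policy.yaml
```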
The compromise I have landed on for regulated workloads is to not use the automatic buildpack path for production functions. Instead, build the container yourself in Cloud Build, generate your own attestations, and run the image as a Cloud Run service with the Functions Framework, since the Cloud Functions deploy path always rebuilds from source (--docker-repository only controls where the built image is stored, it does not accept a prebuilt image). You give up some of the convenience of Cloud Functions, but you get a supply-chain story you control end to end.
Trigger surface and the forgotten entry points
Cloud Functions can be triggered by HTTP, Pub/Sub, Cloud Storage, Firestore, and several other sources. Each trigger is an entry point that an attacker can reach if they control the upstream. A Cloud Storage trigger that runs on any object upload to a public bucket is a good example: a tampered object with a malicious payload can trigger the function as effectively as a legitimate workflow.
Audit trigger sources. For HTTP triggers, set --no-allow-unauthenticated unless you are explicitly building a public endpoint, and pair public endpoints with authentication at the application layer and rate limiting at the load balancer. For Cloud Storage triggers, ensure the bucket IAM is scoped and that the function's logic is robust against adversarial input. For Pub/Sub triggers, check the subscription's ack deadline and dead-letter configuration so that a poisoned message does not create a retry storm.
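Two of the checks above can be sketched directly; the function, subscription, and topic names are placeholders.

```shell
# HTTP: require IAM authentication on the endpoint.
gcloud functions deploy orders --gen2 --region=us-central1 \
  --trigger-http --no-allow-unauthenticated

# Pub/Sub: cap redelivery with a dead-letter topic so a poisoned
# message cannot turn into a retry storm.
gcloud pubsub subscriptions update orders-sub \
  --dead-letter-topic=orders-dlq \
  --max-delivery-attempts=5
```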
How Safeguard Helps
Safeguard inventories every Cloud Function across your GCP organization, tracks the builder and buildpack used to produce each deployed revision, and maps dependencies from lockfiles and the runtime SBOM against known-vulnerable and known-malicious packages. Runtime service account drift, secret exposure through environment variables, and Binary Authorization coverage gaps surface as findings against the specific function they affect. When a dependency used by a deployed function is disclosed to be compromised, Safeguard identifies the exact functions and revisions at risk and proposes a remediation path. The result is a supply-chain view of Cloud Functions that matches the fidelity you already have for your container workloads.