Lambda layers are one of those AWS features that started as a nice convenience and turned into a supply chain nightmare without anyone really noticing. They are code. They run inside your function execution environment. They have access to everything your function has access to. And most organizations I audit have a catalog of layers that nobody can fully explain, from accounts nobody remembers granting access to.
This post is about how layers become a supply chain risk, the specific patterns I have seen abused, and what a reasonable posture looks like without banning layers entirely.
What a layer actually is
A Lambda layer is a zip archive that Lambda unpacks into the /opt directory of your function's execution environment before your handler runs. That is it. If your runtime is Python and your layer contains python/lib/python3.11/site-packages/requests/, then import requests in your handler resolves to the layer's copy. If your runtime is Node and the layer contains nodejs/node_modules/lodash/, same story. If the layer contains an extensions/ directory, Lambda runs the extension as a separate process alongside your function, with access to the same execution environment.
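The directory convention is the whole contract, so it is worth seeing concretely. The sketch below builds a minimal Python layer zip in memory; `mylib` is a hypothetical module name used only for illustration.

```python
import io
import zipfile

# Build a minimal Python layer in memory. The only contract Lambda
# enforces is the directory layout: anything under python/ (or the
# longer python/lib/python3.11/site-packages/ form) ends up importable
# after the zip is unpacked into /opt.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
    # A trivial module; a real layer would vendor whole packages here.
    zf.writestr("python/mylib/__init__.py", 'VERSION = "1.0"\n')

layer_zip = buf.getvalue()
print(f"layer zip is {len(layer_zip)} bytes")
```

Upload that zip with PublishLayerVersion and `import mylib` resolves inside any function that attaches the layer. Nothing about the zip identifies who built it or how, which is the root of everything that follows.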
The critical point is that layers are just as much code as your handler. They run in the same process (or an adjacent process for extensions) with the same IAM role, the same environment variables, the same network access. A malicious layer is a full function compromise.
How layers get shared, and why that is the problem
Layers can be published in one account and consumed in another: the publisher grants cross-account access with the AddLayerVersionPermission API (aws lambda add-layer-version-permission in the CLI). This is the foundation of the entire layer ecosystem. Third-party vendors like Datadog, New Relic, and Lumigo publish layers in their own AWS accounts and grant lambda:GetLayerVersion to * or to specific organization IDs.
Customers reference those layers by ARN: arn:aws:lambda:us-east-1:464622532012:layer:Datadog-Extension:56. The account ID 464622532012 is Datadog's. When your function cold starts, Lambda pulls the layer zip from Datadog's account into your execution environment.
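The publisher's account ID is right there in the ARN, which makes it cheap to extract for auditing. A minimal parser, assuming the standard layer version ARN shape shown above:

```python
def layer_publisher(arn: str) -> str:
    """Return the AWS account ID embedded in a layer version ARN.

    Layer version ARNs have the shape:
    arn:aws:lambda:<region>:<account-id>:layer:<name>:<version>
    """
    parts = arn.split(":")
    if len(parts) < 8 or parts[5] != "layer":
        raise ValueError(f"not a layer version ARN: {arn}")
    return parts[4]

print(layer_publisher(
    "arn:aws:lambda:us-east-1:464622532012:layer:Datadog-Extension:56"
))  # 464622532012
```

Running this across every function configuration in an organization is the fastest way to build the publisher inventory discussed later.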
The trust model is: you trust Datadog's AWS account not to publish a malicious layer version under the same name. That trust is implicit, is not cryptographically verified, and persists across every cold start of every function that references the layer. If Datadog's publishing pipeline were compromised and a malicious version were published, every customer function referencing the layer without a version pin would pull the malicious code on its next deploy.
To AWS's credit, layer ARNs always include a version number, and published versions are immutable. You cannot overwrite Datadog-Extension:56. But you can publish Datadog-Extension:57 with malicious content, and any function whose deploy tooling resolves the latest version (which is most of them, because frameworks look up the latest and people re-copy ARNs from refreshed documentation) will pick up the new version on the next deploy.
The patterns I have seen in the wild
Orphaned shared layers. At one organization, a team published an internal layer in 2019 to package its favorite logging library. The team disbanded. The publishing account still exists. Two hundred and eleven functions across thirty-four accounts still reference the layer. Nobody knows who can publish new versions. Nobody has reviewed the layer in four years. The layer contains a vendored copy of a library that has had six CVEs since 2020.
Wildcard permissions. A third-party vendor published a layer with lambda:GetLayerVersion granted to Principal: "*". The vendor's blog post recommended this. Customers used it. Three years later the vendor was acquired and the account ownership became ambiguous.
Extension layers nobody asked for. Some observability vendors sell layers that customers install for legitimate reasons. Those layers are extensions. Extensions run as separate processes with access to the function's environment variables, including any secrets passed that way. If the extension phones home with the environment variables, which several have done for "diagnostic telemetry," your secrets are now in a third-party observability backend. This has happened at least twice in public incidents in the last three years.
Layers with compilation artifacts. A team's internal layer contained compiled C extensions. The build was done on a developer laptop, uploaded to the layer, and referenced by production functions. The build was not reproducible, nobody had the build environment, and when a CVE in the underlying C library required a rebuild, the team spent a week reconstructing the build process.
A reasonable layer policy
Pin every layer reference by full ARN including version. No floating references. When a new version is released, you explicitly bump the reference after reviewing the diff. Yes, this is more work. Yes, it is worth it.
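A pipeline can enforce the pin mechanically. The sketch below treats a reference as pinned only if it is a full layer version ARN ending in a numeric version; the regex assumes the standard ARN shape and standard layer naming (letters, digits, hyphens, underscores):

```python
import re

# A fully pinned reference names region, publisher account, layer name,
# and a numeric version. Anything else (a bare layer ARN, or a value a
# framework resolves to "latest" at deploy time) is floating.
PINNED_LAYER_ARN = re.compile(
    r"^arn:aws:lambda:[a-z0-9-]+:\d{12}:layer:[A-Za-z0-9_-]+:\d+$"
)

def is_pinned(arn: str) -> bool:
    """True only for a full layer version ARN with an explicit version."""
    return PINNED_LAYER_ARN.fullmatch(arn) is not None

assert is_pinned("arn:aws:lambda:us-east-1:464622532012:layer:Datadog-Extension:56")
assert not is_pinned("arn:aws:lambda:us-east-1:464622532012:layer:Datadog-Extension")
```

Run it against every Layers entry in every function definition before deploy; fail the build on any floating reference.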
Restrict which accounts can publish layers you consume. Maintain an allowlist of layer publisher account IDs. Your deploy pipeline rejects function definitions that reference layer ARNs from accounts outside the allowlist. This is a twenty-line check and catches most drift.
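The "twenty-line check" really is about twenty lines. A sketch, where the first allowlisted account ID is a placeholder for your own tooling account (Datadog's public account ID comes from their documentation, as quoted earlier):

```python
# Approved layer publisher account IDs. The first entry is a
# placeholder; the second is Datadog's public layer account.
APPROVED_PUBLISHERS = {
    "111111111111",  # placeholder: your internal tooling account
    "464622532012",  # Datadog
}

def check_layers(function_def: dict) -> list[str]:
    """Return a violation message for each layer from an unapproved account."""
    violations = []
    for arn in function_def.get("Layers", []):
        account = arn.split(":")[4]  # arn:aws:lambda:region:ACCOUNT:layer:name:ver
        if account not in APPROVED_PUBLISHERS:
            violations.append(f"layer from unapproved account {account}: {arn}")
    return violations

bad = check_layers({
    "FunctionName": "orders-api",  # hypothetical function
    "Layers": ["arn:aws:lambda:us-east-1:999999999999:layer:mystery:3"],
})
print(bad)
```

Wire it into the same pipeline stage as the version-pin check and reject the deploy on any non-empty result.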
Treat internal layers like any other code. They have a source repository, a build pipeline, an SBOM, a signature. The layer zip uploaded to Lambda is produced by the same pipeline that produces your other artifacts, with the same controls. If your internal layer publishing is a one-off script someone runs from their laptop, you have a supply chain gap.
Audit extension layers specifically. Extensions can read environment variables and make network calls over the function's egress path. Every extension layer in use needs an explicit review of what it does and what data it sends out of your account. Most vendors document this. Read the documentation. If the vendor cannot tell you what data leaves your account, do not run their extension.
Use lambda:GetLayerVersion conditions in your organization SCP. You can restrict which layer ARNs can be referenced at all, via SCP. A common pattern: allow layers from your own org accounts, from a specific list of approved vendor account IDs, and deny everything else. This prevents a developer from introducing a layer from an unknown account without explicit security review.
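A sketch of the SCP pattern, written from memory of AWS's documented approach: deny function create/update calls whenever any attached layer comes from outside the allowlist, using the lambda:Layer condition key. The first account ID is a placeholder for your own org; verify the condition key and the ForAnyValue:StringNotLike operator against current AWS documentation before deploying.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyUnapprovedLayerPublishers",
      "Effect": "Deny",
      "Action": [
        "lambda:CreateFunction",
        "lambda:UpdateFunctionConfiguration"
      ],
      "Resource": "*",
      "Condition": {
        "ForAnyValue:StringNotLike": {
          "lambda:Layer": [
            "arn:aws:lambda:*:111111111111:layer:*:*",
            "arn:aws:lambda:*:464622532012:layer:*:*"
          ]
        }
      }
    }
  ]
}
```

Because it is a deny on the configuration calls rather than on GetLayerVersion itself, it blocks the layer at attach time, which is where you want the failure to surface.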
The verification problem
Unlike container images in ECR, layers have no signature verification. You cannot cryptographically verify that Datadog-Extension:56 is the layer Datadog intended to publish. You can verify that it is the layer at that specific ARN and version, because AWS enforces immutability, but you cannot verify the link to the publisher's intent.
The practical mitigation is that when you first adopt a layer, you download the zip, compute its SHA-256, and record the hash in your configuration management. On every deploy, your pipeline compares the hash of the currently published layer version against the recorded hash. If they differ, the pipeline fails. If the publisher releases a new version, the hash check forces you to explicitly update the recorded hash after reviewing the new version.
This is more work than it should be. It is also the only way to get any integrity assurance on third-party layers today.
How Safeguard helps
Safeguard inventories every Lambda layer referenced by every function in your AWS organization, flags references that lack a version pin, and identifies layers published from accounts outside your approved publisher list. When a layer version changes, Safeguard compares the contents against the previous version and surfaces new dependencies, new network endpoints, and changed file hashes for review. For internal layers, Safeguard ties the layer zip back to the source commit and the build pipeline that produced it, so you can answer "where did this code come from" without a forensic exercise.