Dev containers finally hit the mainstream somewhere around 2024. Every major editor supports them, every cloud IDE ships with a flavor of them, and most platform teams have quietly standardized on Dockerfiles or devcontainer.json as the unit of "developer environment." The productivity win is real. The security posture of the average dev container, however, is somewhere between "hopeful" and "actively negligent."
This post is the senior-engineer take on what is actually in your dev container and how to stop the parts you do not control from becoming the soft underbelly of your development organization.
Why does a dev container need a security model at all?
Because it has the same blast radius as a production container, minus the scrutiny. A dev container holds your source code, your cloud credentials, your SSH keys, your VCS tokens, and whatever else your tooling has logged into this week. It has a persistent connection to your VCS, it runs code from your project on every save, and in cloud IDE setups it lives on infrastructure you share with other engineers.
The gap between dev and prod security is cultural, not technical. Teams lock down production container images with SBOMs, signing, scanning, and rebuilds on CVE publication. The same team happily pulls a dev container base image tagged latest from a personal account on Docker Hub, installs a dozen VS Code extensions from a marketplace they have never audited, and clones a personal dotfiles repository over the top.
The attack surface is at least as large as production, and the compensating controls are mostly absent. An attacker who compromises a dev container gets the developer's code, their secrets, their push access, and in a cloud IDE setup, a foothold inside the platform provider's infrastructure.
What is the full list of inputs that end up inside a dev container?
More than most teams realize. The obvious one is the base image, whatever your Dockerfile or devcontainer.json references as its starting point. Pinning this to a SHA with an attested SBOM is where most teams stop, but the real attack surface is the long tail that gets layered on after.
Dev container features are reusable install scripts hosted in third-party repositories. A typical devcontainer.json references three to eight of them, for things like "install Node.js," "set up the GitHub CLI," or "configure Zsh." Each one executes arbitrary shell as root in the container at build time. By default they are referenced by mutable tags, which means they suffer the same mutability problem as unpinned GitHub Actions: the tag can move underneath you without any change in your repo.
Dotfile repositories are personal configuration pulled from each developer's GitHub profile when the container starts. Whatever scripts, aliases, or binaries the developer's dotfiles install end up in the container, and the platform team usually has zero visibility into what is there. I have seen dotfile repos that auto-install outdated language runtimes, auto-authenticate to personal cloud accounts, and auto-sync data to third-party services.
Editor extensions are the next layer. They run with the editor's privileges inside the container, get access to the source tree, and communicate over a socket with the host. A malicious extension can read every file in the project and exfiltrate it the moment the developer connects.
Then the usual suspects: package manager dependencies, lockfile installs, and whatever postCreateCommand runs after the container boots. Each of these is a trust boundary that deserves its own audit.
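One cheap narrowing of that last boundary: run lockfile installs with lifecycle scripts disabled, so postCreateCommand cannot become a vehicle for arbitrary install-time code. A minimal sketch for a Node.js repo (the command choice is illustrative, not prescriptive):

```json
{
  "postCreateCommand": "npm ci --ignore-scripts"
}
```

npm ci installs exactly what the lockfile says, and --ignore-scripts keeps postinstall hooks from executing; the handful of dependencies that genuinely need build scripts get reviewed and enabled case by case.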
How should I pin and verify the base image and features?
Treat the base image like any production image. Pin to a content-addressed digest, not a tag. Generate an SBOM at image build time, sign it, and store the attestation alongside the image in your registry. Consume the image through an internal mirror or your own registry so a supply chain compromise upstream cannot silently retag your base.
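A minimal sketch of that build-and-attest flow, assuming syft and cosign are installed and registry.internal.example is a placeholder for your internal registry:

```sh
# Build and push, then resolve the immutable digest of the pushed image.
docker build -t registry.internal.example/dev/base:2024.06 .
docker push registry.internal.example/dev/base:2024.06
DIGEST=$(docker inspect --format '{{index .RepoDigests 0}}' \
  registry.internal.example/dev/base:2024.06)

# Generate an SPDX SBOM and attach it to the image as a signed attestation.
syft "$DIGEST" -o spdx-json > sbom.spdx.json
cosign attest --predicate sbom.spdx.json --type spdxjson "$DIGEST"

# Sign the image itself so downstream consumers can verify provenance.
cosign sign "$DIGEST"
```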
For features, pin them to commit SHAs in your devcontainer.json. The ghcr.io/devcontainers/features/* pattern supports digest pinning; use it. For features from third-party sources, fork them into your internal registry, review the installation scripts, and update the fork on your own cadence. This sounds heavy until the first time a third-party feature ships a compromised version and you realize every developer's environment pulled it overnight.
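Put together, a pinned manifest looks roughly like this (digests are truncated placeholders, and registry.internal.example stands in for your internal registry):

```json
{
  "name": "backend-dev",
  "image": "registry.internal.example/dev/base@sha256:9f2c…",
  "features": {
    "ghcr.io/devcontainers/features/node@sha256:4a7b…": { "version": "20" },
    "registry.internal.example/features/github-cli@sha256:b01d…": {}
  }
}
```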
Build your own base image that pre-installs the common features for your organization, and publish it as the default. Most teams spend months adding features piecemeal when a curated internal base image would give them a smaller, more auditable surface with less per-repo configuration. The platform team owns one image with one SBOM; engineers consume it through a one-line reference.
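A sketch of what that internal base might look like; the upstream image and package list are examples, not a recommendation:

```dockerfile
# Digest-pinned upstream: rebuilds are deliberate, never accidental.
FROM mcr.microsoft.com/devcontainers/base@sha256:<pinned-digest>

# Pre-install the tools most repos were pulling in as separate features.
RUN apt-get update \
    && apt-get install -y --no-install-recommends git zsh jq shellcheck \
    && rm -rf /var/lib/apt/lists/*

# Default to the non-root user the upstream image already defines.
USER vscode
```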
What do I do about dotfiles without destroying developer happiness?
This is the hard one, because dotfiles are deeply personal and any policy that reads as "you cannot use your shell setup" will be bypassed within a week. The goal is to reduce the trust placed in dotfile execution, not to forbid it.
Start by documenting what dotfile injection does and does not have access to. Most developers do not realize that a dotfile script in a dev container runs as root, can read the workspace, and can modify the shell environment for every subsequent command, including credentialed operations. Once people understand the scope, they self-regulate.
Provide an internal "safe dotfiles" template repository that ships the common aliases and shell theming your engineers actually want, backed by your platform team. Engineers who want more can fork it, and their forks get audited like any other dependency. Engineers who do not want to think about it get a sensible default for free.
For higher-risk environments, disable dotfile injection entirely in favor of a container-level customization layer. This trades some personalization for a hard boundary; it is worth it for environments that touch production credentials or customer data. Make the tradeoff explicit so engineers know why.
How do I monitor dev container posture continuously?
Inventory first. You cannot secure what you cannot see, and most platform teams genuinely do not know what base images, features, and extensions their fleet uses. Ship a lightweight agent or a VCS-side scanner that reads every devcontainer.json and Dockerfile in every repository and builds a central inventory. Refresh it weekly.
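For GitHub-hosted organizations, the scanner can be embarrassingly simple to start. A sketch using the GitHub CLI, with "acme" as a placeholder org name:

```sh
#!/usr/bin/env bash
set -euo pipefail
mkdir -p inventory

# Pull every repo's devcontainer manifest into a central inventory.
gh repo list acme --limit 1000 --json nameWithOwner -q '.[].nameWithOwner' |
while read -r repo; do
  # Repos without a manifest 404; that absence is itself inventory data.
  gh api "repos/$repo/contents/.devcontainer/devcontainer.json" \
      -H "Accept: application/vnd.github.raw" \
    > "inventory/${repo//\//_}.json" 2>/dev/null || true
done
```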
From the inventory, compute posture against a baseline. Which repos are on a pinned, approved base? Which features are unpinned? Which extensions are on your allowlist and which are not? A dashboard that shows posture per team is far more effective at driving change than any policy document, because it gives leads a number they care about.
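The posture computation can start as a jq pass over that inventory. This sketch only checks digest pinning, but the same shape extends to extension allowlists; it assumes the manifests are plain JSON (devcontainer.json permits comments, so a stricter pipeline would strip them first):

```sh
# Emit one line per unpinned image or feature reference.
for f in inventory/*.json; do
  jq -r --arg file "$f" '
    [ .image // empty ] + ((.features // {}) | keys)
    | .[]
    | select(contains("@sha256:") | not)
    | $file + ": unpinned ref " + .
  ' "$f"
done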
Scan the running containers, not just the manifests. A dev container can drift from its declaration through postCreateCommand side effects, user-installed packages, and cached filesystem state. Periodic scans of active environments catch the drift that static analysis misses. In cloud IDE setups this is cheaper because the platform controls the runtime; in local dev you need the developer's cooperation, which is another reason to invest in a good developer experience around the scanner.
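For local Docker-based containers, `docker diff` is a crude but honest drift signal: it lists every file added, changed, or deleted since the container was created, which is exactly the delta between manifest and reality. A sketch; the label filter matches what the VS Code Dev Containers extension sets on its containers, which is an assumption worth verifying against your own fleet:

```sh
# Report filesystem drift in every running dev container,
# filtering out high-churn noise paths.
for c in $(docker ps -q --filter "label=devcontainer.local_folder"); do
  echo "== drift in $c =="
  docker diff "$c" | grep -vE '^[AC] /(tmp|home/[^/]+/\.cache)' || true
done
```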
How Safeguard.sh Helps
Safeguard.sh inventories every dev container manifest, feature reference, and dotfile dependency across your organization, and applies reachability analysis to rank which ones actually touch production-adjacent secrets. Griffin AI reviews proposed changes to devcontainer.json files, generates SBOMs per environment, and flags vendors entering your developer fleet that matter for third-party risk management (TPRM) before they become standardized. Our layer-level scanning runs against the image layers your engineers actually use, and container self-healing republishes your internal base images on a clean lineage the moment a critical finding lands upstream.