The Jenkins controller had been running for nine years when I walked into the first meeting in late January 2024. Nobody knew exactly how many jobs it ran, because the answer changed depending on who you asked. The platform team said 4,200. The dashboard said 5,800. A find across the JENKINS_HOME directory returned 7,311 config.xml files. The security team, which had invited me in, simply said "too many."
The decision to migrate to GitHub Actions had already been made for non-security reasons -- licensing, operational toil, and the fact that the last person who really understood the Groovy shared library had left the company in 2022. My job was to make sure the move did not break the supply chain controls the company had spent three years building on top of Jenkins. This is a story about what broke anyway, what we caught in time, and what we would do differently.
The Starting Point
The existing Jenkins estate had accumulated security controls the way an old house accumulates paint. There was a plugin that signed all artifacts with an internal CA. There was a shared library that injected an SBOM step into every pipeline. There was a Groovy script that blocked builds if the declared base image did not match an approved list. None of these were documented. All of them mattered.
Before writing a single workflow file, we spent six weeks doing what I now call "control archaeology." For every Jenkinsfile in the top 200 repositories by deployment frequency, we traced which security steps ran, which ones were bypassed, and which ones were quietly failing open. The results were not flattering. About 31% of production pipelines had the SBOM step commented out, often with a TODO from 2021. Another 12% were signing artifacts with a key that had expired in August 2023 -- Jenkins was happy to keep signing with an expired certificate and nobody had noticed.
We documented every control we found, then sorted them into three buckets: keep and port, modernize, and retire. The "retire" bucket included two custom plugins that no longer had maintainers. The "modernize" bucket included the SBOM generation step, which had been using a tool that only supported Maven and ignored the team's newer Go and Rust services entirely.
Runner Topology Is a Security Decision
The first real security decision in any Jenkins-to-Actions migration is what kind of runners to use. GitHub-hosted runners are convenient. They are also shared infrastructure, and they come with trust implications you need to reason about explicitly.
For most workloads, GitHub-hosted runners were fine: each runner is ephemeral, holds no persistent state, and egresses from GitHub's published IP ranges, which we could filter on. For workloads that touched production secrets or built release artifacts, we used self-hosted runners in a dedicated AWS account with no direct access to anything except the build-artifacts S3 bucket. We ran GitHub's Actions Runner Controller on EKS, with runners destroyed after every job.
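For a sense of what that configuration looks like, here is a minimal sketch using ARC's RunnerDeployment CRD -- the organization and label names are placeholders, and newer ARC releases use a different scale-set API, so treat this as illustrative rather than a drop-in config:

```yaml
# RunnerDeployment for the locked-down release builders (names are illustrative).
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: release-builders
  namespace: actions-runners
spec:
  replicas: 4
  template:
    spec:
      organization: example-org
      labels:
        - release-builder
      # Each pod registers, runs exactly one job, then is torn down,
      # so no state survives between builds.
      ephemeral: true
```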
The temptation when migrating is to reuse the old Jenkins agents as self-hosted Actions runners. Do not do this. Those agents have years of accumulated state, cached credentials, installed tools from three different package managers, and in at least one case I found, an SSH key pair that had been used to debug a deployment in 2019 and never cleaned up. Start with clean, ephemeral runners. It costs more in compute. It saves you the next breach investigation.
Secrets Are the Hardest Part
Jenkins credentials behave differently from GitHub Actions secrets, and the differences matter. Jenkins has credential scopes -- system, global, folder, and per-job. It has the Credentials Binding plugin that injects them as environment variables. It has the withCredentials wrapper that masks them in logs. It has, if you inherited a real Jenkins estate, at least one credential that was pasted into a build parameter as plaintext by someone in 2018 who did not know any better.
GitHub Actions has repository secrets, environment secrets, and organization secrets. It does log masking, but the masking is string-based and can be bypassed by things as simple as base64-encoding the secret. The scoping model is different enough that a direct mapping does not exist.
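To make that failure mode concrete: masking redacts exact string matches in the log output, so any transformation of the value defeats it. A minimal illustration, with a hypothetical secret name:

```yaml
steps:
  - name: Masked as expected
    run: echo "$API_TOKEN"           # printed as *** in the log
    env:
      API_TOKEN: ${{ secrets.API_TOKEN }}
  - name: Not masked at all
    run: echo "$API_TOKEN" | base64  # prints an encoded, trivially decodable copy
    env:
      API_TOKEN: ${{ secrets.API_TOKEN }}
```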
What we did was build a secret inventory first, from Jenkins, without touching Actions at all. For every credential, we answered three questions: which jobs use it, what does it authenticate to, and when was it last rotated. The answer to the third question was usually "never," which gave us a forcing function to rotate during migration. By the time we got to creating Actions secrets, we had already cut the secret count by 60% through consolidation and retirement of dead credentials.
For the remaining secrets, we used environment-scoped secrets tied to deployment environments with required reviewers. A workflow deploying to production cannot access the production database password until a human approves the run. This was not possible in our Jenkins setup without a third-party plugin we had stopped trusting.
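In workflow terms the gate looks like this -- a sketch with hypothetical names, assuming a production environment already configured with required reviewers:

```yaml
on:
  push:
    tags: ['v*']

jobs:
  deploy:
    runs-on: ubuntu-latest
    # The run pauses here until a required reviewer approves;
    # the environment's secrets are only released after approval.
    environment: production
    steps:
      - name: Deploy
        run: ./scripts/deploy.sh   # hypothetical deploy script
        env:
          DB_PASSWORD: ${{ secrets.PROD_DB_PASSWORD }}
```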
Workflow Supply Chain Hygiene
The most common security regression I see in Jenkins-to-Actions migrations is uses: some-org/some-action@main. In Jenkins, your shared library was usually pinned to a branch or tag, but the library lived in your own source control. In Actions, you are importing code from GitHub.com directly, often from third parties you have no relationship with.
We wrote a policy early: every uses: reference must pin to a full commit SHA, not a tag. Tags can be moved. Commits cannot. We enforced this with a required check that runs on every pull request to the workflows directory. The check fails if any action reference is not a 40-character SHA. For our own internal actions, we also required that they live in a specific organization and have branch protection enabled on the default branch.
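In practice a compliant reference looks like this -- the SHA shown is illustrative, so resolve the tag yourself and record it in a trailing comment so reviewers can still read the reference:

```yaml
steps:
  # Pinned to an immutable commit; the comment keeps the human-readable
  # version. Tags can be retargeted, so a tag alone is never enough.
  - uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1
```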
The second policy was about permissions. Jenkins jobs inherit the permissions of the agent, which in practice meant "a lot." GitHub Actions defaults to a broad GITHUB_TOKEN permission set that includes write access to the repository. We set permissions: {} at the workflow level and opted into specific scopes per job. Most jobs need contents: read and nothing else. Jobs that sign artifacts need id-token: write for OIDC. Jobs that comment on PRs need pull-requests: write. Nothing else.
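A sketch of the resulting layout, with illustrative job names and build targets:

```yaml
permissions: {}   # workflow level: grant the GITHUB_TOKEN nothing by default

on: push

jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      contents: read            # enough to check out the repository
    steps:
      - uses: actions/checkout@v4   # pin to a full SHA per the policy above
      - run: make build             # hypothetical build target
  sign:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      id-token: write           # mint an OIDC token for artifact signing
    steps:
      - run: make sign              # hypothetical signing target
```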
SBOM Continuity
The Jenkins shared library generated SBOMs for Java, and only Java, and only for projects that remembered to invoke it. The migration was an opportunity to fix this. We built a composite action that runs SBOM generation for whatever ecosystem the repository uses -- detected by looking at the manifest files -- and uploads the SBOM as a workflow artifact. Every release workflow calls this composite action before publishing.
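The article does not name the generator we settled on; here is a minimal sketch of such a composite action using Syft, which infers the ecosystem (Maven, Go, Rust, npm, and others) from the manifest files it finds:

```yaml
# action.yml for the composite SBOM action (tool choice and names are assumptions)
name: generate-sbom
description: Generate and upload a CycloneDX SBOM for whatever ecosystems the repo contains
runs:
  using: composite
  steps:
    - name: Generate SBOM
      shell: bash
      run: |
        # Syft detects the package ecosystems from manifest files in the tree.
        curl -sSfL https://raw.githubusercontent.com/anchore/syft/main/install.sh \
          | sh -s -- -b /usr/local/bin
        syft . -o cyclonedx-json=sbom.cdx.json
    - name: Upload SBOM
      uses: actions/upload-artifact@v4   # pin to a full commit SHA in real use
      with:
        name: sbom
        path: sbom.cdx.json
```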
We paired that with continuous ingestion into our SBOM platform, so that the SBOM did not just get generated and archived. It got correlated against known vulnerabilities within minutes, and findings above a threshold would block the release. The key insight was that the migration was the cheapest time to add this control. Developers were already changing their pipelines. One more required step was invisible in the noise.
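The blocking gate itself lived in our platform, but the pattern is reproducible with open-source tooling. A stand-in sketch using Grype against the SBOM produced above, assuming the threshold is "high":

```yaml
steps:
  - name: Vulnerability gate
    run: |
      # A finding at or above the threshold fails the job, which blocks the release.
      curl -sSfL https://raw.githubusercontent.com/anchore/grype/main/install.sh \
        | sh -s -- -b /usr/local/bin
      grype sbom:sbom.cdx.json --fail-on high
```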
How Safeguard Helps
Safeguard gives migration teams the visibility they need to avoid losing supply chain controls during a CI/CD platform change. The platform ingests SBOMs from both Jenkins and GitHub Actions pipelines in parallel, so you can prove the new workflow produces equivalent or better coverage before you cut over. Policy gates enforce signing, pinning, and SBOM generation requirements as required checks, catching regressions before they reach production. Component provenance tracking works across CI systems, so the handoff from legacy to modern pipelines does not create a gap in the audit trail. For teams mid-migration, Safeguard is the constant that makes the transition measurable rather than a leap of faith.