Jenkins has been around since 2011, and despite years of predictions that it would be displaced by GitHub Actions, GitLab CI, or CircleCI, the large Java enterprises I work with still run most of their builds through it. The Maven-Jenkins pairing in particular is entrenched: Jenkins has first-class Maven support through the Maven Integration plugin and the Pipeline Maven Integration plugin, plus thousands of legacy freestyle jobs that nobody wants to touch. That entrenchment is fine. The problem is that the default Jenkins-Maven configuration has accumulated nearly two decades of default-insecure decisions (counting the Hudson years), and most teams have not revisited those decisions since the original setup.
This post walks through the specific things I change on every Jenkins-Maven environment I audit, the CVEs you should have already patched, and the configuration patterns that let you sleep through the next Log4Shell-scale event.
The CVEs you need to have already handled
Before you tune anything, confirm you are not running vulnerable versions of the pieces. Jenkins LTS 2.440.3, released in April 2024, patched CVE-2024-28159 and CVE-2024-28160 (arbitrary file read and cross-site request forgery in the Git plugin). Jenkins 2.426.3 LTS, from January 2024, patched CVE-2024-23897, the unauthenticated file read in the CLI that was actively exploited within a week of disclosure. If your controller is still on 2.414 LTS or earlier, stop reading this and go patch.
On the Maven side, Maven 3.8.1 (April 2021) was the cutoff that fixed the plain-HTTP repository issue (CVE-2021-26291). Maven 3.9.0 (January 2023) dropped support for Java 7 at runtime, which matters because several of the Maven-ecosystem CVEs only affect the old bytecode paths. Maven 3.9.6 was the current stable as of early 2024. Pin your build agents to 3.9.x or 3.8.8 at minimum. If any job on your controller invokes mvn without a version-pinned tool installation, it will silently use whatever Maven is on the agent's PATH, which is how you end up with production builds running on Maven 3.6.3 in 2024.
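A version-pinned tool installation closes the PATH-fallback gap described above. A minimal declarative-pipeline sketch, assuming a Maven installation named maven-3.9.6 has been defined under Manage Jenkins > Tools (the name is an assumption; use whatever your controller calls it):

```groovy
// Declarative pipeline sketch; 'maven-3.9.6' is a hypothetical tool name
// configured in Global Tool Configuration.
pipeline {
    agent any
    tools {
        // Resolves the pinned installation and prepends it to PATH,
        // so 'mvn' below cannot silently fall back to whatever Maven
        // happens to be installed on the agent.
        maven 'maven-3.9.6'
    }
    stages {
        stage('Build') {
            steps {
                // -V prints the resolved Maven version into the build log,
                // which makes version drift visible in one grep.
                sh 'mvn -V -B clean verify'
            }
        }
    }
}
```

Logging the version with -V is cheap insurance: it turns "which Maven built this?" into a log search rather than an archaeology project.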
The Pipeline Maven Integration plugin itself has a CVE history worth noting: CVE-2023-41946 and CVE-2023-41947 (September 2023) covered XXE and path traversal, patched in 1357.v8d2b_a_3a_ef12a. If you are on an older version, the plugin has access to your job's credentials scope, so treat the upgrade as urgent.
Credential scoping is where most incidents start
Most Jenkins-Maven environments I audit fail the same way: a global credential, typically an Artifactory or Nexus deploy token, is bound to the Jenkins instance at the top level and every job can reference it. The credential is used correctly by the release jobs, but it is also readable by any job that can run a pipeline step. A compromised build script in any one job can exfiltrate the credential and use it to push poisoned artifacts to your repository.
The fix is boring and it works. Scope every credential to the folder that needs it, not the root. Use the Folder Credentials plugin (1.0.15 or later) to bind the Artifactory deploy token to only the release-job folder. Use the Credentials Binding plugin with usernamePassword bindings so the credential is injected into an environment variable for exactly the step that needs it, and nowhere else. Turn on the "restrict credentials usage" option that was added in Credentials plugin 2.6.1.
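Step-scoped binding can be sketched as follows. The credential ID nexus-deploy-token is a hypothetical example, stored at the release folder level rather than globally:

```groovy
// Sketch of step-scoped credential injection via the Credentials Binding
// plugin; 'nexus-deploy-token' is a hypothetical folder-scoped credential ID.
pipeline {
    agent any
    stages {
        stage('Deploy') {
            steps {
                withCredentials([usernamePassword(
                        credentialsId: 'nexus-deploy-token',
                        usernameVariable: 'DEPLOY_USER',
                        passwordVariable: 'DEPLOY_PASS')]) {
                    // The variables exist only inside this block and are
                    // masked if they appear in the console log.
                    sh 'mvn -B deploy'
                }
            }
        }
    }
}
```

The point is the blast radius: a compromised build step in any other stage, or any other job outside the folder, never sees the token at all.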
For Maven specifically, the settings.xml that Jenkins renders should reference the credential through the <server> block using Jenkins variable substitution, not through hard-coded secrets. The Config File Provider plugin 3.11.1+ does this cleanly: the settings.xml is stored centrally, it references a credential ID, and the credential ID is resolved at job runtime with the appropriate scope check.
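The managed settings.xml ends up looking roughly like this; the server id internal-releases is a placeholder, and the actual username and password come from the credential mapped to that id in the managed file's configuration, never from the XML itself:

```xml
<!-- Sketch of a Config File Provider-managed settings.xml. The server id
     must match the <distributionManagement> repository id in the POM;
     'internal-releases' is a hypothetical example. -->
<settings>
  <servers>
    <server>
      <id>internal-releases</id>
      <!-- No <username>/<password> here: the Jenkins credential mapped to
           this server id is injected at job runtime, with the credential's
           folder scope enforced at resolution time. -->
    </server>
  </servers>
</settings>
```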
The settings.xml rendering trap
Speaking of settings.xml: the default Maven Integration plugin behavior is to merge the job's settings.xml with the Maven installation's settings.xml. This means that a repository declaration in the installation-level settings.xml applies to every job on that agent, and most teams put their Artifactory mirror there. That is fine until someone configures a job to override the settings.xml and omits the <mirrors> block. Now the job resolves directly from Maven Central, bypassing the mirror, and anything you thought you had mirrored, quarantined, or vetted is no longer enforced.
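For reference, the mirror declaration at stake here typically looks like this; the id, repository name, and URL are placeholders for your own Artifactory or Nexus instance:

```xml
<!-- Sketch of the installation-level mirror that a job-level settings.xml
     override silently discards. All remote resolution is routed through
     the internal mirror, except the internal repo itself. -->
<settings>
  <mirrors>
    <mirror>
      <id>corp-mirror</id>
      <!-- '*' matches every repository; '!internal-repo' excludes the
           internal one so it is still reached directly -->
      <mirrorOf>*,!internal-repo</mirrorOf>
      <url>https://artifactory.example.com/artifactory/maven-remote</url>
    </mirror>
  </mirrors>
</settings>
```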
The hardening pattern is to use a Config File Provider-managed settings.xml for every build, with a required mirror block and a mirrorOf set to *,!internal-repo. Any job that tries to override this gets flagged by a pre-build validation step. In practice, I implement this as a shared library Groovy helper that every Jenkinsfile must invoke before the first Maven step; if the settings.xml does not match the expected hash, the build fails.
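A shared-library helper along these lines does the hash check; vars/verifySettings.groovy is a hypothetical name, and the sha256 step assumes the Pipeline Utility Steps plugin is installed:

```groovy
// vars/verifySettings.groovy (hypothetical shared-library global variable).
// Call before the first Maven step; fails the build if the rendered
// settings.xml differs from the approved, centrally managed version.
def call(String settingsPath, String approvedSha256) {
    // sha256() is provided by the Pipeline Utility Steps plugin
    def actual = sha256(file: settingsPath)
    if (actual != approvedSha256) {
        error("settings.xml hash ${actual} does not match approved " +
              "${approvedSha256}; refusing to build")
    }
}
```

A Jenkinsfile would then invoke verifySettings('jenkins-settings.xml', '<expected-hash>') as its first step, so a tampered or overridden settings.xml fails fast instead of quietly resolving from the wrong place.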
Agent isolation and the mvn-agent monoculture
Jenkins agents run in a wildly varying range of configurations. The worst pattern I see is a pool of long-lived agents, each with cached Maven local repositories at ~/.m2/repository, shared across every job that lands on that agent. The local repository becomes a secondary build cache that persists poisoned artifacts across jobs. A job from a compromised branch that pulls a malicious plugin caches it under ~/.m2, and the next job on the same agent uses the cached version without re-verifying anything.
The fix is ephemeral agents. Use the Kubernetes plugin, the Docker Cloud plugin, or EC2 Fleet, and configure the agent pods to be disposed after each build. The cold-start cost is real, maybe 40 seconds of Maven dependency download on a cold agent, but the security win is worth it. If ephemeral agents are not feasible for your environment, at minimum use job-scoped -Dmaven.repo.local=${WORKSPACE}/.m2 so that each build has its own local repo that is cleaned with the workspace.
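The workspace-scoped fallback can be sketched like this; cleanWs assumes the Workspace Cleanup plugin:

```groovy
// Sketch: per-build Maven local repository inside the workspace, deleted
// with the workspace so cached artifacts never survive into the next build.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // Redirect the local repo away from the shared ~/.m2
                sh 'mvn -B -Dmaven.repo.local="$WORKSPACE/.m2/repository" clean verify'
            }
        }
    }
    post {
        always {
            cleanWs()   // Workspace Cleanup plugin; removes the .m2 cache too
        }
    }
}
```

This trades download time for isolation on every build, which is exactly the trade the shared-cache pattern gets backwards.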
Plugin pinning and the Maven snapshot problem
Maven has two version conventions: releases, which are immutable, and snapshots, which are mutable by definition. The snapshot semantics in Maven were designed for intra-team development and they never made sense for production. A build that depends on internal-lib:1.2.3-SNAPSHOT can produce different bytes on two different days with no commit in between. For supply chain defensibility, this is intolerable.
Configure Jenkins-driven builds to reject snapshot dependencies in production branches. The Maven Enforcer plugin's requireReleaseDeps rule does this at the Maven layer; a Jenkins-level check in the Jenkinsfile's pre-deploy stage reinforces it. The same applies to plugins: requirePluginVersions plus an allowlist of approved plugin versions means that a committed change to a plugin version is reviewable, and an attacker who adds a malicious plugin version in a transitive dependency cannot quietly replace what you thought you were running.
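The Maven-layer half of this can be sketched as an Enforcer execution in the release-branch profile; the version shown was a recent Enforcer release at the time of writing:

```xml
<!-- Enforcer sketch for release branches: fail the build on SNAPSHOT
     dependencies and on unpinned or SNAPSHOT plugin versions. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-enforcer-plugin</artifactId>
  <version>3.4.1</version>
  <executions>
    <execution>
      <id>enforce-release-hygiene</id>
      <goals><goal>enforce</goal></goals>
      <configuration>
        <rules>
          <requireReleaseDeps>
            <message>No SNAPSHOT dependencies on release branches</message>
          </requireReleaseDeps>
          <requirePluginVersions>
            <banSnapshots>true</banSnapshots>
          </requirePluginVersions>
        </rules>
      </configuration>
    </execution>
  </executions>
</plugin>
```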
The Jenkins CLI and SSH key discipline
The Jenkins CLI, reachable over the controller's built-in SSH server (disabled by default, but commonly enabled) or over HTTP, is an authenticated administrative channel. It has been the source of multiple severe CVEs, including the January 2024 CVE-2024-23897 that allowed unauthenticated file reads. Even on a patched controller, the CLI is an attack surface. Disable it on the controller unless you explicitly need it, which most teams do not. The disable is a single Java system property: -Djenkins.CLI.disabled=true.
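On a systemd-managed controller, the property goes into a service override; the path below assumes the standard jenkins.service unit shipped by the official packages:

```shell
# Sketch: disable the Jenkins CLI via a systemd override.
# Contents of /etc/systemd/system/jenkins.service.d/override.conf:
#   [Service]
#   Environment="JAVA_OPTS=-Djenkins.CLI.disabled=true"
sudo systemctl daemon-reload
sudo systemctl restart jenkins
```

After the restart, CLI requests against /cli are refused, which removes the channel entirely rather than relying on it staying patched.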
Audit logging that actually helps
The Audit Trail plugin, 3.19 or later, logs administrative actions and job configuration changes. Most teams install it, watch it fill a disk, and then ignore the output. The useful pattern is to forward the audit log to your SIEM and set alerts on specific events: credential creation, credential binding changes, plugin installation or upgrade, node configuration changes, and script approval. Each of these is infrequent, and each is a leading indicator of either a legitimate configuration change that should correlate with a change ticket, or an attacker moving through the controller.
How Safeguard Helps
Safeguard integrates with Jenkins to pull the resolved dependency tree from every Maven build, which is the ground truth for what was actually compiled and packaged rather than what the POM claimed. We flag jobs that deviate from your credential scoping policy, settings.xml files that bypass the configured mirror, and builds that resolve snapshot versions on release branches. When a new CVE lands that affects a version you have deployed, Safeguard ties the CVE back to the specific Jenkins job and the specific build number that produced the release, so your remediation ticket goes to the team that owns the code rather than a central security queue. Teams that run our Jenkins agent report cutting time-to-patch for critical CVEs from days to hours.