DevSecOps

ArgoCD GitOps Security Depth

A deep look at ArgoCD security in production: RBAC models, repo credentials, ApplicationSet risks, and the CVEs that have shaped the current hardening defaults.

Nayan Dey
Senior Security Engineer
6 min read

ArgoCD has won the Kubernetes GitOps space, and with that win comes a set of security responsibilities that most teams discover the hard way. It is a controller with effectively cluster-admin permissions in every cluster it manages, it holds read credentials for every git repository that contains Kubernetes manifests, and its web UI is often exposed to internal users who are not cluster operators. Each of those facts is a fine design choice in isolation. Together, they mean a compromised ArgoCD instance is a compromised fleet.

This post is about the depth of ArgoCD security, not the shallow "turn on TLS" checklist. It covers the RBAC model and how to keep it tight, the repo credential design and the leak paths, the ApplicationSet feature and the new risks it introduced, and the CVE history that has shaped the current defaults. Versions referenced are ArgoCD v2.11 and v2.12, current at the time of writing.

The permission model is not what the UI suggests

ArgoCD has three layers of permissions that most operators conflate. The first is the Kubernetes RBAC of the argocd-application-controller service account in each managed cluster, which governs what the controller can actually apply. The second is ArgoCD's internal RBAC, configured through argocd-rbac-cm, which governs what authenticated users can do inside the ArgoCD API. The third is the project-level restrictions in AppProject resources, which constrain what namespaces and resource kinds an Application can target.
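The third layer can be sketched as an AppProject that pins an Application's allowed destinations and resource kinds. The project name, repository URL, and kind whitelist below are illustrative, not prescriptive:

```yaml
# Illustrative AppProject confining tenant-a Applications to a single
# namespace and a short whitelist of namespaced resource kinds.
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: tenant-a
  namespace: argocd
spec:
  sourceRepos:
    - https://github.com/example-org/tenant-a-manifests.git  # placeholder repo
  destinations:
    - server: https://kubernetes.default.svc
      namespace: tenant-a
  clusterResourceWhitelist: []          # no cluster-scoped resources at all
  namespaceResourceWhitelist:
    - group: "apps"
      kind: Deployment
    - group: ""
      kind: Service
    - group: ""
      kind: ConfigMap
```

An empty clusterResourceWhitelist is the important part: it means a compromised tenant repository cannot smuggle in ClusterRoles, CRDs, or webhooks through this project.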

The defense-in-depth principle says that all three layers should be configured, and that failure at any one layer should be caught by the next. In practice, most installations only take the internal ArgoCD RBAC seriously and leave the controller with broad Kubernetes permissions, because tightening the controller RBAC requires enumerating every resource kind that every Application installs. That is tedious but not hard. The payoff is that a CVE like CVE-2024-31989 in the Redis instance used for caching cannot be turned directly into cluster compromise if the controller only has access to the specific namespaces and kinds the team actually deploys.
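Concretely, that enumeration ends up as a namespaced Role plus a RoleBinding back to the controller's service account in each managed cluster. A sketch, with illustrative resource kinds — the real list should be derived from what your Applications actually deploy:

```yaml
# Sketch: replace the controller's default broad binding with a
# namespaced Role per managed namespace. Kinds listed are illustrative.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: argocd-controller
  namespace: tenant-a
rules:
  - apiGroups: ["apps"]
    resources: ["deployments", "replicasets"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  - apiGroups: [""]
    resources: ["services", "configmaps"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: argocd-controller
  namespace: tenant-a
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: argocd-controller
subjects:
  - kind: ServiceAccount
    name: argocd-application-controller
    namespace: argocd
```

Note the controller will also need read access for health assessment of any kinds it manages, so expect a few iterations before the list stabilizes.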

A minimum-viable ArgoCD RBAC for a production team looks something like this in argocd-rbac-cm:

apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-rbac-cm
  namespace: argocd
data:
  policy.default: ""
  policy.csv: |
    p, role:readonly, applications, get, */*, allow
    p, role:readonly, projects, get, *, allow
    p, role:deployer, applications, sync, tenant-a/*, allow
    p, role:deployer, applications, get, tenant-a/*, allow
    p, role:admin, *, *, *, allow
    g, team:platform-engineers, role:admin
    g, team:tenant-a-devs, role:deployer

The critical line is policy.default: "". With a default of role:readonly, every authenticated user — and, if anonymous access is enabled in argocd-cm, every unauthenticated one — can enumerate Applications and projects, which has historically been a source of information disclosure findings.

Repo credentials are the crown jewels

ArgoCD needs read access to every git repository that contains deployable manifests. In a mid-sized organization, that means read credentials for hundreds of repositories, stored as Kubernetes secrets in the argocd namespace. An attacker who compromises the ArgoCD secrets has a de facto read mirror of your entire Kubernetes configuration repository. For any team that stores Helm values, Kustomize overlays, or sealed secrets metadata in those repositories, that is often enough to plan a targeted attack.

Two hardening steps matter here. First, use per-repository credentials with minimum scope. A GitHub App installation scoped to a specific set of repositories is much better than a personal access token with broad read access. Second, never use SSH keys without known_hosts entries and never accept TOFU behavior. The insecureIgnoreHostKey option in repository configuration should never be enabled in any environment, and the argocd-ssh-known-hosts-cm ConfigMap should be pinned to the actual host key fingerprints of your git provider. The ArgoCD documentation covers this but the default Helm chart does not enforce it.
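A per-repository GitHub App credential is declared as a Secret carrying the argocd.argoproj.io/secret-type: repository label. The IDs and repository URL below are placeholders:

```yaml
# Sketch: one narrowly scoped credential per repository, via a GitHub
# App installation rather than a personal access token.
apiVersion: v1
kind: Secret
metadata:
  name: repo-tenant-a
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  type: git
  url: https://github.com/example-org/tenant-a-manifests.git  # placeholder
  githubAppID: "123456"               # placeholder App ID
  githubAppInstallationID: "7890123"  # placeholder installation ID
  githubAppPrivateKey: |
    -----BEGIN PRIVATE KEY-----
    ...
    -----END PRIVATE KEY-----
```

Because the App installation is scoped at the GitHub side, revoking or rotating it affects exactly one repository's access, not the whole estate.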

CVE-2023-40029, disclosed in August 2023, was an example of how credential handling can go wrong. A specific repo-server flow allowed an attacker with ArgoCD UI access to read credentials for repositories they were not authorized to view, because the credential-check logic ran after the credential-fetch logic. The fix landed in v2.8.1 but the incident highlighted how fragile the trust model is when repo credentials are stored alongside the UI authorization logic.

ApplicationSet and the generator risk

ApplicationSet is the ArgoCD feature that lets you generate Applications from templates, with generators like the git-files generator, the cluster generator, and the pull-request generator. It is extremely useful, but it introduces a class of risks that did not exist with plain Application resources.

The pull-request generator in particular has been the source of several security findings. It reads open pull requests from a configured repository and creates a preview environment Application for each one. If the PR generator is configured to use a template that substitutes PR-controlled values into Kubernetes manifests without validation, an attacker who can open a pull request can cause ArgoCD to create arbitrary resources in a preview namespace. CVE-2024-21662, fixed in v2.10.3, was exactly this shape of bug: a template injection in the PR generator that allowed RCE against the repo-server.

The hardening advice here is narrow. If you use ApplicationSet, lock down the generator templates to only substitute into specific, non-structural fields like image tags and hostnames. Never let a generator-controlled value influence the namespace or project fields of the generated Application, because those control the security boundary the AppProject enforces. And pin the ApplicationSet controller version aggressively — the generator attack surface has been one of the most active bug areas in ArgoCD over the last 18 months.
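Under those constraints, a sketch of a PR-generator ApplicationSet looks like this. Only the PR number and commit SHA are substituted, and the project and destination namespace are fixed literals; the owner, repo, and path are illustrative:

```yaml
# Sketch: PR-controlled values flow only into non-structural fields.
# The project and destination namespace are never templated.
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: previews
  namespace: argocd
spec:
  generators:
    - pullRequest:
        github:
          owner: example-org        # placeholder
          repo: tenant-a-manifests  # placeholder
        requeueAfterSeconds: 300
  template:
    metadata:
      name: "preview-{{number}}"    # PR number only, never branch names
    spec:
      project: previews             # fixed literal
      source:
        repoURL: https://github.com/example-org/tenant-a-manifests.git
        targetRevision: "{{head_sha}}"
        path: deploy/preview
      destination:
        server: https://kubernetes.default.svc
        namespace: previews         # fixed literal
```

Substituting the PR number rather than the branch name into the Application name also avoids a whole class of naming tricks, since branch names are attacker-chosen strings while PR numbers are assigned by the provider.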

Sync windows and drift detection

An often-missed hardening control is sync windows. AppProject resources support a syncWindows field that restricts when automated syncs can run. For production clusters, configuring a sync window that blocks automated syncs outside of business hours adds a meaningful delay between a malicious git commit and its deployment. An attacker who pushes a backdoor at 2 a.m. has to wait until morning, which is a window during which automated detection or a human reviewer can catch the change.
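A sketch of such a window on a production AppProject, assuming business hours of 09:00–18:00 UTC:

```yaml
# Sketch: automated syncs allowed only during a weekday business-hours
# window; outside it, a sync requires a manual, logged action.
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: production
  namespace: argocd
spec:
  syncWindows:
    - kind: allow
      schedule: "0 9 * * 1-5"   # cron: 09:00 Mon-Fri
      duration: 9h              # window closes at 18:00
      applications:
        - "*"
      manualSync: true          # humans can still sync during an incident
```

The manualSync flag is the pragmatic escape hatch: an on-call engineer can still push a fix at 2 a.m., but it takes a deliberate action that shows up in the audit log rather than an automated sync.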

Paired with that, ArgoCD's drift detection should be wired to alert — for example through Argo CD Notifications — on any out-of-sync status caused by manual changes to managed resources. The managedNamespaces feature in v2.11 makes this cleaner by letting you declare that a namespace is exclusively managed by a specific Application, so any out-of-band kubectl apply in that namespace surfaces as a drift event.

How Safeguard Helps

Safeguard integrates with ArgoCD to ingest the signed manifests and attestations for every Application sync, validate them against your policy gates, and block a sync if a required attestation is missing or a banned image appears in the rendered manifest. The platform also monitors the ArgoCD instance itself for the specific CVEs discussed in this post, flags repo credentials with overly broad scope, and correlates ApplicationSet generator configurations against known risky patterns. Combined with continuous drift detection against the policy baseline, Safeguard gives you an auditable view of whether your ArgoCD estate is actually operating to the hardening standards you believe it is.
