When an open source project publishes a code of conduct, the security community's first reaction is often a shrug. What does a social document have to do with CVE counts or supply chain posture? It turns out the connection is direct and measurable. Projects with poorly enforced codes of conduct lose maintainers faster, attract narrower contributor pools, and accumulate maintenance debt that shows up as security debt six to eighteen months later. This is not abstract. The XZ Utils backdoor incident in March 2024 had a social-engineering layer that targeted exactly the maintainer-burnout dynamic that codes of conduct are meant to address. This post looks at the connection with the specificity it deserves.
What a code of conduct actually does
A well-written and enforced code of conduct establishes:
- Expected behaviour in project spaces (GitHub issues, mailing lists, Slack/Discord, events).
- Unacceptable behaviour, with examples specific enough to be actionable.
- Reporting channels for violations.
- Enforcement actions and who enforces them.
- Appeals process for contested enforcement.
The Contributor Covenant (version 2.1, most widely adopted as of 2024) is the standard text. Customization is fine; removing sections is usually a warning sign.
Why this matters for security: the maintainer-retention argument
Maintainers are the bottleneck for most open source project security activity. They triage issues, review PRs, evaluate vulnerability reports, coordinate disclosure. A maintainer who burns out or leaves a project takes institutional knowledge with them. The replacement curve is slow — a new maintainer takes six to twelve months to reach the productivity and judgment of the previous one.
The things that burn out maintainers are well documented: toxic interactions, unreasonable demands, personal attacks, relentless entitled behaviour. A code of conduct that enforces expected behaviour reduces the rate of these incidents. The result is longer maintainer tenures and lower institutional-knowledge turnover.
The security implication follows directly: longer-tenured maintainers make better security decisions, handle disclosure more competently, and are less vulnerable to social-engineering attacks of the kind seen in XZ Utils.
Contributor diversity and the attack-surface argument
Projects with narrow contributor pools have narrow review perspectives. A vulnerability whose exploit is obvious to a security-focused contributor but invisible to a performance-focused contributor gets more thorough review in projects that attract both. Projects whose community norms exclude contributors on social-identity lines (consciously or not) lose the review diversity that catches bugs.
This is the harder argument to make quantitatively, but the direction is well-established in organisational research. Diverse teams find more bugs per unit effort. Open source projects are no exception.
The enforcement problem
A code of conduct without enforcement is branding. Projects with documented violation reports that went unaddressed have lost contributor trust visibly — the public responses under contentious enforcement decisions read like organisational post-mortems.
Enforcement requires:
- A named enforcement body. Not "the maintainers" generically but specific people willing to make decisions.
- Process documented in advance. How reports are triaged, evaluated, and actioned.
- Willingness to take unpopular action. Sometimes against project contributors with significant prior contributions.
The projects that do this well (Rust, Go, most CNCF graduated projects) have retained maintainers and attracted diverse contributors. The projects that don't do this well either lack a code entirely or have one that exists on paper but fails in practice.
The XZ Utils connection
The XZ Utils attack in March 2024 involved a social-engineering campaign against the project's solo maintainer over multiple years. The attacker built trust by being helpful in a context where helpful contributors were scarce. The maintainer, burned out and under-resourced, handed over commit rights. The backdoor followed.
This scenario is exactly what healthy project community dynamics protect against. A project with multiple active maintainers, a healthy contributor pipeline, and enforced community norms is much harder to social-engineer. The attacker would need to compromise the community's collective judgment rather than one exhausted person.
The security community's response to XZ Utils appropriately emphasised technical controls (provenance, build reproducibility, dependency auditing). The equally important response — investing in the human and community factors that make solo-maintainer-burnout-plus-predatory-contributor scenarios less likely — has gotten less attention.
What an auditor can evaluate
When evaluating an open source project as a supply chain dependency, look at:
- Presence of a code of conduct (minimum baseline — projects without any CoC in 2024 are a yellow flag).
- Contributor Covenant v2.x or equivalent. Custom CoCs that are significantly weaker are a red flag.
- Named enforcement body or escalation path. Generic "contact the maintainers" is weak.
- Visible enforcement history. Well-functioning projects have some visible enforcement actions over time. Total absence suggests either the project has no violations (unusual) or no enforcement (more common).
- Maintainer retention. How long have active maintainers been contributing? Rapid turnover is a warning signal.
- Contributor diversity (geographically, organisationally, expertise-wise). Narrow contributor profiles correlate with review blind spots.
These indicators are informal but informative. A project that fails most of them is a higher supply chain risk than the CVE count alone suggests.
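The baseline checks above can be partially automated for GitHub-hosted dependencies. A minimal sketch follows, assuming the payload shape of GitHub's REST community-profile endpoint (`GET /repos/{owner}/{repo}/community/profile`, which reports a `health_percentage` score and a `files` map); the `governance_flags` function name and the 50-point threshold are illustrative choices, not calibrated values.

```python
# Sketch: derive governance warning flags from a GitHub
# community-profile payload. The payload structure mirrors
# GitHub's REST API; the thresholds are assumptions.

def governance_flags(profile: dict) -> list[str]:
    """Return human-readable warning flags for a dependency."""
    flags = []
    files = profile.get("files") or {}
    if not files.get("code_of_conduct"):
        flags.append("no code of conduct (yellow flag)")
    if not files.get("contributing"):
        flags.append("no contributing guide")
    # GitHub's aggregate community health score, 0-100.
    if profile.get("health_percentage", 0) < 50:
        flags.append("low overall community health score")
    return flags

# Example payload shaped like the API response (values are made up).
sample = {
    "health_percentage": 40,
    "files": {
        "code_of_conduct": None,
        "contributing": {"html_url": "https://example.com/CONTRIBUTING.md"},
    },
}
print(governance_flags(sample))
```

The qualitative indicators (enforcement history, contributor diversity) still need a human reviewer; this only mechanises the presence checks.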
Projects with strong community governance
Three examples worth noting:
- Rust: strong Contributor Covenant adoption, active Moderation Team, public enforcement actions. The project's security culture benefits from the community infrastructure.
- Kubernetes: CNCF-standard code of conduct, documented enforcement process, diverse contributor base. Security posture has held up well through substantial growth.
- Django: long-standing code of conduct (adopted in 2013), strong community moderation, stable maintainer pipeline.
Projects with known community issues
Without naming specific projects, the pattern to watch for: a solo maintainer, long-running contentious interactions visible in public issue threads, burnout commentary from former contributors. These projects may be high-quality technically but carry community-layer supply chain risk.
Practical recommendations for consuming organisations
For any critical-path open source dependency:
- Review the project's code of conduct during initial assessment.
- Track maintainer retention over time (GitHub contribution graphs show this).
- Note any visible community health signals — conflict in threads, maintainer burnout commentary, high contributor churn.
- Budget for potential fork or replacement if community health deteriorates.
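Maintainer retention can be tracked without relying on GitHub's UI. A minimal sketch, assuming you pipe in lines produced by `git log --format='%an|%at'` from a clone of the dependency (the author names and timestamps below are made up):

```python
from collections import defaultdict

def tenures(log_lines):
    """Map each author to (first_commit, last_commit, tenure_days),
    given lines formatted as 'Author Name|unix_timestamp'
    (e.g. from: git log --format='%an|%at')."""
    spans = defaultdict(lambda: [float("inf"), 0.0])
    for line in log_lines:
        # rpartition tolerates author names containing '|'.
        name, _, ts = line.rpartition("|")
        t = float(ts)
        first, last = spans[name]
        spans[name] = [min(first, t), max(last, t)]
    return {
        name: (first, last, (last - first) / 86400)
        for name, (first, last) in spans.items()
    }

# Hypothetical log excerpt: one long-tenured maintainer, one drive-by.
sample = [
    "Alice Example|1546300800",   # 2019-01-01 UTC
    "Alice Example|1704067200",   # 2024-01-01 UTC
    "Bob Driveby|1700000000",
]
result = tenures(sample)
print(round(result["Alice Example"][2]))  # tenure in days -> 1826
```

A shrinking set of long-tenured authors and a growing churn of single-commit contributors is exactly the warning signal described above.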
This is not about moralising. It is supply chain risk management applied to social factors that genuinely affect long-term project viability.
How Safeguard Helps
Safeguard's open source health scoring includes community governance signals — code of conduct presence and quality, maintainer retention trends, contributor diversity, visible community health. Griffin AI flags projects in the dependency graph whose community health has deteriorated visibly and correlates that with the project's security track record to produce a forward-looking risk signal. Policy gates can require minimum community health scores for newly-introduced dependencies. For organisations treating open source supply chain as a long-horizon risk management problem rather than a snapshot scan, Safeguard elevates the community-layer signal that traditional SCA tools ignore.