For most of commercial aviation's software history, airworthiness was synonymous with DO-178C. If your code met the right Design Assurance Level, traced to requirements, and passed structural coverage, you shipped. Security was a station on the compliance map, not the map itself. That has changed. RTCA DO-326A, Airworthiness Security Process Specification, and its companion DO-356A, Airworthiness Security Methods and Considerations, now define what regulators expect when assessing whether an aircraft system can withstand intentional unauthorized electronic interaction. The FAA accepts DO-326A as a means of compliance for Part 25 special conditions on cybersecurity, and EASA has adopted it as AMC 20-42. If you build avionics, in-flight entertainment, connected cabin systems, or engine controllers, the standard is no longer optional.
What is new, and for supply chain teams genuinely difficult, is that DO-326A treats the aircraft as the asset and the software supply chain as an attack surface that must be characterized, bounded, and justified. The days of accepting a vendor's binary with a DO-178C certificate and moving on are ending.
From airworthiness to airworthiness security
DO-178C answered the question "does this software behave as intended?" DO-326A asks a different question: "can an adversary force this software to behave in ways that compromise safety, and can you prove it cannot happen through any realistic attack path?" The answer has to account for every component that touches the system, including open-source libraries pulled into avionics stacks, firmware from chipset vendors, RTOS kernels licensed from third parties, and development tooling used to build the binaries that go on the aircraft.
The process flow is structured around a Plan for Security Aspects of Certification (PSecAC), a security risk assessment, a security architecture, and a security verification activity. Every one of these deliverables references assets, threat conditions, and threat scenarios. A threat scenario is not abstract; it is a concrete sequence of adversary actions that could lead to a failure condition. If your weather radar firmware links against a third-party TLS library, and that library has an unresolved memory corruption vulnerability, that is a threat scenario input, not a footnote.
SBOMs as a certification artifact
DO-326A does not mandate SBOMs by name, but the evidence required to discharge a security risk assessment is, in practice, a software bill of materials with provenance. Certifying authorities have begun to explicitly request component inventories during security substantiation reviews. The reasoning is straightforward: you cannot claim coverage over threat scenarios if you cannot enumerate what is in the binary.
The aviation SBOM is not the relaxed SPDX export a web team might produce. It has to include:
- Every component, including transitive dependencies and build-time tooling that contaminates the output.
- Hashes tied to the exact artifact flown on the aircraft, not the latest release on a package registry.
- Cryptographic provenance that links the source repository, the build environment, and the signed binary.
- A change history that explains why each component is at its specific version, because in aviation, "we upgraded because a bot opened a PR" is not a valid rationale.
Teams that manage this well treat the SBOM as a configuration item under DO-178C, with the same change control rigor as requirements documents.
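To make the list above concrete, here is a minimal sketch of how such an SBOM entry might be validated as a configuration item. The field names beyond the component name and version (`provenance`, `version_rationale`) are illustrative assumptions, not terms from CycloneDX, SPDX, or DO-326A, and the hash check uses SHA-256 for the example:

```python
import hashlib

# Illustrative gate on an SBOM entry: every component record must carry
# the evidence described above, and its hash must match the exact artifact
# flown, not whatever a package registry currently serves.
REQUIRED_FIELDS = {"name", "version", "sha256", "provenance", "version_rationale"}

def validate_component(component: dict, artifact_bytes: bytes) -> list[str]:
    """Return a list of findings; an empty list means the entry is acceptable."""
    findings = sorted(f"missing field: {f}" for f in REQUIRED_FIELDS - component.keys())
    if "sha256" in component:
        actual = hashlib.sha256(artifact_bytes).hexdigest()
        if component["sha256"] != actual:
            findings.append("sha256 does not match delivered artifact")
    return findings

firmware = b"\x7fELF...firmware image bytes..."  # stand-in for the real binary
entry = {
    "name": "embedded-tls",
    "version": "3.4.1",
    "sha256": hashlib.sha256(firmware).hexdigest(),
    "provenance": {"repo": "git://example/embedded-tls", "build": "run-4812"},
    "version_rationale": "memory-safety fix, analyzed under PSecAC change control",
}
print(validate_component(entry, firmware))  # []
```

In a real program this check would run in the release pipeline, and a non-empty finding list would block the configuration item from baselining.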
Supplier oversight is now your oversight
The classical airworthiness model pushes a lot of responsibility onto suppliers. DO-326A does not let you offload security that way. Applicants remain accountable for the full threat scenario catalogue, which means you have to either inherit credible security evidence from each supplier or re-derive it yourself. In the real world, this looks like:
- Contractual clauses that require suppliers to deliver SBOMs, vulnerability disclosures, and secure development process evidence alongside the code drop.
- Independent assessment of supplier build environments, because if a compromised supplier CI pipeline injects malware into an LRU firmware image, the regulator will ask you what controls you had in place.
- Continuous monitoring of supplier components after delivery. If a new CVE lands on a library embedded deep in a TCAS computer, you need to know before the operator does.
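The monitoring obligation in the last bullet reduces to a join between incoming advisories and the supplier component inventory. The data shapes below are assumptions for illustration (a real advisory feed would use CPEs or purls and proper version-range semantics):

```python
# Sketch: match a newly disclosed vulnerability against the supplier
# component inventory, so affected LRUs surface immediately.

def affected_components(advisory: dict, inventory: list[dict]) -> list[dict]:
    """Components whose name matches and whose version falls in the advisory range."""
    def in_range(version: str, lo: str, hi: str) -> bool:
        parse = lambda v: tuple(int(x) for x in v.split("."))
        return parse(lo) <= parse(version) < parse(hi)

    return [
        c for c in inventory
        if c["name"] == advisory["component"]
        and in_range(c["version"], advisory["introduced"], advisory["fixed"])
    ]

inventory = [
    {"name": "libtls", "version": "2.4.1", "lru": "TCAS computer"},
    {"name": "libtls", "version": "3.0.0", "lru": "cabin gateway"},
]
advisory = {"component": "libtls", "introduced": "2.0.0", "fixed": "2.5.0"}
print([c["lru"] for c in affected_components(advisory, inventory)])  # ['TCAS computer']
```

The point is less the matching logic than where it runs: this query has to execute automatically on every new advisory, not during the next quarterly supplier review.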
This is where most programs struggle. Aerospace supplier relationships are long, paperwork-heavy, and frequently governed by contracts written before any of this language existed. Retrofitting security supply chain obligations into existing statements of work is hard, and retrofitting them in time for a certification milestone is harder.
Build environment integrity
DO-356A explicitly addresses the development and verification environments. The argument is simple: if the build system is compromised, no amount of source-level review can save you. Certifying authorities increasingly expect evidence that build toolchains are pinned, that build reproducibility has been demonstrated, and that the cryptographic chain from source commit to signed binary is intact.
For programs that have been running the same Jenkins instance for a decade, this is a significant uplift. SLSA-style build provenance, hermetic builds, and tamper-evident artifact storage are no longer DevSecOps nice-to-haves; they are the evidence package that unlocks type certification.
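A toy sketch of the two checks regulators are asking for: reproducibility (independent builds yield byte-identical binaries) and an attestation binding source commit to artifact digest. The HMAC here is a stand-in for a real signature scheme (in practice, asymmetric signing along the lines of Sigstore or SLSA provenance), and all identifiers are invented:

```python
import hashlib
import hmac

SIGNING_KEY = b"demo-key"  # placeholder; a real program uses asymmetric keys in an HSM

def attest(commit: str, artifact: bytes) -> dict:
    """Bind a source commit to an artifact digest with a keyed tag."""
    digest = hashlib.sha256(artifact).hexdigest()
    tag = hmac.new(SIGNING_KEY, f"{commit}:{digest}".encode(), "sha256").hexdigest()
    return {"commit": commit, "sha256": digest, "sig": tag}

def verify(attestation: dict, artifact: bytes) -> bool:
    """Check that the artifact in hand is the one the attestation was issued for."""
    digest = hashlib.sha256(artifact).hexdigest()
    expected = hmac.new(
        SIGNING_KEY, f"{attestation['commit']}:{digest}".encode(), "sha256"
    ).hexdigest()
    return digest == attestation["sha256"] and hmac.compare_digest(expected, attestation["sig"])

build_a = b"binary-from-hermetic-build"
build_b = b"binary-from-hermetic-build"  # independent rebuild, identical by design
assert build_a == build_b                # reproducibility demonstrated
att = attest("9f3c2ab", build_a)
print(verify(att, build_b))  # True
```

The evidence package is then the attestation plus the rebuild log, stored in tamper-evident artifact storage rather than on the build host itself.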
Continued airworthiness in a security context
A DO-326A program does not end at entry into service. DO-355, Information Security Guidance for Continuing Airworthiness, extends the process to the operational lifecycle. The expectation is that operators and type certificate holders have a living view of the threat landscape against the fielded fleet, including new vulnerabilities that affect components in service. That view is powered, again, by a current SBOM, a CVE feed, and a process for determining whether a newly disclosed vulnerability is exploitable in the specific aircraft configuration.
When a log4j-class event happens to an aviation component, the question regulators will ask is not "did you patch quickly?" It will be "how long did it take you to know you were affected, and could you prove which tail numbers contained the vulnerable component?"
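Answering that tail-number question is, structurally, a lookup against per-aircraft configuration records. The shapes and registrations below are invented for illustration; the real data lives in the configuration management and continued-airworthiness systems:

```python
# Sketch: "which tail numbers carry the vulnerable component at the
# vulnerable version?" answered as a query over fleet configuration.

fleet_config = {
    "N123AB": {"fms-fw": "4.2.0", "libtls": "2.4.1"},
    "N456CD": {"fms-fw": "4.2.0", "libtls": "2.5.3"},
    "N789EF": {"fms-fw": "4.1.9", "libtls": "2.4.1"},
}

def tails_with(component: str, version: str) -> list[str]:
    """Registrations whose current configuration includes the exact component version."""
    return sorted(t for t, cfg in fleet_config.items() if cfg.get(component) == version)

print(tails_with("libtls", "2.4.1"))  # ['N123AB', 'N789EF']
```

If this query takes weeks of emailing operators rather than seconds against a current configuration record, the DO-355 monitoring loop is not actually closed.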
What engineering teams should prioritize now
Programs heading toward type certification or major amendment should be converging on a short list of concrete capabilities. Maintain a machine-readable SBOM per configuration item, with build provenance that links to a signed artifact. Treat component selection as a security decision with a justification entry in the configuration management record. Build a threat scenario library that cross-references the SBOM, so a new CVE can be mapped to affected scenarios in minutes rather than weeks. Extend security obligations into supplier agreements, and back those obligations with artifact exchange mechanisms that do not depend on emailed ZIP files.
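The threat-scenario cross-reference described above is a simple join once scenarios carry SBOM component identifiers. A minimal sketch, with all scenario and component IDs invented for illustration:

```python
# Sketch: threat scenarios reference SBOM component IDs, so a CVE on a
# component resolves to affected scenarios in a single lookup.

scenarios = {
    "TS-014": {"title": "TLS downgrade on maintenance link", "components": ["cmp-libtls"]},
    "TS-027": {"title": "Malformed weather uplink parse", "components": ["cmp-wxparser"]},
}

def scenarios_for_cve(cve_components: set[str]) -> list[str]:
    """Threat scenario IDs whose linked components intersect the CVE's component set."""
    return sorted(
        sid for sid, s in scenarios.items()
        if cve_components & set(s["components"])
    )

print(scenarios_for_cve({"cmp-libtls"}))  # ['TS-014']
```

The value is in the discipline of maintaining the linkage, not the query itself: every new scenario gets component references at authoring time, so impact analysis never starts from a blank page.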
None of this is exotic compared to modern software engineering practice. What makes it hard in aviation is the long certification timelines, the supplier tiering, and the fact that any meaningful change to process has to be traceable, auditable, and consistent with the rest of the airworthiness system.
How Safeguard Helps
Safeguard was built for programs that need certification-grade supply chain evidence without turning the security team into a document factory. It ingests SBOMs in CycloneDX and SPDX formats at the configuration-item level, tracks component provenance through signed build attestations, and maps CVE disclosures to the exact artifacts where they are relevant. Threat scenarios and assets can be linked to components so that when a new vulnerability lands, the impact analysis is a query rather than a project. For DO-326A and DO-355 programs, this compresses the evidence package assembly and the continued airworthiness monitoring loop into a single auditable workflow.