In Part 1 of this series, we examined the five failure modes that define DO-326A compliance in practice — generic Security Objectives, disconnected threat analysis, broken traceability, single-expert dependencies, and late scope discovery. We diagnosed the problem. Here, we go deeper into the technical mechanics of why the two hardest tasks in DO-326A analysis — Security Objective identification and traceability management — are substantially more difficult than they appear, and what rigorous execution of each actually requires.
Understanding this is not academic. It is the prerequisite for understanding what automation can and cannot replace in a certification programme — which we address directly in Part 3.
What a Security Objective Actually Is
The term "Security Objective" appears throughout DO-326A/ED-203A, but its precise meaning is frequently misunderstood — even by experienced practitioners. This misunderstanding is the root cause of the generic Security Objective problem we identified in Part 1.
A Security Objective in the DO-326A sense is not a system requirement. It is not a design principle. It is not a policy statement. It is a specific, attributable, falsifiable claim about what security property a defined aircraft function must maintain in order to prevent an identified failure condition.
The structure of a well-formed Security Objective has four components:
Subject — the specific aircraft function or system capability that must be protected. Not "the FMS" but "the FMS navigation database update function." Not "the ADIRU" but "the ADIRU attitude data output to the primary flight display."
Property — the specific security property that must be maintained: integrity, availability, or confidentiality. These are not interchangeable. An integrity objective means unauthorised modification must be prevented. An availability objective means denial of service must be prevented. A confidentiality objective means unauthorised disclosure must be prevented. Each implies different threat vectors, different mitigations, and different evidence requirements.
Condition — the failure condition from the aircraft safety assessment that would result if the security property were violated. This is the link that connects the security analysis to the safety assessment, and it is non-negotiable. A Security Objective without a linked failure condition is not anchored in the aircraft's safety baseline — and regulators will notice.
Level — the severity of the linked failure condition, which determines the rigour of analysis required. A Catastrophic failure condition requires a different depth of security analysis than a Major one.
A properly formed Security Objective for an FMS function might read: "The integrity of navigation database records accessed by the FMS trajectory computation function shall be maintained against unauthorised modification, as loss of integrity could cause an erroneous flight path that contributes to a Catastrophic failure condition."
Compare this to: "The FMS shall be protected against cybersecurity threats." Both appear in DO-326A compliance documentation on active programmes today. Only one supports a meaningful threat analysis.
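The four-part structure described above lends itself to a simple data record. A minimal sketch in Python follows; the failure condition identifier and field names are illustrative, not drawn from the standard:

```python
from dataclasses import dataclass
from enum import Enum

class SecurityProperty(Enum):
    INTEGRITY = "integrity"
    AVAILABILITY = "availability"
    CONFIDENTIALITY = "confidentiality"

class Severity(Enum):
    CATASTROPHIC = "Catastrophic"
    HAZARDOUS = "Hazardous"
    MAJOR = "Major"
    MINOR = "Minor"

@dataclass(frozen=True)
class SecurityObjective:
    subject: str                          # a specific function, not a whole LRU
    security_property: SecurityProperty   # exactly one property per objective
    failure_condition_id: str             # link into the aircraft safety assessment
    severity: Severity                    # severity of the linked failure condition

    def is_well_formed(self) -> bool:
        # A generic subject or a missing failure-condition link makes the
        # objective unusable for threat analysis.
        return bool(self.subject.strip()) and bool(self.failure_condition_id.strip())

so = SecurityObjective(
    subject="FMS navigation database update function",
    security_property=SecurityProperty.INTEGRITY,
    failure_condition_id="FC-NAV-017",  # hypothetical identifier
    severity=Severity.CATASTROPHIC,
)
```

The point of the record is that each of the four components is mandatory: an objective that cannot name its subject or its failure condition fails the well-formedness check rather than passing silently into the documentation.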
The Identification Problem: Why Experts Disagree
If Security Objectives have a clear structure, why is identifying them so difficult?
The challenge is that identifying Security Objectives requires simultaneously holding several complex models in mind: the functional architecture of the system, the safety assessment, the operational threat environment, and the standard's taxonomy — and applying all of them to the specific system under analysis.
Consider a Traffic Collision Avoidance System (TCAS II). The safety assessment identifies several failure conditions: failure to issue a Resolution Advisory when required, issuance of an incorrect Resolution Advisory, and nuisance alerts. Each has a severity classification. Each is potentially reachable through a cybersecurity attack.
But the analysis does not stop there. TCAS II receives surveillance data from the transponder and from other aircraft via ACAS protocols. It processes that data and outputs advisories to the crew. Each interface, each data flow, and each processing step is potentially a target for attack. The Security Objectives for TCAS must cover not just the TCAS unit itself but the integrity of its inputs, the integrity of its processing, and the availability of its outputs — all mapped to specific failure conditions.
Now multiply this across a modern aircraft's avionics suite. A typical single-aisle commercial aircraft has upwards of 50 LRUs with significant avionics functionality. Each has its own functional architecture, its own interfaces, its own contribution to the aircraft's failure condition landscape.
A thorough Security Objective analysis at the aircraft level — not just the LRU level — must also account for attack paths that traverse multiple systems. An attack on an IFE system that crosses into the avionics network and reaches a flight-critical LRU does not appear in the security analysis of any individual LRU. It only appears when the analysis is conducted at the aircraft level, with a complete picture of inter-system connectivity.
This is why Security Objective identification cannot be reduced to a per-LRU checklist exercise. It requires genuine architectural reasoning about the aircraft as a system of systems — anchored in the safety assessment at every step.
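The aircraft-level reasoning above is, at its core, a reachability question over inter-system connectivity. A minimal sketch, with a purely hypothetical connectivity map (the system names and edges are illustrative only):

```python
from collections import deque

# Hypothetical inter-system connectivity: each edge is a data path an
# attacker could potentially traverse. Real programmes derive this from
# the aircraft-level architecture and ICDs.
connectivity = {
    "IFE": ["network_gateway"],
    "network_gateway": ["avionics_switch"],
    "avionics_switch": ["FMS", "display_unit"],
    "FMS": [],
    "display_unit": [],
}

def reachable_from(entry_point, graph):
    """Breadth-first search: every system an attacker entering at
    `entry_point` could reach through documented connectivity."""
    seen, queue = {entry_point}, deque([entry_point])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen - {entry_point}

# The IFE-to-FMS path only becomes visible with the full graph; no
# per-LRU analysis of the FMS alone would surface it.
print(sorted(reachable_from("IFE", connectivity)))
```

No single LRU's security analysis contains this graph, which is exactly why the per-LRU checklist approach misses cross-boundary attack paths.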
Interface Control Documents: The Primary Source of Complexity
If Security Objectives define what must be protected, Interface Control Documents (ICDs) define the attack surface that must be analysed. A mature FMC ICD might document several hundred distinct interfaces.
For each interface, the security analyst must determine:
Is this interface a potential attack vector? An ARINC 429 input from a ground data loader is an obvious attack vector. An ARINC 429 output to a display unit is less obviously one, but could be relevant if the display unit has external connectivity that creates a path back into the LRU.
What data does this interface carry, and what failure conditions are reachable through its corruption? A parameter that feeds directly into a navigation calculation has a different security profile than a parameter used only for maintenance logging.
What is the trust relationship between source and destination? An interface between two LRUs within the same certified aircraft system has a different trust profile than an interface to a ground support system or a third-party data provider.
What existing protections are present, and are they adequate? AFDX's virtual link mechanism provides a degree of separation. Other interfaces may rely entirely on physical security or procedural controls.
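The four questions above amount to a first-pass triage of each interface record. A minimal sketch, assuming a simplified interface model; the field names, categories, and triage rules are illustrative and much coarser than a real analysis:

```python
from dataclasses import dataclass

@dataclass
class Interface:
    name: str
    bus: str                      # e.g. "ARINC 429", "AFDX"
    direction: str                # "input" or "output"
    external_source: bool         # crosses the certified-system boundary?
    feeds_safety_function: bool   # data reaches a safety-relevant computation?

def triage(iface: Interface) -> str:
    """Coarse first-pass triage of a single ICD interface entry.
    The categories here are illustrative, not from the standard."""
    if iface.direction == "input" and iface.external_source:
        return "attack vector: analyse in depth"
    if iface.feeds_safety_function:
        return "analyse: corruption reaches a failure condition"
    return "document rationale for exclusion"

loader = Interface("nav DB load", "ARINC 429", "input",
                   external_source=True, feeds_safety_function=True)
print(triage(loader))
```

Note that even the "excluded" branch produces an obligation: a documented rationale, because an authority reviewer will ask why each interface was or was not analysed.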
Answering these questions for several hundred interfaces in a single LRU's ICD is a multi-week exercise for a skilled analyst. This is exactly the bottleneck that CompliAir addresses — processing ICDs in minutes and generating structured starting points for expert review.
Threat Conditions: From Objectives to Attack Scenarios
Once Security Objectives are identified, the analysis must derive threat conditions: specific scenarios in which a threat actor could cause a Security Objective to be violated.
A threat condition must specify:
The threat actor — who might conduct this attack, and what are their capabilities and motivations? A nation-state actor with advanced persistent threat capabilities is different from a disgruntled employee with insider access.
The attack vector — how does the attacker reach the system? Through a ground connectivity interface during maintenance? Through the IFE network? Through a supply chain compromise?
The attack method — what does the attacker actually do? Inject a corrupted data word into an ARINC 429 bus? Replay a valid but stale navigation database update? Exploit a buffer overflow in an avionics OS?
The resulting violation — which specific Security Objective is violated, and how?
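Like Security Objectives, threat conditions have a fixed shape, and an incomplete one cannot be assessed or traced. A minimal sketch, with a hypothetical objective identifier:

```python
from dataclasses import dataclass

@dataclass
class ThreatCondition:
    actor: str               # who conducts the attack, with what capability
    vector: str              # how the attacker reaches the system
    method: str              # what the attacker actually does
    violated_objective: str  # ID of the Security Objective that is violated

def is_complete(tc: ThreatCondition) -> bool:
    """A threat condition missing any of the four elements should be
    flagged rather than carried forward into the analysis."""
    return all(f.strip() for f in
               (tc.actor, tc.vector, tc.method, tc.violated_objective))

tc = ThreatCondition(
    actor="insider with maintenance-bay access",
    vector="ground data loader port during scheduled maintenance",
    method="replay of a valid but stale navigation database update",
    violated_objective="SO-FMS-003",  # hypothetical identifier
)
```

The last field is the one most often left empty in practice: a threat scenario that does not name the objective it violates is a war story, not a certification artefact.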
This is where the value of VulnAirabilityDb becomes concrete. A threat condition analysis not informed by actual vulnerability intelligence for ARINC 429, AFDX, MIL-STD-1553, and ACARS will miss entire classes of attack. VulnAirabilityDb contains vulnerability entries for avionics protocols that do not appear in NVD, MITRE, or any general-purpose threat feed — because those databases were not built for aviation.
Traceability: The Problem That Compounds Over Time
DO-326A requires a complete, auditable chain from every Security Objective through every threat condition, through every security requirement, to implementation evidence. This chain must be maintained throughout the programme lifecycle.
The challenge of traceability is not its initial construction — it is its maintenance under change. Aircraft programmes are not static. Systems evolve. Interfaces change. Architectural decisions are revised.
The mechanism of degradation is consistent across programmes: a design change is assessed for safety and software impact, but cybersecurity impact assessment is skipped or deprioritised. The Security Objective and threat condition records are not updated. The traceability record drifts from the current system configuration.
By certification review, the traceability record may accurately reflect a version of the system that was superseded months or years earlier. The cost of reconciling a significantly drifted traceability record can be substantial — measured in months, not weeks.
The solution is automated change impact analysis: a mechanism that, when a design document changes, identifies which Security Objectives and threat conditions are potentially affected and flags them for review. This requires semantic understanding of the relationship between the design change and the Security Objective — which is precisely what LLM-powered tools provide.
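The flagging mechanism can be sketched as a lookup over the trace graph. In reality the link between a design change and an objective is semantic rather than a literal edge, which is the part that requires language understanding; the explicit edges below stand in for that, and all document and objective identifiers are hypothetical:

```python
# Hypothetical trace links: design document -> Security Objectives that
# depend on it. A real tool derives these relationships semantically.
trace_links = {
    "ICD-FMS-rev7": ["SO-FMS-001", "SO-FMS-003"],
    "ARCH-NET-rev2": ["SO-FMS-003", "SO-TCAS-002"],
}

def impacted_objectives(changed_docs, links):
    """Return the Security Objectives flagged for review when the
    given design documents change."""
    flagged = set()
    for doc in changed_docs:
        flagged.update(links.get(doc, []))
    return sorted(flagged)

print(impacted_objectives(["ICD-FMS-rev7"], trace_links))
# A changed document with no trace links flags nothing, and that
# absence is itself a finding worth surfacing.
```

The value of the mechanism is not the lookup itself but the discipline it enforces: no design change lands without the affected objectives being enumerated for review.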
What Rigorous DO-326A Analysis Looks Like
A rigorous DO-326A analysis for a complex aircraft programme has the following characteristics:
Coverage is complete at the aircraft level. Every aircraft function with a security-relevant failure condition has at least one Security Objective. Attack paths that cross LRU boundaries are explicitly identified and analysed.
Security Objectives are specific, attributable, and linked. Every Security Objective names a specific function, specifies the security property, and links to a specific failure condition with its severity classification.
Threat conditions are grounded in operational threat intelligence. Attack methods are informed by real vulnerability intelligence for the specific protocols in use — not just generic categories.
Traceability is complete and current. Every design change that affects a security-relevant system triggers a documented review of the affected Security Objectives and threat conditions.
The analysis survives authority scrutiny. EASA and FAA reviewers can trace any security requirement back through the complete chain to the system function it protects.
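The first characteristic, completeness of coverage, is also the one most amenable to mechanical checking: every security-relevant failure condition must have at least one linked Security Objective. A minimal sketch, with hypothetical identifiers throughout:

```python
# Hypothetical safety-assessment failure conditions and the objectives
# that claim to cover them.
failure_conditions = {"FC-NAV-017", "FC-TCAS-004", "FC-DISP-009"}
objectives = {
    "SO-FMS-001": "FC-NAV-017",
    "SO-TCAS-002": "FC-TCAS-004",
}

def uncovered(fcs, objs):
    """Failure conditions with no linked Security Objective: each one
    is a coverage gap an authority reviewer would flag."""
    covered = set(objs.values())
    return sorted(fcs - covered)

print(uncovered(failure_conditions, objectives))  # ['FC-DISP-009']
```

This check catches omissions but not generic or poorly scoped objectives; those still require the structural and semantic scrutiny described earlier.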
This is achievable — and it is the standard that CompliAir is built to help programmes reach. In Part 3, we explain specifically how.
CompliAir automates DO-326A Security Objective extraction, threat condition derivation, and traceability generation — providing expert engineers with a structured starting point for review rather than requiring them to build the analysis from scratch. VulnAirabilityDb provides the on-premise vulnerability intelligence that grounds threat condition analysis in operational reality.