Compliance

DORA Third-Party ICT Risk for Financial Services 2026

A senior engineer's view of DORA third-party ICT risk in 2026: register of information, concentration risk, subcontractor depth, and the operational controls regulators actually test.

Shadab Khan
Security Engineer
8 min read

The Digital Operational Resilience Act moved from "impending" to "enforced" during 2025, and 2026 is the year regulators across the EU have started asking pointed questions about implementation quality. Third-party ICT risk is the area where most financial institutions thought they were in good shape based on existing TPRM programs, and where the gap to DORA's actual expectations has turned out to be wider than expected. Regulators are not asking whether you have a TPRM program; they're asking whether your program can answer specific operational questions about your third-party dependencies within defined timeframes.

This is the view from the technical trenches. If you're an engineering leader, a CISO, or a head of operational resilience at a financial institution, this is what the 2026 regulatory reality looks like from the inside of a compliance effort that's actually working.

What makes DORA third-party ICT risk different from pre-existing TPRM?

Pre-DORA, most financial TPRM programs were annual-cycle, survey-driven, and focused on the immediate vendor relationship. DORA moves all three axes. The cycle is continuous, not annual. The data is operational and evidence-based, not self-attested. And the focus extends multiple layers deep — into the subcontractors of your vendors and the ICT services supporting critical functions.

The register of information is the centerpiece. Every financial entity subject to DORA maintains a structured register listing every ICT third-party service provider, the functions they support, the contractual terms, and the operational risk profile. The register isn't just a list; it's a machine-readable artifact that regulators can request and reconcile against your other filings. The format is specified, the frequency is specified, and the accuracy expectations are high enough that "we'll clean it up before the next regulator visit" is not a workable strategy.

The second differentiator is the criticality classification. DORA distinguishes between ICT services supporting critical or important functions (the stringent set) and the rest. The classification is not self-service: your critical function designations follow from the regulator's definitions and your impact tolerance assessments, and they determine which controls apply to each vendor relationship.

How do I build and maintain the register of information?

Treat the register as data infrastructure, not a document. Build it as a versioned, signed, continuously updated dataset that pulls from authoritative upstream sources: your contract management system, your asset inventory, your CMDB, your TPRM intake, your code repositories, and your SBOM tooling. A register that's maintained manually in a spreadsheet will diverge from operational reality within a quarter.
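
To make "data infrastructure, not a document" concrete, here is a minimal sketch of a register build step: each upstream system contributes an extract, the build merges them on a shared vendor key, and every build is content-hashed so snapshots can be versioned and diffed. The source shapes and field names are illustrative, not taken from the ITS.

```python
import hashlib
import json
from datetime import datetime, timezone

def build_register_snapshot(contracts, cmdb_services, sboms):
    """Merge authoritative upstream extracts into one register build.

    Each argument is a list of dicts keyed by a shared vendor_id;
    the shapes are illustrative stand-ins for real system exports.
    """
    by_vendor = {}
    for row in contracts:
        by_vendor.setdefault(row["vendor_id"], {}).update(
            contract_ref=row["contract_ref"], lei=row.get("lei"))
    for row in cmdb_services:
        by_vendor.setdefault(row["vendor_id"], {}).update(
            function=row["function"], criticality=row["criticality"])
    for row in sboms:
        by_vendor.setdefault(row["vendor_id"], {}).setdefault(
            "components", []).extend(row["components"])

    body = json.dumps(by_vendor, sort_keys=True).encode()
    return {
        "built_at": datetime.now(timezone.utc).isoformat(),
        "content_hash": hashlib.sha256(body).hexdigest(),  # version identity
        "entries": by_vendor,
    }
```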

The schema is dictated by the ITS on the register of information. Implement it literally. Each line item needs the vendor identifier (LEI where available), the function supported, the contract reference, the criticality classification, the data types processed, the jurisdictions involved, and the subcontracting arrangement. Don't invent additional fields or reshape the schema for internal preferences; regulators will match your submission against the canonical format.
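
A minimal internal record shape for one line item, using the fields listed above. The Python names here are plumbing only; map them to the canonical ITS column names and codes at export time rather than redefining the schema. The validation rules shown are examples of the kind of checks a build should run, not an exhaustive set.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RegisterLineItem:
    vendor_lei: str | None       # LEI where available, else None
    vendor_name: str
    function_supported: str      # maps to your critical/important function list
    contract_ref: str
    criticality: str             # "critical_or_important" | "other"
    data_types: tuple[str, ...]  # categories of data processed
    jurisdictions: tuple[str, ...]
    subcontractors: tuple[str, ...] = ()  # direct subcontracting arrangement

    def validate(self) -> list[str]:
        """Return problems instead of raising, so a build can report all of them."""
        problems = []
        if self.criticality == "critical_or_important" and not self.contract_ref:
            problems.append(f"{self.vendor_name}: critical service without contract reference")
        if self.vendor_lei is None:
            problems.append(f"{self.vendor_name}: no LEI on file")
        return problems
```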

For software vendors and open source components specifically, the register intersects with your SBOM program. Every open source component embedded in a commercial vendor product is part of the provider's ICT service and should be reflected somewhere in your assessment. Most registers do this badly because the visibility stops at the vendor boundary. A better approach is to require vendor-provided SBOMs as part of the contract and to validate them against your own scans when you deploy the vendor's software.
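
A sketch of that validation step, assuming both the vendor SBOM and your own scan can be reduced to sets of package URLs (both CycloneDX and SPDX can carry purls). The parsing here is deliberately naive; real documents want a proper SBOM parser.

```python
import json

def sbom_purls(path: str) -> set[str]:
    """Extract package URLs from a CycloneDX-style JSON document (naive)."""
    with open(path) as f:
        doc = json.load(f)
    return {c["purl"] for c in doc.get("components", []) if "purl" in c}

def validate_vendor_sbom(vendor_sbom: str, own_scan: str) -> dict[str, set[str]]:
    declared = sbom_purls(vendor_sbom)
    observed = sbom_purls(own_scan)
    return {
        # components you observe running that the vendor never declared:
        "undeclared": observed - declared,
        # components declared but not observed (stale SBOM, or unscanned surface):
        "unobserved": declared - observed,
    }
```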

How does DORA treat subcontractor depth?

DORA requires that your risk assessments and controls propagate to subcontractors supporting critical or important functions. In practice, this means you need to know not just that Vendor A supports your payments function, but that Vendor A subcontracts the authentication piece to Vendor B, which uses Vendor C for its HSM service, which in turn has Vendor D as its cloud provider. Each of those layers can carry operational risk that impacts your critical function.

The regulator's patience with "we rely on our direct vendor to manage their subcontractors" has worn thin. Expect to see explicit expectations for subcontractor visibility at least two layers deep for critical ICT services, with contractual rights to audit and to require subcontractor changes to be disclosed. The engineering implication is that your TPRM data model needs to support nested vendor relationships, not just flat vendor-to-service mappings.
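
A minimal sketch of what that nesting can look like: a directed graph of vendor-to-subcontractor edges and a breadth-first walk that enumerates every provider within a given depth of a critical service. The vendor names mirror the illustrative chain above.

```python
from collections import deque

# vendor -> direct subcontractors (illustrative edges from the chain above)
SUBCONTRACTS = {
    "vendor_a": ["vendor_b"],   # payments -> authentication
    "vendor_b": ["vendor_c"],   # authentication -> HSM service
    "vendor_c": ["vendor_d"],   # HSM -> cloud provider
}

def subcontractors_within(root: str, max_depth: int) -> dict[str, int]:
    """Breadth-first walk returning each reachable provider and its depth."""
    seen = {root: 0}
    queue = deque([root])
    while queue:
        vendor = queue.popleft()
        if seen[vendor] >= max_depth:
            continue
        for sub in SUBCONTRACTS.get(vendor, []):
            if sub not in seen:
                seen[sub] = seen[vendor] + 1
                queue.append(sub)
    seen.pop(root)
    return seen

# subcontractors_within("vendor_a", 2) -> {"vendor_b": 1, "vendor_c": 2}
```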

Concentration risk emerges naturally from this visibility. If Vendor B, Vendor C, and Vendor D are all used across many of your vendor relationships, which is common for cloud providers, identity providers, and payments infrastructure, the true concentration is higher than the surface-level vendor list suggests. Your register needs to surface this rollup, and your resilience testing needs to include scenarios where a shared sub-provider is impacted.
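
The rollup falls out of the same graph. For each critical function, take the full provider chain and invert it: a provider that appears under many functions is the real concentration, whatever the flat vendor list says. The chains below are illustrative.

```python
from collections import Counter

# function -> full provider chain, e.g. the output of the depth walk above
FUNCTION_CHAINS = {
    "payments":   ["vendor_a", "vendor_b", "vendor_c", "vendor_d"],
    "custody":    ["vendor_e", "vendor_c", "vendor_d"],
    "settlement": ["vendor_f", "vendor_d"],
}

def provider_concentration(chains: dict[str, list[str]]) -> Counter:
    """Count how many critical functions each provider ultimately supports."""
    tally = Counter()
    for providers in chains.values():
        for p in set(providers):  # count each function at most once per provider
            tally[p] += 1
    return tally

# provider_concentration(FUNCTION_CHAINS).most_common(2)
# -> [("vendor_d", 3), ("vendor_c", 2)]  # shared sub-providers surface first
```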

What controls does the regulator actually test?

Three categories are the focus of current examinations. The first is contract coverage for critical third-party services. Every contract for a critical ICT service needs the DORA-mandated clauses: access and audit rights, service level agreements, exit strategies, data processing terms, cooperation with resolution authorities, and the right to terminate under specific conditions. Regulators ask for samples of contracts and walk through the clauses with counsel. Missing or weak clauses produce remediation plans with defined deadlines.
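
Clause coverage is checkable if your contract metadata records which mandated clauses counsel has confirmed in each agreement. The clause labels below paraphrase the list in this section; the authoritative source is DORA's key contractual provisions (Article 30) as your counsel maps it.

```python
REQUIRED_CLAUSES = {
    "access_and_audit_rights",
    "service_levels",
    "exit_strategy",
    "data_processing_terms",
    "resolution_authority_cooperation",
    "termination_rights",
}

def clause_gaps(contracts: dict[str, set[str]]) -> dict[str, set[str]]:
    """Map each critical-service contract to the mandated clauses it is missing."""
    return {
        ref: REQUIRED_CLAUSES - present
        for ref, present in contracts.items()
        if REQUIRED_CLAUSES - present
    }

# clause_gaps({"MSA-017": {"service_levels", "termination_rights"}})
# -> {"MSA-017": {"access_and_audit_rights", "exit_strategy", ...}}
```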

The second category is testing — both resilience testing and TLPT (threat-led penetration testing) for the entities subject to it. Your testing needs to explicitly include third-party dependencies: the scenario of a critical provider failure, the scenario of a provider being compromised, and the scenario of your recovery procedures depending on a provider whose posture you can't verify. Paper-based playbooks don't count; the regulator wants evidence of exercises executed, with observed gaps and remediation.
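
One way to make "exercises executed" verifiable rather than asserted: record each exercise as data and assert that every critical provider appears in at least one executed scenario inside the review window. The structures here are illustrative.

```python
from datetime import date

# executed exercises, each naming the third-party dependencies it covered
EXERCISES = [
    {"scenario": "provider_failure", "providers": {"vendor_a"}, "run": date(2026, 2, 10)},
    {"scenario": "provider_compromise", "providers": {"vendor_b"}, "run": date(2026, 4, 3)},
]

def untested_providers(critical: set[str], since: date) -> set[str]:
    """Critical providers with no executed exercise in the window: a finding, not a footnote."""
    covered = set()
    for ex in EXERCISES:
        if ex["run"] >= since:
            covered |= ex["providers"]
    return critical - covered

# untested_providers({"vendor_a", "vendor_b", "vendor_c"}, date(2026, 1, 1))
# -> {"vendor_c"}
```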

The third category is incident reporting. DORA has specific notification requirements for major ICT-related incidents, and third-party incidents that affect your services trigger those requirements regardless of who owns the affected system. Your incident process needs to ingest third-party status feeds, correlate them with your critical function impact, and produce regulator notifications within the mandated timelines. Automation here is not optional; the timelines are tight.
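
A sketch of the timeline mechanics only. The durations below are placeholders, not the binding values; those come from the incident-reporting technical standards and should be loaded from configuration, not hardcoded.

```python
from datetime import datetime, timedelta, timezone

# Placeholder durations: take the binding values from the applicable RTS.
NOTIFICATION_DEADLINES = {
    "initial": timedelta(hours=4),        # from classification as major
    "intermediate": timedelta(hours=72),  # from classification
    "final": timedelta(days=30),
}

def notification_schedule(classified_at: datetime) -> dict[str, datetime]:
    """Compute the regulator-notification deadlines for a major ICT incident."""
    return {report: classified_at + delta
            for report, delta in NOTIFICATION_DEADLINES.items()}

# A third-party status event that maps onto a critical function starts this
# clock exactly as an internal outage would:
# notification_schedule(datetime(2026, 3, 1, 9, 30, tzinfo=timezone.utc))
```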

How should I handle concentration risk and critical third-party providers?

The European Supervisory Authorities designate certain ICT third-party service providers as "critical" at the EU level, and those designations carry additional oversight obligations for both the provider and the financial entities that use them. Track the CTPP list and match it against your register continuously. A new CTPP designation for a provider you use is a trigger for additional controls, not an administrative update.
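
The matching itself is a set intersection on LEIs; the part worth engineering is treating each hit as a triggering event with an owner and a deadline. A minimal sketch, shapes illustrative.

```python
def new_ctpp_hits(ctpp_leis: set[str], register_leis: set[str],
                  already_tracked: set[str]) -> set[str]:
    """Providers on the CTPP list, in your register, not yet under CTPP controls."""
    return (ctpp_leis & register_leis) - already_tracked

# Run on every CTPP list update; each hit should open a tracked control task,
# not just update a field in the register.
```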

Concentration risk analysis is a board-level topic now. Map the share of your critical function capacity that depends on each CTPP, each cloud provider, and each critical ICT input. A concentration that exceeds your impact tolerance is an issue you need a remediation plan for, and the remediation plan needs realistic timelines — "multi-cloud by 2029" is a plan; "consider alternatives" is not.
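
Making "exceeds your impact tolerance" concrete means expressing each provider's share of a critical function's capacity and comparing it against the tolerance set for that function. The numbers below are illustrative.

```python
# share of the payments function's capacity carried by each provider (illustrative)
PAYMENTS_CAPACITY_SHARE = {"vendor_a": 0.70, "vendor_e": 0.30}
IMPACT_TOLERANCE = 0.50  # max tolerable single-provider share for this function

def tolerance_breaches(shares: dict[str, float], tolerance: float) -> dict[str, float]:
    """Providers whose share exceeds tolerance: each needs a dated remediation plan."""
    return {p: s for p, s in shares.items() if s > tolerance}

# tolerance_breaches(PAYMENTS_CAPACITY_SHARE, IMPACT_TOLERANCE) -> {"vendor_a": 0.7}
```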

Exit plans are a specific expectation. For each critical third-party service, you need an exit plan with defined RTOs, specific successor arrangements (or at least a qualified shortlist), and rehearsed technical procedures. Exit plans that exist only as written documents fail the regulator's operational test. Testing the exit plan — even partially, by rehearsing data portability or failover — is what makes it credible.
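
Rehearsal recency is easy to make checkable. A minimal sketch, assuming each exit plan records its RTO and the date it was last exercised; plans past the rehearsal interval surface as findings. The field names are illustrative.

```python
from datetime import date, timedelta

EXIT_PLANS = [
    {"service": "payments-auth", "rto_hours": 24, "last_rehearsed": date(2025, 11, 2)},
    {"service": "custody-hsm", "rto_hours": 8, "last_rehearsed": None},  # never exercised
]

def stale_exit_plans(plans: list[dict], max_age: timedelta, today: date) -> list[str]:
    """Exit plans that exist only on paper, by the regulator's operational test."""
    return [p["service"] for p in plans
            if p["last_rehearsed"] is None or today - p["last_rehearsed"] > max_age]

# stale_exit_plans(EXIT_PLANS, timedelta(days=365), date(2026, 6, 1)) -> ["custody-hsm"]
```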

What does good engineering evidence look like under DORA?

Good engineering evidence under DORA has three properties: it's produced by your operational systems as a byproduct of work, it's immutable once produced, and it maps to a specific regulatory expectation. The register of information is the most visible example, but the same principle applies to your SBOM archives, your contract clause tracking, your incident notifications, and your testing outputs.

Automate the evidence production. Human-authored quarterly reports drift from operational reality and produce the inconsistencies regulators look for. System-generated evidence, such as a register built from the CMDB, the contract system, and the SBOM pipeline, stays current because producing the evidence and keeping the register accurate are the same work.
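
"Immutable once produced" can start as simply as hash-chaining each evidence record to its predecessor, so retroactive edits are detectable. A minimal sketch; a production system would anchor the chain in signed or write-once storage.

```python
import hashlib
import json
from datetime import datetime, timezone

def append_evidence(chain: list[dict], kind: str, payload: dict) -> list[dict]:
    """Append a system-generated evidence record, chained to the previous record's hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    record = {
        "kind": kind,  # e.g. "register_snapshot", "resilience_test_result"
        "payload": payload,
        "produced_at": datetime.now(timezone.utc).isoformat(),
        "prev": prev,
    }
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    chain.append(record)
    return chain
```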

Finally, make the evidence cross-referenceable. A regulator asking about a specific critical function should be able to trace from the register to the underlying contracts, the vendor's SBOM, your resilience test results, and the incident reports involving that vendor. If every question requires a manual forensic exercise, your evidence system isn't operating at the level DORA expects.

How Safeguard.sh Helps

Safeguard.sh integrates directly with your TPRM and contract systems to produce a continuously updated register of information, with SBOM data feeding the software-components side of each vendor relationship. Griffin AI applies reachability analysis across 100 levels of dependency depth so you can distinguish critical function impact from nominal exposure when a third-party vulnerability surfaces, sharpening your DORA impact classifications. Eagle monitors CTPP designations, subcontractor events, and vendor incidents continuously and correlates them with your critical function mapping, producing regulator-ready incident notifications inside the mandated timelines. Container self-healing orchestrates the technical side of exit plans and resilience testing, rebuilding dependencies against alternate providers without manual runbook execution. The outcome is a third-party ICT risk posture that produces its own evidence as a byproduct of operations: the continuous, machine-verifiable evidence DORA examinations are designed to find.
