Tags: DORA · AI Act · Fintech · Compliance · AWS · Google Cloud · Financial Services · Regulatory

DORA + AI Act: The Double Compliance Hit for Fintech in 2026

19 critical ICT providers now under EU supervision. Your AWS/Google/Microsoft AI services face new regulatory oversight. Here's what it means for your AI strategy.

January 13, 2026 · 10 min read · Maryna Vyshnyvetska



The Perfect Storm

On November 18, 2025, the European Supervisory Authorities dropped a bomb: they designated 19 critical ICT third-party providers under DORA.

The list reads like a who's who of cloud and AI infrastructure:

  • AWS (Amazon Web Services Europe)
  • Google Cloud
  • Microsoft (Azure)
  • IBM
  • Oracle
  • SAP
  • Accenture
  • Capgemini
  • Bloomberg
  • Deutsche Telekom
  • Tata Consultancy Services

For the first time ever, these tech giants face the same regulatory scrutiny as the financial institutions they serve.

Source: European Banking Authority press release


Why This Matters for Your AI Strategy

Here's the scenario: You're a European bank using AWS to run your fraud detection AI. Or a fintech using Google Cloud for credit scoring models.

You're now caught in a regulatory sandwich:

1. DORA (active since January 17, 2025) — governs your ICT risk management, including AI systems

2. AI Act (high-risk requirements from August 2026) — governs your AI specifically for credit scoring and fraud detection

Same AI system. Two regulatory frameworks. Both enforceable.


The DORA Requirements You Can't Ignore

DORA isn't optional fluff. It's law with teeth.

What You Must Do Now:

ICT Risk Management Framework

Your AI systems must be included in comprehensive risk assessments covering identification, protection, detection, response, and recovery.

Third-Party Oversight

If you use external AI providers (and you probably do), you need:

  • Formal due diligence documentation
  • Contractual provisions for DORA compliance
  • Exit strategies if providers fail
  • Continuous monitoring of provider risk

Incident Reporting

AI system failures that disrupt financial services must be reported to regulators. Not "eventually." Within prescribed timelines.
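Those timelines can be operationalized as simple deadline arithmetic. The sketch below chains the reporting windows from the moment an incident is classified as major; the specific durations are assumptions modeled on the ESAs' incident-reporting technical standards, so verify them against the final text before relying on them:

```python
from datetime import datetime, timedelta

# Illustrative DORA major-incident reporting windows. These durations are
# assumptions -- check the final ESA technical standards for the real ones.
REPORTING_WINDOWS = {
    "initial_notification": timedelta(hours=4),   # from classification as major
    "intermediate_report": timedelta(hours=72),   # from the initial notification
    "final_report": timedelta(days=30),           # from the intermediate report
}

def reporting_deadlines(classified_at: datetime) -> dict[str, datetime]:
    """Chain the deadlines: each window starts when the previous report is due."""
    deadlines = {}
    t = classified_at
    for report, window in REPORTING_WINDOWS.items():
        t = t + window
        deadlines[report] = t
    return deadlines

deadlines = reporting_deadlines(datetime(2026, 3, 1, 9, 0))
print(deadlines["initial_notification"])  # 2026-03-01 13:00:00
```

Wiring something like this into your incident tooling means the clock starts automatically at classification, instead of depending on someone remembering the rules mid-crisis.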

Registers of Information

You must maintain detailed registers of ICT third-party arrangements. Financial entities had to submit these to competent authorities by April 30, 2025 — if yours isn't current, you're already behind.
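The register is structured data, so treat it as such rather than as scattered spreadsheets. A minimal sketch of one entry (the field names are illustrative, not the official ESA ITS template):

```python
from dataclasses import dataclass

@dataclass
class IctProviderRecord:
    """One register-of-information entry. Field names are illustrative,
    not the official ESA ITS template."""
    provider: str
    service: str
    supports_critical_function: bool
    contract_ref: str
    exit_strategy_documented: bool

register = [
    IctProviderRecord("AWS", "fraud-detection model hosting", True, "CTR-2024-017", True),
    IctProviderRecord("Bloomberg", "market data feed", True, "CTR-2023-004", False),
]

# Flag entries that need remediation before an audit: providers backing a
# critical function with no documented exit strategy.
gaps = [r.provider for r in register
        if r.supports_critical_function and not r.exit_strategy_documented]
print(gaps)  # ['Bloomberg']
```

Once the register is machine-readable, gap checks like this run in CI instead of in a pre-audit panic.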

The Critical Provider Wrinkle

If your AI provider is on the critical list (AWS, Google, Microsoft...), they're now under direct ESA supervision. This means:

  • Annual risk assessments
  • On-site inspections
  • Periodic penalty payments of up to 1% of average daily worldwide turnover, applied daily for up to six months

You're downstream of this supervision. Your contracts, your risk management, your incident response — all affected.


The AI Act Layer

As if DORA weren't enough, the AI Act adds specific requirements for financial AI:

High-Risk Classification

Credit scoring is explicitly classified as high-risk under Annex III (creditworthiness assessment). Fraud detection is grayer: Annex III carves out AI used purely to detect financial fraud, though such systems remain fully in scope for DORA. For systems that are high-risk, classification triggers:

  • Conformity assessments before deployment
  • Human oversight requirements — "the AI said no" isn't acceptable
  • Technical documentation — how the model works, what data it uses, how it's tested
  • Bias testing — demonstrated fairness across population groups
  • Transparency — explainable decisions for affected individuals

Timeline

  • February 2025: Prohibited AI practices take effect
  • August 2025: Obligations for general-purpose AI models apply
  • August 2026: High-risk AI requirements fully enforced

You have 7 months from the time of this writing to get your house in order.


The Intersection: Where DORA Meets AI Act

The European Banking Authority clarified something important: there are no significant contradictions between DORA and the AI Act. You can build one compliance framework that satisfies both.

Here's how they map:

| Requirement | DORA | AI Act |
|-------------|------|--------|
| Risk Management | ICT risk frameworks | AI-specific risk assessment |
| Third-Party Oversight | Provider monitoring | Value chain governance |
| Documentation | ICT registers | Technical documentation |
| Incident Response | Breach reporting | Serious incident notification |
| Human Oversight | Not explicit | Mandatory for high-risk |
| Explainability | Not explicit | Transparency obligations (cf. GDPR Art. 22) |

The key insight: Build your AI governance from the AI Act requirements (more specific), and it will largely satisfy DORA (more general). Then add DORA-specific elements for ICT resilience.


What This Means Practically

If You Use Cloud AI Providers

Your AWS/Google/Microsoft AI services are now under enhanced regulatory oversight. You need to:

1. Review contracts for DORA-compliant provisions

2. Document your due diligence on provider risk

3. Map dependencies — which critical functions rely on which providers?

4. Plan for substitutability — what happens if a provider fails or exits?
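Steps 3 and 4 start as a simple dependency map. Here's a sketch (function and provider names are hypothetical) that flags the concentration risks DORA's substitutability lens cares about: critical functions that would stop if a single provider failed:

```python
# Map critical functions to the providers they depend on. Names are
# illustrative, not a real institution's dependency graph.
DEPENDENCIES = {
    "fraud_detection": ["AWS"],
    "credit_scoring": ["Google Cloud", "on_prem_fallback"],
    "payment_processing": ["AWS", "Azure"],
}

def concentration_risks(deps: dict[str, list[str]]) -> list[str]:
    """Critical functions with a single point of failure: one provider,
    no fallback. These are the first candidates for an exit strategy."""
    return [fn for fn, providers in deps.items() if len(providers) < 2]

print(concentration_risks(DEPENDENCIES))  # ['fraud_detection']
```

Even a toy map like this makes the substitutability conversation concrete: you argue about specific functions and fallbacks, not abstractions.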

If You Build In-House AI

You're not off the hook. DORA applies to your internal ICT systems too. But you avoid the third-party risk layer.

Consider:

  • On-premise deployment for sensitive AI workloads
  • Private cloud with EU-based providers
  • Hybrid architectures that keep critical data on your infrastructure

If You're Starting Fresh

This is actually the best time to start. You can build compliance-by-design instead of retrofitting.


The Hidden Opportunity

Everyone's complaining about compliance burden. Here's what they're missing:

Compliance is a moat.

The institutions that build compliant AI infrastructure now will have:

  • Proven systems when enforcement begins
  • Operational learning competitors can't buy
  • Regulatory goodwill from proactive engagement

The 2025-2026 window is a gift. Use it.


Practical Next Steps

1. Map Your AI Landscape

Where are you using AI? Which systems touch financial decisions? Which providers are involved?

2. Assess Against Both Frameworks

Run your AI systems through DORA requirements AND AI Act high-risk requirements. Identify gaps.

3. Prioritize Third-Party Risk

If you're dependent on critical providers, start the governance work now. Contracts take time.

4. Build the Documentation

Both frameworks require extensive documentation. Start the registers, risk assessments, and technical documentation before enforcement.

5. Design Human Oversight

The AI Act requires human-in-the-loop for high-risk decisions. Build the workflows, train the people, implement the escalation paths.
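The routing logic behind that workflow can be trivially simple. A hedged sketch (thresholds and field names are illustrative, not a compliance-certified design): adverse or low-confidence model outputs always land with a human reviewer, so the model alone never issues a final rejection.

```python
from dataclasses import dataclass

@dataclass
class CreditDecision:
    applicant_id: str
    approved: bool
    confidence: float  # model score in [0, 1]

def route(decision: CreditDecision, review_threshold: float = 0.85) -> str:
    """Route a model output: any rejection, or any approval below the
    confidence threshold, goes to human review. Only confident approvals
    pass straight through."""
    if not decision.approved or decision.confidence < review_threshold:
        return "human_review"
    return "auto_approve"

print(route(CreditDecision("A-001", approved=False, confidence=0.97)))  # human_review
print(route(CreditDecision("A-002", approved=True, confidence=0.99)))   # auto_approve
```

The hard part isn't this function; it's staffing the review queue and logging every escalation so you can evidence the oversight when auditors ask.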


How Kenaz Can Help

We specialize in compliant AI infrastructure for European financial institutions.

Our services:

  • [AI Data Preparation](/services/ai-data-preparation) — anonymization, pseudonymization, audit trails
  • [GDPR/DORA Compliance](/services/gdpr-hipaa-compliance) — technical controls that survive audits
  • [MCP Integration](/services/mcp-integration) — connect AI to your systems with proper governance
  • [RAG & Knowledge Systems](/services/rag-knowledge-systems) — compliant knowledge bases for financial operations

Our approach:

We build AI that runs on your infrastructure, with your data, under your control. Third-party risk minimized. Audit trail maximized.

Request a Fintech AI Assessment →


The Bottom Line

DORA is active. AI Act is coming. The critical provider list is published. The regulatory environment for financial AI is crystallizing.

You can either:

  • React when enforcement hits and scramble to retrofit compliance
  • Build compliant infrastructure now and have it operational when you need it

The institutions that treated GDPR as an opportunity — not a burden — are now the ones with the strongest data governance. The same will be true for DORA + AI Act.

The clock is ticking. Choose wisely.

Need help with AI integration?

Book a free consultation. We'll help you identify real opportunities — not just shiny tools.

Book a Call