Large European enterprises in finance are entering 2025 with two realities: a mandate to strengthen their operational resilience (under the Digital Operational Resilience Act, or DORA) and a surge of interest in generative AI to drive innovation. DORA is now in effect (as of January 17, 2025) and applies broadly to banks, insurers, investment firms, payment providers, and many other financial entities — plus their critical IT vendors. In plain terms, DORA tells financial institutions: get your digital house in order, and make sure a tech glitch or cyber incident can't crash the system. This includes oversight of any AI systems that institutions deploy. As one commentary quipped, DORA basically warns that your "fancy AI systems better not crash the whole bank". In this post, we break down what DORA requires, what it means for companies deploying generative AI, and how Kosmoy can help enterprises meet DORA's demands through robust AI governance, monitoring, and control.
What is DORA and Who Must Comply?
DORA (Digital Operational Resilience Act) is a European Union regulation designed to ensure that the financial sector — and its ICT (Information and Communication Technology) providers — can withstand digital disruptions. It was adopted in late 2022 and became fully applicable on January 17, 2025, bringing a harmonized EU-wide framework for digital resilience. In scope are over 20 types of financial institutions (from banks and payment firms to insurers and asset managers), as well as any critical ICT service providers they rely on. Notably, third-party tech vendors — including cloud platforms, software providers, and potentially AI service providers — fall under DORA if they serve EU financial entities, even if those vendors are outside the EU. In short, if you're a large European financial enterprise (or a vendor supporting one), DORA likely applies to you.
Key requirements of DORA can be summarized across five pillars:
ICT Risk Management: Firms must implement an internal governance and control framework to manage ICT risks (including AI risks). This means identifying threats, protecting systems, detecting incidents, responding, and recovering — essentially a full lifecycle for digital risk. Boards and senior management are accountable for setting risk tolerance and ensuring policies exist to control tech usage.
Incident Reporting: Significant ICT incidents (like cyberattacks or major outages) must be reported to regulators within tight deadlines. For example, a major incident might require an initial report within 4 hours of classification, updates after 24–72 hours, and a final report within one month. DORA standardizes this process so authorities get timely, detailed notices of disruptions.
Digital Operational Resilience Testing: Financial institutions need to regularly test their systems — including worst-case scenario simulations — to ensure they can withstand and recover from disruptions. This isn't a one-off checkbox; it's continuous testing of backup plans, cybersecurity, failover mechanisms, and yes, the resilience of any AI systems in use.
ICT Third-Party Risk Management: Firms must rigorously manage risks from external tech providers. This includes vetting providers, contractual requirements for security and continuity, ongoing monitoring, and having exit/backup plans. If you rely on a third-party AI service or cloud platform, DORA effectively says you're on the hook for their resilience, too. (DORA even empowers regulators to designate certain providers as "critical" and subject them to direct oversight.)
Information Sharing Arrangements: DORA encourages the sharing of threat intelligence among financial firms. Organizations should have mechanisms to exchange information on cyber threats and vulnerabilities within the industry. While this is more about cybersecurity collaboration than AI specifically, any incidents involving AI (e.g. a novel prompt-injection attack) could be part of such information-sharing efforts.
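To make the incident-reporting pillar concrete, the short sketch below computes submission deadlines for a major incident from its classification time. The specific windows (4 hours, 72 hours, one month) follow the illustrative figures above; the binding deadlines depend on the incident class and the applicable regulatory technical standards, so treat these values as placeholders rather than legal guidance.

```python
from datetime import datetime, timedelta

# Illustrative reporting windows (assumed values — verify against the
# final regulatory technical standards before relying on them).
INITIAL_REPORT = timedelta(hours=4)        # after classification as major
INTERMEDIATE_REPORT = timedelta(hours=72)  # follow-up update
FINAL_REPORT = timedelta(days=30)          # final root-cause report

def reporting_deadlines(classified_at: datetime) -> dict:
    """Return the submission deadline for each report stage."""
    return {
        "initial": classified_at + INITIAL_REPORT,
        "intermediate": classified_at + INTERMEDIATE_REPORT,
        "final": classified_at + FINAL_REPORT,
    }

# Example: an incident classified as major on 1 March 2025, 09:00.
deadlines = reporting_deadlines(datetime(2025, 3, 1, 9, 0))
```

A real compliance workflow would attach these deadlines to tickets or alerts so that the reporting clock is tracked automatically rather than by hand.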
Crucially, DORA is not just about compliance paperwork. The EU is aiming to instill a culture of resilience. Regulators want to see that firms treat operational resilience as a board-level priority and have concrete measures in place, not just policies on paper. The upside is that by fortifying digital operations, firms not only avoid regulatory penalties but also reduce the real risk of devastating outages or breaches.
DORA Meets Generative AI: Implications for AI Deployments
How does all this translate for organizations deploying generative AI solutions? Generative AI (like large language model chatbots, content generators, code assistants, etc.) introduces exciting capabilities — but also new operational risks and governance challenges. Under DORA's lens, any AI system used in a critical business process must be as resilient and well-controlled as any other important IT system. Here are the key implications:
Governance & Oversight: DORA expects firms to have strong control over all ICT usage. Generative AI cannot be an unchecked experiment running on the side. Enterprises need clear AI usage policies, approval processes, and oversight mechanisms. For instance, if different teams are building chatbots, a central function should govern how these bots are developed and what they're allowed to do. AI should be part of the ICT risk framework — with the board and risk managers informed about AI initiatives and their potential impact.
Operational Resilience of AI Systems: Generative AI applications must be reliable and robust. If, say, a customer-facing chatbot or an AI-driven trading assistant goes down or malfunctions, it could disrupt services. Under DORA, firms should design AI solutions with redundancy and failure management. For example, if using an external AI API, have fallback models or a contingency plan if that service is unavailable.
Continuous Monitoring & Incident Detection: Generative AI systems can fail in complex ways — e.g. producing inappropriate or biased content, or being manipulated by malicious inputs. DORA's emphasis on continuous monitoring of ICT systems means enterprises should be actively watching their AI's behavior and performance.
Third-Party AI Services & Vendor Risk: Many generative AI deployments rely on external models or APIs (e.g. using OpenAI, Azure AI, etc.). From DORA's perspective, these are ICT third-party services. Firms must evaluate and mitigate the risks of relying on them.
Reporting and Audit Trails: If an AI-related incident occurs (say, a data leak via a generative model or a system outage caused by an AI integration), it will need to be reported under DORA's scheme. That requires having detailed logs and records of AI system events.
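The failover idea in the "Operational Resilience of AI Systems" point above can be sketched as a simple ordered-fallback wrapper: try the primary model provider, and if it fails, fall through to a backup while keeping an error trail for incident reports. The provider names and call signature here are hypothetical — this is not Kosmoy's or any vendor's actual API, just a minimal sketch of the pattern.

```python
# Minimal sketch of ordered fallback across LLM providers.
# `providers` is a list of (name, callable) pairs; each callable either
# returns a completion string or raises (timeout, outage, rate limit).

def complete_with_fallback(prompt, providers):
    errors = []
    for name, call in providers:
        try:
            return name, call(prompt)
        except Exception as exc:        # a real gateway would narrow this
            errors.append((name, exc))  # keep a trail for incident reports
    raise RuntimeError(f"all providers failed: {errors}")

# Hypothetical providers: the primary is down, the backup answers.
def primary(prompt):
    raise TimeoutError("primary API unreachable")

def backup(prompt):
    return f"echo: {prompt}"

used, answer = complete_with_fallback(
    "hello", [("primary", primary), ("backup", backup)]
)
```

In practice the backup might be a smaller self-hosted model: degraded quality, but continuity of service — which is exactly the trade-off DORA's resilience testing is meant to surface in advance.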
It's worth noting that DORA and the EU AI Act overlap in interesting ways. The EU AI Act classifies AI systems by risk and imposes requirements like data governance, transparency, and human oversight for "high-risk" AI. For a financial firm using AI, you can't treat these regulations in isolation. Practically, this means a firm's AI governance framework should address operational resilience (DORA) and ethical/quality compliance (AI Act) together.
How Kosmoy Helps Ensure DORA-Compliant AI Adoption
Kosmoy is an enterprise GenAI platform explicitly designed to accelerate AI adoption while maintaining robust governance and control. For organizations facing DORA requirements, Kosmoy offers a comprehensive toolset to cover each pillar of operational resilience as it relates to AI.
Safety Policies via LLM Gateway & Guardrails: Kosmoy provides an LLM Gateway — a middleware that all AI model requests pass through. Think of it as a checkpoint where the company's rules are enforced. Through the gateway, you can specify which AI models are allowed and inject guardrails that automatically filter or block unsafe content. Kosmoy even has an EU AI Act guardrail: a small fine-tuned model that flags if a request would violate the AI Act.
Continuous Monitoring and Audit Trails: Kosmoy logs every interaction passing through the AI gateway — creating a conversation log that compliance officers can review if needed. The Insights Dashboard provides real-time monitoring of AI usage and performance. Key features include LLM cost tracking, user feedback tracking, and automatic alerts for problematic outputs.
Fine-Grained Access Control (RBAC) and Traceability: Kosmoy supports role-based access control (RBAC) for AI usage and administration. This means you can define, for example, that only the data science team can create new AI agents, or that a customer service bot can only be accessed by support staff. Every query and action is tied to a user identity in the logs, providing traceability.
No-Code AI Development and RAG-in-a-Box — with Guardrails Built-In: Kosmoy's platform includes a no-code Studio and a "RAG-in-a-Box" toolkit for Retrieval-Augmented Generation, which allow teams to build AI solutions quickly and in a standardized way. Teams don't need to reinvent the wheel; they can assemble AI applications through Kosmoy's studio with guardrails, logging, and access control inherently applied.
On-Premises and Multi-Cloud Deployment for Sovereignty & Vendor Risk Mitigation: Kosmoy offers flexible deployment options — including on-premises and multi-cloud installations. With Kosmoy, the entire GenAI platform can be deployed within the enterprise's own environment or private cloud, meaning data and AI workloads stay under your control. Kosmoy also supports multi-LLM routing through its LLM Router, allowing you to distribute AI calls across providers or swap models if one fails.
Built-in Compliance Filters: Kosmoy includes an EU AI Act filter that classifies AI use cases in real time and prevents uses that would breach the AI Act's Article 5 (prohibited practices).
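To make the gateway pattern described above concrete, here is a minimal, self-contained sketch of the control points an LLM gateway enforces before any model call: a model allow-list, a role check (RBAC), a content guardrail, and an audit log of every decision. All names, rules, and data structures are illustrative — Kosmoy's actual APIs are not shown here.

```python
# Minimal sketch of an LLM gateway enforcing policy before a model call.
ALLOWED_MODELS = {"gpt-4o", "local-llama"}   # approved models only
BLOCKED_TERMS = {"iban", "password"}         # toy content guardrail
ROLE_PERMISSIONS = {                         # which apps each role may use
    "support": {"faq-bot"},
    "data-science": {"faq-bot", "builder"},
}

audit_log = []  # every decision is recorded for later compliance review

def gateway(user, role, app, model, prompt):
    """Check a request against policy; return the decision and log it."""
    decision = "allowed"
    if model not in ALLOWED_MODELS:
        decision = "blocked: model not approved"
    elif app not in ROLE_PERMISSIONS.get(role, set()):
        decision = "blocked: role lacks access"
    elif any(term in prompt.lower() for term in BLOCKED_TERMS):
        decision = "blocked: guardrail hit"
    audit_log.append(
        {"user": user, "app": app, "model": model, "decision": decision}
    )
    return decision

ok = gateway("alice", "support", "faq-bot", "gpt-4o", "What is our leave policy?")
blocked = gateway("bob", "support", "builder", "gpt-4o", "hello")
```

The point of the pattern is that policy lives in one place: applications call the gateway, not the model providers directly, so every request is checked and logged regardless of which team built the application.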
Real-World Use Cases: Compliant AI in Action
DORA-Compliant AI for a BPO Provider: Consider a Business Process Outsourcing (BPO) company that handles back-office operations for European banks. With approximately 4,000 employees and EUR 500M in revenue, this BPO needed to deploy an AI assistant for employee productivity, but had to ensure DORA compliance from day one. They chose Kosmoy for an on-premise GenAI solution. Using Kosmoy's AI Gateway and Guardrails, they configured the assistant so that it would not output any sensitive client data. By proactively embedding DORA's principles via Kosmoy, the BPO not only complied with the law but also gained a competitive edge.
Safe AI Innovation at a Central Bank: Even central banks are exploring generative AI to enhance their work. One national central bank with approximately 5,000 employees piloted Kosmoy to build internal AI assistants for staff engagement and to provide a public-facing chatbot for recruiting queries. With Kosmoy Studio, they quickly developed a chatbot that could answer employees' HR questions, while the guardrails ensured the bot would not stray into disallowed territory. The central bank used the multi-LLM routing to pair an open-source model (running locally for privacy) with a cloud model for more complex queries — achieving both sovereignty and performance.
Bringing It All Together
Achieving DORA compliance while rolling out generative AI may sound complex, but with the right approach it becomes an opportunity to strengthen your digital strategy. Large European enterprises don't have to choose between embracing AI innovation and meeting stringent regulations like DORA — they can do both. DORA is ultimately about ensuring stability and trust in the financial system's digital fabric. Generative AI, when governed properly, can become a reliable part of that fabric. With Kosmoy's gateways, guardrails, monitoring, and flexible deployment, organizations get the peace of mind that their AI is under control, compliant, and resilient.
Kosmoy is here to help you navigate that journey — accelerating GenAI adoption while ensuring that no matter what happens, the lights stay on and the regulators stay happy.
