
Deploying Generative AI Under DORA: Ensuring Compliance and Resilience with Kosmoy

  • Writer: Umberto Malesci
  • Sep 22
  • 17 min read

Large European enterprises in finance are entering 2025 with two realities: a mandate to strengthen their operational resilience (under the Digital Operational Resilience Act, or DORA) and a surge of interest in generative AI to drive innovation. DORA is now in effect (as of January 17, 2025) and applies broadly to banks, insurers, investment firms, payment providers, and many other financial entities – plus their critical IT vendors. In plain terms, DORA tells financial institutions: get your digital house in order, and make sure a tech glitch or cyber incident can’t crash the system. This includes oversight of any AI systems that institutions deploy. As one commentary quipped, DORA basically warns that your “fancy AI systems better not crash the whole … bank”. In this post, we break down what DORA requires, what it means for companies deploying Generative AI, and how Kosmoy can help enterprises meet DORA’s demands through robust AI governance, monitoring, and control.


What is DORA and Who Must Comply?

DORA (Digital Operational Resilience Act) is a European Union regulation designed to ensure that the financial sector – and its ICT (Information and Communication Technology) providers – can withstand digital disruptions. It was passed in late 2022 and became fully applicable in January 2025, bringing a harmonized EU-wide framework for digital resilience. In scope are over 20 types of financial institutions (from banks and payment firms to insurers and asset managers), as well as any critical ICT service providers they rely on. Notably, third-party tech vendors – including cloud platforms, software providers, and potentially AI service providers – fall under DORA if they serve EU financial entities, even if those vendors are outside the EU. In short, if you’re a large European financial enterprise (or a vendor supporting one), DORA likely applies to you.


Key requirements of DORA can be summarized across five pillars:

ICT Risk Management: Firms must implement an internal governance and control framework to manage ICT risks (including AI risks). This means identifying threats, protecting systems, detecting incidents, responding, and recovering – essentially a full lifecycle for digital risk. Boards and senior management are accountable for setting risk tolerance and ensuring policies exist to control tech usage.


Incident Reporting: Significant ICT incidents (like cyberattacks or major outages) must be reported to regulators within tight deadlines. For example, a major incident might require an initial report within 4 hours of classification, updates after 24–72 hours, and a final report within one month. DORA standardizes this process so authorities get timely, detailed notices of disruptions. (A sketch of this reporting timeline follows this list.)


Digital Operational Resilience Testing: Financial institutions need to regularly test their systems – including worst-case scenario simulations – to ensure they can withstand and recover from disruptions. This isn’t a one-off checkbox; it’s continuous testing of backup plans, cybersecurity, failover mechanisms, and yes, the resilience of any AI systems in use.


ICT Third-Party Risk Management: Firms must rigorously manage risks from external tech providers. This includes vetting providers, contractual requirements for security and continuity, ongoing monitoring, and having exit/backup plans. If you rely on a third-party AI service or cloud platform, DORA effectively says you’re on the hook for their resilience, too. (DORA even empowers regulators to designate certain providers as “critical” and subject them to direct oversight.)


Information Sharing Arrangements: DORA encourages the sharing of threat intelligence among financial firms. Organizations should have mechanisms to exchange information on cyber threats and vulnerabilities within the industry. While this is more about cybersecurity collaboration than AI specifically, any incidents involving AI (e.g. a novel prompt-injection attack) could be part of such information-sharing efforts.
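
Returning to the incident-reporting pillar: to make the cadence concrete, here is a minimal Python sketch of the reporting clock a firm might track once an incident is classified as major. The deadlines below simply mirror the example above; the binding values come from DORA’s regulatory technical standards.

```python
from datetime import datetime, timedelta

def dora_reporting_deadlines(classified_at: datetime) -> dict[str, datetime]:
    """Illustrative deadlines mirroring the example above; check the
    applicable regulatory technical standards for the binding values."""
    return {
        "initial_report": classified_at + timedelta(hours=4),
        "intermediate_report": classified_at + timedelta(hours=72),
        "final_report": classified_at + timedelta(days=30),  # ~one month
    }

# e.g. an incident classified as major at 09:30 on 1 March 2025
for name, due in dora_reporting_deadlines(datetime(2025, 3, 1, 9, 30)).items():
    print(f"{name}: due by {due:%Y-%m-%d %H:%M}")
```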

Crucially, DORA is not just about compliance paperwork. The EU is aiming to instill a culture of resilience. As FTI Consulting notes, it’s a “comprehensive push” covering everything from vendor contracts to cybersecurity defenses. Regulators want to see that firms treat operational resilience as a board-level priority and have concrete measures in place, not just policies on paper. The upside is that by fortifying digital operations, firms not only avoid regulatory penalties but also reduce the real risk of devastating outages or breaches.


DORA Meets Generative AI: Implications for AI Deployments

How does all this translate for organizations deploying generative AI solutions? Generative AI (like large language model chatbots, content generators, code assistants, etc.) introduces exciting capabilities – but also new operational risks and governance challenges. Under DORA’s lens, any AI system used in a critical business process must be as resilient and well-controlled as any other important IT system. Here are the key implications:


Governance & Oversight: DORA expects firms to have strong control over all ICT usage. Generative AI cannot be an unchecked experiment running on the side. Enterprises need clear AI usage policies, approval processes, and oversight mechanisms. For instance, if different teams are building chatbots, a central function should govern how these bots are developed and what they’re allowed to do. AI should be part of the ICT risk framework – with the board and risk managers informed about AI initiatives and their potential impact. As Kosmoy’s CEO puts it, “AI governance means setting rules and enforcing them” across the organization. In practice, this means defining who can access AI models, what data can be used, and ensuring content standards (no leaking sensitive data or generating harmful content).


Operational Resilience of AI Systems: Generative AI applications must be reliable and robust. If, say, a customer-facing chatbot or an AI-driven trading assistant goes down or malfunctions, it could disrupt services. Under DORA, firms should design AI solutions with redundancy and failure management. For example, if using an external AI API, have fallback models or a contingency plan if that service is unavailable. Regular resilience testing should include AI scenarios: What if the AI gives incorrect outputs en masse? What if it faces a coordinated attack (like prompt injections to cause malfunctions)? Firms need to ensure that an AI glitch won’t cascade into a broader incident. DORA effectively makes this a requirement: “Your AI better not crash the bank” is more than a slogan – it means ensuring that if an AI system fails, there are controls or human overrides to prevent a disaster.
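
As a concrete illustration of the fallback pattern described above, here is a minimal sketch that tries a primary model endpoint and fails over to a backup. The provider functions are hypothetical stand-ins for whatever LLM clients a firm actually uses; a production version would add timeouts, retries, circuit breakers, and alerting.

```python
import logging

logger = logging.getLogger("genai.resilience")

def call_primary(prompt: str) -> str:
    # Placeholder for the primary LLM provider's client call.
    raise TimeoutError("primary provider unavailable")

def call_backup(prompt: str) -> str:
    # Placeholder for a secondary, independently hosted model.
    return f"[backup model] response to: {prompt}"

PROVIDERS = [("primary", call_primary), ("backup", call_backup)]

def resilient_completion(prompt: str) -> str:
    """Try each provider in order; surface a controlled failure if all are down."""
    last_error = None
    for name, call in PROVIDERS:
        try:
            return call(prompt)
        except Exception as exc:  # timeout, outage, rate limit, ...
            logger.warning("provider %s failed: %s", name, exc)
            last_error = exc
    # All providers failed: degrade gracefully (e.g. hand off to a human)
    # instead of letting the outage cascade into the business process.
    raise RuntimeError("all AI providers unavailable") from last_error

print(resilient_completion("Summarize today's settlement exceptions."))
```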


Continuous Monitoring & Incident Detection: Generative AI systems can fail in complex ways – e.g. producing inappropriate or biased content, or being manipulated by malicious inputs. DORA’s emphasis on continuous monitoring of ICT systems means enterprises should be actively watching their AI’s behavior and performance. This could include monitoring AI outputs for red flags (toxic language, hallucinations, policy violations) and tracking usage patterns for anomalies. Early detection is key: if something goes wrong with the AI, you may only have hours to report it to regulators if it qualifies as a major incident. Imagine an AI advisor that starts giving financially unsound recommendations due to a model error – without proper monitoring, that could go unnoticed until damage is done. Under DORA, firms need to catch such issues fast and have an incident response plan. This ties directly into AI governance: logging every AI interaction and outcome is not just good practice, it may be crucial for audit and incident analysis if regulators come knocking.
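
A minimal sketch of the idea, assuming a simple rule-based screen: every AI response is checked against red-flag patterns before delivery, and a match raises an alert for the incident process. Real deployments would use trained classifiers (toxicity, PII, hallucination checks) and richer telemetry rather than keyword rules.

```python
import re
from datetime import datetime, timezone

# Illustrative red-flag patterns; production systems would rely on
# classifier models rather than keyword matching.
RED_FLAGS = {
    "possible_pii": re.compile(r"\b\d{4}[- ]?\d{4}[- ]?\d{4}[- ]?\d{4}\b"),  # card-like numbers
    "guarantee_language": re.compile(r"\bguaranteed returns?\b", re.IGNORECASE),
}

def screen_output(response: str) -> list[str]:
    """Return the names of any red-flag rules the response triggers."""
    return [name for name, pattern in RED_FLAGS.items() if pattern.search(response)]

def monitor(user: str, prompt: str, response: str) -> None:
    flags = screen_output(response)
    if flags:
        # Hook point for the alerting/incident workflow: the clock for
        # DORA reporting starts once an incident is classified as major.
        print(f"{datetime.now(timezone.utc).isoformat()} ALERT user={user} flags={flags}")

monitor("advisor-42", "Any tips?", "This product offers guaranteed returns.")
```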


Third-Party AI Services & Vendor Risk: Many generative AI deployments rely on external models or APIs (e.g. using OpenAI, Azure AI, etc.). From DORA’s perspective, these are ICT third-party services. Firms must evaluate and mitigate the risks of relying on them. For example, if a bank uses a cloud AI service, DORA expects: due diligence on the provider’s security and reliability, contractual clauses about uptime and incident notification, and possibly even an exit strategy if the provider cannot meet requirements. DORA even stipulates that critical external providers establish an EU presence and comply with oversight. For AI, this means institutions might prefer providers that can deploy in-country or on-premises for sovereignty. It also means no single point of failure – institutions might use multiple AI models/providers to avoid being completely dependent on one third-party. In short, treating AI vendors like any other critical supplier: with rigorous risk assessments, audits, and contingency plans.


Reporting and Audit Trails: If an AI-related incident occurs (say, a data leak via a generative model or a system outage caused by an AI integration), it will need to be reported under DORA’s scheme. That requires having detailed logs and records of AI system events. Also, even absent incidents, regulators may ask firms to demonstrate compliance – for example, proving that an AI used for credit decisions is governed and has human oversight. The upcoming EU AI Act explicitly requires logging of AI system decisions and data for high-risk use cases, and DORA’s spirit is similar: transparency and auditability. Firms should maintain comprehensive logs of AI model inputs/outputs and user interactions. This not only helps with DORA, but also overlapping laws (like the EU AI Act’s record-keeping and transparency obligations for certain AI systems).
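
For illustration, an audit trail can be as simple as an append-only, structured log of every interaction. The schema below is a hypothetical minimum; a real system would also capture the model version, guardrail verdicts, and retention metadata.

```python
import json
from datetime import datetime, timezone

AUDIT_LOG = "ai_audit.jsonl"  # append-only JSON Lines file (illustrative)

def record_interaction(user_id: str, model: str, prompt: str, response: str) -> None:
    """Append one structured, timestamped record per AI interaction."""
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "model": model,
        "prompt": prompt,
        "response": response,
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry, ensure_ascii=False) + "\n")

record_interaction("u-1034", "approved-model-v1", "Draft a KYC summary...", "Here is a draft...")
```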


It’s worth noting that DORA and the EU AI Act overlap in interesting ways. The EU AI Act (its obligations phase in from 2025 through 2026) classifies AI systems by risk and imposes requirements like data governance, transparency, and human oversight for “high-risk” AI. For a financial firm using AI, you can’t treat these regulations in isolation. As a FAIR Institute analysis points out, they “aren’t isolated issues. They’re tangled up, interconnected”. Poor data management could simultaneously breach DORA (if it undermines operational integrity) and the AI Act (if it leads to biased outcomes). Regulators are demanding “transparency, accountability, and demonstrable risk mitigation” for AI – which aligns with DORA’s focus on showing that you have resilience under control. Practically, this means a firm’s AI governance framework should kill two birds with one stone: address operational resilience (DORA) and ethical/quality compliance (AI Act) together. For example, maintaining a robust audit trail of AI decisions satisfies both the AI Act’s logging needs and DORA’s expectation that you can investigate incidents. Ensuring data integrity and security in AI training data speaks to DORA’s concern for system integrity as well as the AI Act’s concern for bias and accuracy.


Bottom line: Generative AI must be deployed thoughtfully in regulated environments. You can absolutely leverage AI for efficiency and innovation (indeed, 60% of European financial firms invested in GenAI in 2023), but you must bake compliance and resilience into your AI projects from day one. That’s where platforms like Kosmoy come in – providing the tools to govern and monitor AI, so you can embrace GenAI without losing control.


How Kosmoy Helps Ensure DORA-Compliant AI Adoption

Kosmoy is an enterprise GenAI platform explicitly designed to accelerate AI adoption while maintaining robust governance and control. For organizations facing DORA requirements, Kosmoy offers a comprehensive toolset to cover each pillar of operational resilience as it relates to AI. In plain language, here’s how Kosmoy can support compliance in each area:


Safety Policies via LLM Gateway & Guardrails: Kosmoy provides an LLM Gateway – a middleware that all AI model requests pass through. Think of it as a checkpoint where the company’s rules are enforced. Through the gateway, you can specify which AI models are allowed (e.g. only certain approved LLM providers) and inject guardrails that automatically filter or block unsafe content. For example, Kosmoy’s built-in guardrails can censor toxic language, personal data (PII), off-topic requests, or prompt injection attacks before the AI’s response is delivered. If a user tries to prompt a chatbot into a disallowed area (say, asking a trading bot for insider tips or a support bot for somebody else’s account info), the guardrail will intercept that. These controls map directly to DORA’s ICT risk management and internal controls – they ensure that AI systems operate within defined safe bounds. Kosmoy even has an EU AI Act guardrail: a small fine-tuned model that flags if a request would violate the AI Act (for instance, asking the AI to do something that counts as prohibited monitoring or social scoring). This guardrail “SLM” (small language model) runs in real time and will outright block requests that drift into banned or high-risk AI usage. By enforcing content safety and usage policies consistently, Kosmoy helps enterprises demonstrate that AI usage is controlled and compliant, not a wild west. (No more relying on just a PDF of AI guidelines that employees may or may not follow – the rules are actively enforced in software.)
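
To show the gateway pattern in the abstract – this is a generic sketch, not Kosmoy’s internal implementation or API – the code below routes every request through an ordered set of checks before any model is called, blocking on the first violation.

```python
from typing import Callable

ALLOWED_MODELS = {"approved-llm-eu"}  # only vetted providers pass the gateway

def block_disallowed_model(request: dict) -> str | None:
    if request["model"] not in ALLOWED_MODELS:
        return f"model '{request['model']}' is not on the approved list"
    return None

def block_account_lookup(request: dict) -> str | None:
    # Toy stand-in for PII / off-topic / prompt-injection guardrails.
    if "account info" in request["prompt"].lower():
        return "requests for account information are not permitted"
    return None

GUARDRAILS: list[Callable[[dict], str | None]] = [
    block_disallowed_model,
    block_account_lookup,
]

def gateway(request: dict) -> dict:
    """Run every guardrail in order; block the request on the first violation."""
    for check in GUARDRAILS:
        reason = check(request)
        if reason:
            return {"allowed": False, "reason": reason}
    return {"allowed": True}  # hand off to the model client from here

print(gateway({"model": "approved-llm-eu", "prompt": "Show me Bob's account info"}))
```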


Continuous Monitoring and Audit Trails: Kosmoy was built with observability in mind. It logs every interaction passing through the AI gateway – creating a conversation log that compliance officers can review if needed. Moreover, Kosmoy’s Insights Dashboard provides real-time monitoring of AI usage and performance. Key features include LLM cost tracking (so you know who is using the AI and how much it’s costing), user feedback tracking, and automatic alerts for problematic outputs. For instance, if an AI response contains a toxic phrase or if someone attempts a prompt injection exploit, Kosmoy can flag or alert on that. This level of monitoring helps satisfy DORA’s expectations for active ICT monitoring and rapid incident detection. In case of an incident, you have detailed records to understand what happened (supporting the forensic analysis and reporting process). And even outside of crises, the logs and metrics serve as an audit trail to prove to regulators (or internal auditors) that your AI systems are behaving and that you’re keeping tabs on them. Kosmoy essentially gives the central AI or compliance team a “single pane of glass” to oversee all AI activity in the company, which is invaluable for both DORA and internal governance.


Fine-Grained Access Control (RBAC) and Traceability: A core aspect of governance is making sure that only the right people can use or influence AI systems, and that you can trace who did what. Kosmoy supports role-based access control (RBAC) for AI usage and administration. This means you can define, for example, that only the data science team can create new AI agents, or that a customer service bot can only be accessed by support staff. You can segment AI access by business unit, geography, or role. This containment aligns with DORA’s emphasis on accountability and containment of risk – if something goes wrong, you know who had access and can quickly limit exposure. Every query and action is tied to a user identity in the logs, providing traceability. If an employee abused an AI system (say by feeding it customer data improperly), you’d have the trail to address it. RBAC also allows enforcing segregation of duties: for instance, one team might develop an AI model, but need approval from another (compliance) before deploying it to production – such workflows can be set up to ensure oversight. By enabling fine-grained controls and usage visibility, Kosmoy helps firms fulfill the DORA principle of “effective internal governance and control frameworks” for digital systems.
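
As a rough sketch of the pattern – the roles and actions here are hypothetical, not Kosmoy’s actual configuration – RBAC boils down to checking every attempted action against a role-to-permission mapping, typically sourced from the enterprise identity provider.

```python
# Hypothetical role-to-permission mapping; real deployments would source
# roles from the enterprise identity provider (e.g. via SSO groups).
ROLE_PERMISSIONS = {
    "data_science": {"create_agent", "query_agent"},
    "support_staff": {"query_agent"},
    "compliance": {"approve_agent", "view_logs"},
}

def is_allowed(role: str, action: str) -> bool:
    """Check a user's role against the action they are attempting."""
    return action in ROLE_PERMISSIONS.get(role, set())

# Segregation of duties: developers build agents, compliance approves them.
assert is_allowed("data_science", "create_agent")
assert not is_allowed("data_science", "approve_agent")
assert is_allowed("compliance", "approve_agent")
print("RBAC checks passed")
```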


No-Code AI Development and RAG-in-a-Box – with Guardrails Built-In: Kosmoy’s platform includes a no-code Studio and a “RAG-in-a-Box” toolkit for Retrieval-Augmented Generation, which allow teams to build AI solutions (like chatbots that use internal knowledge) quickly and in a standardized way. Why is this important for compliance? Because one of the challenges in large enterprises is “shadow AI” – different departments hand-coding their own AI pilots with no oversight or consistency. Kosmoy’s pre-built integrations, connectors, and templates centralize and accelerate development in a controlled environment. Teams don’t need to reinvent the wheel (or go off-piste writing unsecured Python scripts); they can assemble AI applications through Kosmoy’s studio with guardrails, logging, and access control inherently applied. This significantly reduces the risk of errors and speeds up the move from pilot to production – all while keeping the central governance team in the loop. For example, Kosmoy’s RAG-in-a-Box provides ready-made pipelines to ingest corporate data into vector databases and build chatbots that cite the data. This means an enterprise can leverage its knowledge base safely, with Kosmoy handling the heavy lifting of data processing and ensuring the end solution is auditable (e.g., the bot can provide sources for its answers) and version-controlled (Kosmoy supports prompt versioning and QA for AI agents). From a DORA perspective, using such a platform addresses the need for consistent ICT change management and testing – you’re not tossing unvetted AI code into production, you’re using a governed factory. It also fosters resilience: applications built with Kosmoy can be more easily tested and monitored as a group via the platform, and updated quickly if issues are found (e.g., if a new regulation comes out, you update the guardrail in one place rather than patching many disparate apps).
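
To make the RAG idea concrete, here is a deliberately tiny sketch: retrieve the most relevant internal passages for a question and return them alongside their sources, so answers stay auditable. It uses naive keyword overlap in place of the embeddings and vector database a real pipeline would rely on.

```python
# Toy corpus standing in for ingested corporate documents.
DOCUMENTS = [
    {"source": "policy_manual.pdf#p12",
     "text": "Incidents must be escalated to the ICT risk team within one hour."},
    {"source": "hr_handbook.pdf#p3",
     "text": "Annual leave requests are submitted through the HR portal."},
]

def retrieve(question: str, k: int = 1) -> list[dict]:
    """Rank documents by naive keyword overlap; a production pipeline
    would use embeddings and a vector database instead."""
    q_words = set(question.lower().split())
    scored = sorted(
        DOCUMENTS,
        key=lambda d: len(q_words & set(d["text"].lower().split())),
        reverse=True,
    )
    return scored[:k]

question = "How fast must incidents be escalated?"
for doc in retrieve(question):
    # Returning the source with the passage keeps the chatbot's answers citable.
    print(f"{doc['source']}: {doc['text']}")
```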


On-Premises and Multi-Cloud Deployment for Sovereignty & Vendor Risk Mitigation: Kosmoy offers flexible deployment options – including on-premises and multi-cloud installations – which directly aids with sovereignty and third-party risk concerns. Many financial institutions are wary of sending sensitive data to external clouds or SaaS services, both for regulatory reasons (data residency laws, privacy like GDPR) and operational risk. With Kosmoy, the entire GenAI platform can be deployed within the enterprise’s own environment or private cloud, meaning data and AI workloads stay under your control. For example, a large BPO (Business Process Outsourcer) in Europe used Kosmoy to implement a ChatGPT-like productivity tool for employees fully on-premises to meet DORA compliance requirements. By keeping the solution in-house, the BPO contained the vendor risk (no dependency on an outside AI SaaS with unknown uptime) and ensured that customer data processed by the AI never left its secure boundary – aligning with DORA’s expectations for rigorous outsourcing controls. Kosmoy also supports multi-LLM routing (through its LLM Router), allowing you to distribute AI calls across providers or swap models if one fails. This contributes to fault tolerance and load balancing – if one AI service goes down or becomes too expensive, Kosmoy’s router can failover to an alternative automatically.


In DORA terms, this is technology resilience in action: you are not reliant on a single third-party model that could become a single point of failure. Furthermore, multi-cloud support means you can deploy Kosmoy’s components in different regions or providers to meet disaster recovery and geo-redundancy goals. All of this helps fulfill DORA’s mandate to “ensure the continuous operational resilience of ICT systems” even under duress. It also addresses sovereignty concerns by allowing AI deployments that comply with local regulations (for instance, running Kosmoy on servers in the EU to satisfy data localization rules). In sum, Kosmoy’s flexibility mitigates vendor lock-in and gives the enterprise full control over the AI stack, which is exactly what a risk manager or regulator wants to see.
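
A sketch of sensitivity-aware routing in the spirit of the multi-LLM setup described above (the classification rule and model names are hypothetical): keep sensitive prompts on a locally hosted model, and send only non-sensitive queries to an external provider.

```python
SENSITIVE_MARKERS = ("iban", "client", "account", "salary")  # illustrative

def route(prompt: str) -> str:
    """Pick a deployment target based on data sensitivity."""
    if any(marker in prompt.lower() for marker in SENSITIVE_MARKERS):
        return "local-open-source-model"   # data never leaves the premises
    return "external-cloud-model"          # more capable, for public content

print(route("Summarize client account 123 activity"))  # -> local-open-source-model
print(route("Explain the new reporting template"))     # -> external-cloud-model
```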


Built-in Compliance Filters (Addressing EU AI Act and More): DORA is one piece of the puzzle – enterprises also face the upcoming EU AI Act, GDPR, and sectoral rules. Kosmoy acknowledges this by including compliance-oriented features, such as the EU AI Act filter mentioned above. This filter effectively classifies AI use cases in real-time and prevents uses that would breach the AI Act’s Article 5 (prohibited practices). For example, if someone tried to repurpose an approved chatbot to start ranking job candidates by emotion (an AI Act no-no), Kosmoy’s guardrail would block it. Similarly, Kosmoy’s platform can support data governance requirements – ensuring, for example, that training data for AI models can be tracked and is free from certain biases, which dovetails with both DORA’s integrity focus and the AI Act’s quality requirements. By having these compliance features “out-of-the-box,” Kosmoy helps an enterprise create a unified compliance framework for AI. Rather than treating each new law separately, you enforce a broad set of AI guardrails and logging that satisfy multiple regulations. This holistic approach can be more efficient and gives comfort that as regulations evolve (or new ones like NIS2 or national AI guidelines come into play), the organization can adapt quickly using the same platform. Essentially, Kosmoy acts as a central compliance hub for AI – embedding the rules of DORA, the AI Act, and your own policies directly into the AI system’s operation.


Real-World Use Cases: Compliant AI in Action

To illustrate how this all comes together, let’s look at a couple of real-world scenarios:


DORA-Compliant AI for a BPO Provider: Consider a Business Process Outsourcing company that handles back-office operations for European banks. Such a BPO falls under DORA’s umbrella because it’s a critical ICT provider to financial institutions. One BPO with ~4,000 employees and €500M revenue needed to deploy an AI assistant for employee productivity (think an internal ChatGPT for answering questions and drafting documents), but had to ensure DORA compliance from day one. They chose Kosmoy for an on-premises GenAI solution. Using Kosmoy’s AI Gateway and Guardrails, they configured the assistant such that it would not output any sensitive client data or go off-script. RBAC was used to restrict who could access the AI (only certain departments, with fine-grained logging of queries). The platform’s monitoring meant the BPO could provide its banking clients with reports on AI usage and demonstrate that if an incident occurred, it would be caught and reported immediately. By proactively embedding DORA’s principles – continuity plans, transparency, supply chain risk management – via Kosmoy, the BPO not only complied with the law but also gained a competitive edge.

They could confidently say to bank customers and regulators: our AI is under control, here’s the proof. This level of assurance is key, because under DORA a lapse by a vendor can “jeopardise both the BPO and its financial clients”. In this case, Kosmoy’s features helped the BPO align with DORA’s stringent requirements around disaster recovery, security, and oversight, turning compliance into a selling point rather than a hurdle.


Safe AI Innovation at a Central Bank: Even central banks – typically conservative and highly security-conscious – are exploring generative AI to enhance their work. The European Central Bank, for example, identified 40+ use cases for GenAI in banking supervision, from translating natural language queries into code to analyzing regulatory documents. Closer to home, one national central bank with ~5,000 employees piloted Kosmoy to build internal AI assistants for staff engagement and to provide a public-facing chatbot for recruiting queries. Their requirements were a no-code solution (so that even non-technical policy experts could craft AI tools) and multi-LLM support for redundancy. With Kosmoy Studio, they quickly developed a chatbot that could answer employees’ HR questions, while the guardrails ensured the bot would not stray into disallowed territory (important for a central bank to avoid any reputational risk). 

The central bank ran Kosmoy in a secure cloud environment under its control and used the multi-LLM routing to pair an open-source model (running locally for privacy) with a cloud model for more complex queries – achieving both sovereignty and performance. Throughout the project, Kosmoy’s logging and dashboards gave the bank’s IT risk team full visibility. If any issue arose, they had the data to analyze it and could report upward with confidence. This example shows that even in highly regulated, risk-averse settings, generative AI can be deployed responsibly. The key is having the right platform to enforce policies, catch problems, and adapt to regulatory constraints. The result for the central bank was an innovative AI application that improved productivity, delivered under the watchful eye of a governance framework that satisfied their strict compliance checklist.


These use cases highlight a common theme: with Kosmoy, compliance and innovation go hand in hand. Instead of slowing down AI adoption, DORA (and similar regulations) become guidelines that Kosmoy helps you meet in a streamlined way. Enterprises can thus unlock GenAI’s benefits – like faster data analysis, improved customer service, and automated insights – without tripping compliance wires. In fact, many organizations find that by implementing the governance controls DORA calls for, their AI initiatives run more smoothly (fewer surprises, clearer accountability) and earn trust from stakeholders.


Bringing It All Together – Visualizing Compliance and Next Steps

Achieving DORA compliance while rolling out generative AI may sound complex, but with the right approach it becomes an opportunity to strengthen your digital strategy. To recap and assist further, here are a few ways you can visualize and plan your journey:


DORA-to-Kosmoy Mapping Diagram: Picture a diagram mapping the five core DORA requirements to Kosmoy’s features. For example: “ICT Risk Management” maps to Guardrails, RBAC, and Policy Enforcement; “Incident Reporting & Monitoring” maps to Kosmoy’s real-time alerts and logging; “Resilience Testing” maps to multi-LLM routing and QA tools; “Third-Party Risk” maps to on-prem deployment and vendor controls; and “Information Sharing” maps to Kosmoy’s audit reports and dashboards. Such a visual can help your team quickly see how each compliance piece is handled by Kosmoy’s platform.


AI Compliance Checklist: We suggest a simple checklist infographic, e.g. “5 Steps to DORA-Compliant AI with Kosmoy.” This could list steps like: 1) Inventory Your AI Use Cases & Classify Risk, 2) Apply Guardrails to Each Use Case (via Kosmoy Gateway), 3) Set Up Monitoring & Alerts (Kosmoy Insights Dashboard), 4) Establish Access Controls & Logging (RBAC and Audit Trail), 5) Test Resilience Regularly (simulate outages, review incident drill results). A visual checklist provides a practical to-do list for heads of AI or compliance officers to ensure nothing is missed in the governance process.


Downloadable Compliance Brief: For those who want to dive deeper, a “Kosmoy × DORA Compliance” PDF brief can be offered. This resource could consolidate the detailed requirements of DORA and show how Kosmoy meets or exceeds each one, with examples. It would serve as a handy reference or even something you can share with auditors and regulators to demonstrate your compliance toolkit. (Imagine having a document ready during an audit that shows: here are all the DORA controls and evidence from our Kosmoy system – that level of preparedness can make a huge difference.)


In conclusion, large European enterprises don’t have to choose between embracing AI innovation and meeting stringent regulations like DORA – they can do both. DORA is ultimately about ensuring stability and trust in the financial system’s digital fabric. Generative AI, when governed properly, can become a reliable part of that fabric. With Kosmoy’s gateways, guardrails, monitoring, and flexible deployment, organizations get the peace of mind that their AI is under control, compliant, and resilient. The result is responsible AI adoption: you unlock new efficiencies and insights, while staying firmly within the guardrails of safety and law. As we head deeper into 2025, this approach will differentiate the leaders from the laggards in financial services. Kosmoy is here to help you be on the right side of that journey – accelerating GenAI adoption, and ensuring that no matter what happens, the lights stay on and the regulators stay happy.
