HEALTHCARE

Clinical AI inside the patient record, not next to it.

Prior authorisation drafting. Medical coding from clinical notes. Patient intake with a clinical hand-off the moment it matters. Drug interaction lookup for pharmacists. Every interaction PHI-redacted at the gateway, every model call audited.

Healthcare's AI moment is operational before it is clinical. The administrative load — prior authorisation, medical coding, claim resolution, patient navigation — consumes 30%+ of provider OPEX in the US system and a significant fraction elsewhere. Generative AI is well-suited to compress that load. The clinical use cases come next, with sharper safety expectations.

Regulators have moved fast. The EU AI Act classifies clinical decision support and triage AI as high-risk under Annex III. The MDR and FDA SaMD frameworks bring software-as-a-medical-device into a formal regulatory pathway when AI affects clinical decisions. HIPAA in the US and GDPR + national health data laws in the EU constrain how PHI flows through AI tooling. The Italian DM 77/2022 reshaped community care with implications for AI-assisted patient navigation.

Kosmoy is the operating layer between the provider/payer's apps and the AI they call. Deployment is single-tenant inside the institution's own cloud or data centre. PHI never reaches the model provider; the AI Gateway redacts it before egress. Every clinical AI agent is registered with its EU AI Act risk class and its FDA / MDR pathway as applicable.


What this industry runs into.

PHI exposure

Clinical notes, medical record fragments, patient identifiers — these flow through almost every clinical AI workflow. A single prompt to a public LLM is a HIPAA / GDPR breach.

Hallucination in a clinical context

An LLM that invents a drug interaction or extrapolates beyond the clinical note can cause direct patient harm. The guardrails have to be tighter than commercial standards require.

EU AI Act high-risk classification

Clinical decision support, triage, medical device software — once any of these incorporate AI, the high-risk regime applies. Lifecycle controls, human oversight, post-market monitoring all become mandatory.

Heterogeneous data — EHR, claims, devices

Clinical AI has to read across the EHR, claims, lab, imaging, device telemetry and the literature. Each store has its own access pattern and consent profile.


Regulatory landscape.

The regulations that shape AI in healthcare — and where each one bites on AI deployment.

EU AI Act · Regulation (EU) 2024/1689 · EU

Clinical decision support and triage AI fall under high-risk classification. Mandatory risk management, data governance, human oversight, transparency, accuracy and post-market monitoring.

HIPAA · Health Insurance Portability and Accountability Act · US

PHI cannot flow to AI vendors without a Business Associate Agreement and minimum-necessary controls. PHI redaction at the gateway is the standard implementation pattern.

GDPR + national health data laws · Italian Codice Privacy Sanità, French DM Santé, etc. · EU + national

Health data is special-category. AI processing requires lawful basis, data minimisation, often consent. Cross-border transfers severely constrained.

MDR · Medical Device Regulation (EU) 2017/745 · EU

Software-as-a-medical-device covers AI that affects clinical decisions. CE marking required; the conformity assessment process applies.

FDA SaMD · Software as a Medical Device — FDA framework · US

Risk-tier classification (Class I–III). Predetermined Change Control Plans for AI/ML SaMD. Lifecycle requirements follow the device from premarket review through post-market monitoring.

Italian DM 77/2022 · Decree on community-based care reorganisation · Italy

Reshaped Italian community healthcare delivery. AI-assisted patient navigation and triage now operate within new structural rules.


Use cases that are actually shipping.

Prior authorisation drafting

Physician's office submits a PA request for a high-cost imaging study. The agent reads the clinical note, identifies the relevant CPT/HCPCS codes, looks up the payer's PA criteria, and drafts the submission with the clinical justification mapped to the criteria. The clinician reviews and signs; the agent never invents a clinical justification.

PA submission cycle compresses from 30–45 minutes to 5–10 minutes per case. PA approval rate rises 10–20% on first submission because the criteria mapping is consistent. Provider burnout on PA work decreases measurably.
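
The discipline in that flow is mechanical: a payer criterion with no supporting note language stays unmapped rather than getting a model-written justification. A minimal Python sketch of that contract, with a deliberately naive find_note_evidence standing in for the real retrieval and model step; the names and shapes here are illustrative, not Kosmoy's API.

```python
from dataclasses import dataclass

@dataclass
class CriterionMapping:
    criterion: str              # payer's PA criterion, verbatim
    note_evidence: str | None   # supporting sentence from the note, or None

@dataclass
class PADraft:
    cpt_codes: list[str]
    mappings: list[CriterionMapping]
    status: str = "pending_clinician_review"   # a clinician signs; never auto-submitted

def find_note_evidence(note: str, criterion: str) -> str | None:
    """Naive stand-in for the retrieval step: return the first note sentence
    sharing enough terms with the criterion, else None. A real pipeline would
    use embedding retrieval plus a model, but the contract is the same:
    no evidence found means no justification drafted."""
    terms = {w.lower().strip(",.") for w in criterion.split() if len(w) > 4}
    for sentence in note.split("."):
        hits = sum(1 for t in terms if t in sentence.lower())
        if terms and hits >= max(1, len(terms) // 2):
            return sentence.strip()
    return None

def draft_pa(note: str, cpt_codes: list[str], payer_criteria: list[str]) -> PADraft:
    # criteria with no supporting note language stay unmapped and are
    # flagged to the reviewing clinician rather than filled in by the model
    mappings = [CriterionMapping(c, find_note_evidence(note, c)) for c in payer_criteria]
    return PADraft(cpt_codes, mappings)
```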

Medical coding from clinical notes

Inpatient discharge note flows through the coding queue. The agent reads the note, identifies primary and secondary diagnoses, comorbidities, procedures and complications, and proposes ICD-10-CM and CPT codes with the specific note language that supports each code. Coder reviews and finalises; the agent never extrapolates beyond the note.

Coding throughput rises 30–60% per coder. DNFB (discharged not final billed) days drop. Audit defensibility on coded claims improves because the supporting note language is captured per code.
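
The "never extrapolates beyond the note" rule can be made mechanical in the same spirit: every proposed code carries a verbatim note span, and any proposal whose span is not literally present in the note is dropped before the coder sees it. A sketch with a hypothetical record shape:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ProposedCode:
    system: str       # "ICD-10-CM" or "CPT"
    code: str         # e.g. "I50.23"
    note_span: str    # the verbatim note language that supports this code

def validate_proposals(note: str, proposals: list[ProposedCode]) -> list[ProposedCode]:
    """Any proposal whose claimed evidence is not literally in the source note
    is rejected. The surviving note spans, captured per code, are what make
    the coded claim audit-defensible later."""
    return [p for p in proposals if p.note_span and p.note_span in note]
```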

Patient intake and navigation chatbot

Patient on the portal: 'I've had chest pain for two days, what should I do?' The chatbot collects symptoms in a structured intake — onset, radiation, severity, associated symptoms, risk factors — and routes accordingly: urgent symptoms to ED guidance with an explicit hand-off, non-urgent ones to scheduling, administrative questions to self-service. Never gives a diagnosis.

Triage hand-off accuracy rises because the structured intake captures the right signals. Patient satisfaction on navigation improves; clinical staff time on intake collection drops.
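
One way to make 'never gives a diagnosis' structural rather than aspirational is to keep the routing decision out of the model entirely: the LLM fills a structured intake through conversation, and plain code picks the route. A sketch under that assumption; the field names, threshold, and red-flag handling are illustrative.

```python
from dataclasses import dataclass
from enum import Enum

class Route(Enum):
    ED_GUIDANCE = "urgent symptoms: ED guidance with explicit clinician hand-off"
    CLINICIAN = "clinical question: hand off to a clinician"
    SCHEDULING = "non-urgent: appointment scheduling"
    SELF_SERVICE = "administrative: self-service"

@dataclass
class Intake:
    chief_complaint: str
    severity_0_10: int
    red_flags: list[str]       # e.g. chest pain with radiation, syncope, dyspnoea
    is_clinical: bool
    is_administrative: bool

def route(intake: Intake) -> Route:
    """Deterministic routing over the structured intake. The model's job is
    to fill the Intake fields; it never picks the route itself, and no
    branch returns a diagnosis."""
    if intake.red_flags or intake.severity_0_10 >= 8:
        return Route.ED_GUIDANCE
    if intake.is_clinical:
        return Route.CLINICIAN
    if intake.is_administrative:
        return Route.SELF_SERVICE
    return Route.SCHEDULING
```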

Clinical note summarisation for handoff

Night-shift to day-shift handoff in an inpatient unit. The agent reads the prior 12 hours of notes, vitals trends, lab results and active orders, and produces a structured handoff summary per patient — what's stable, what's changing, what needs attention. Clinician edits and signs.

Handoff time per patient drops from 5–7 minutes to 2–3, with the structured format catching items that unstructured handoffs frequently miss. Adverse event rate on shift transitions drops measurably.
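
The structured format doing that catching is, in essence, a fixed schema the agent must fill for every patient, so an empty section is visible instead of silently skipped. A minimal sketch with illustrative field names:

```python
from dataclasses import dataclass, field

@dataclass
class HandoffSummary:
    patient_ref: str                    # re-associated with the chart inside the perimeter
    stable: list[str] = field(default_factory=list)
    changing: list[str] = field(default_factory=list)       # vitals / lab trends moving
    needs_attention: list[str] = field(default_factory=list)
    active_orders: list[str] = field(default_factory=list)
    signed_by: str | None = None        # clinician edits, then signs

    def chart_ready(self) -> bool:
        # an unsigned summary never enters the chart or the handoff record
        return self.signed_by is not None
```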

Pharmacist drug interaction Q&A

Pharmacist verifying an order: 'are there interactions to watch for warfarin + amiodarone in an 80-year-old with renal impairment?' The agent retrieves from the institution's drug information resources, summarises the relevant interactions with citations, and returns the clinical recommendations. Pharmacist decides; the agent never substitutes for the pharmacist's judgement.

Verification time on complex polypharmacy cases drops by half. Citation completeness improves because the agent's source list is structured and visible.
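
The citation-completeness gain follows from a simple gate: the agent relays only findings retrieved from the institution's drug information resources, each with its source attached, and empty retrieval means escalation rather than generation. A sketch, with hypothetical names:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CitedFinding:
    statement: str    # e.g. "amiodarone potentiates warfarin; reduce dose, monitor INR"
    source: str       # institutional monograph ID, shown to the pharmacist

def answer(query: str, retrieved: list[CitedFinding]) -> list[CitedFinding]:
    """Citation-gated response: only retrieved, cited findings are relayed.
    No retrieval hits means no answer; the pharmacist's judgement fills that
    gap, never a model-generated guess."""
    if not retrieved:
        raise LookupError(f"No monograph coverage for {query!r}; escalate.")
    return retrieved

def render(findings: list[CitedFinding]) -> str:
    # the structured, visible source list behind the citation-completeness gain
    return "\n".join(f"- {f.statement} [{f.source}]" for f in findings)
```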


Agent governance

Where healthcare agents need extra discipline.

Healthcare agents read the most sensitive data class in the platform's catalogue: PHI. Kosmoy's PHI guardrails redact patient identifiers before any prompt reaches an external model; the redaction is reversible only inside the institution's perimeter so downstream review tools can re-associate context. Every clinical AI agent is registered with its EU AI Act risk class and its MDR / FDA SaMD pathway. The dossier the regulator asks for is generated, not hand-assembled.
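
A minimal sketch of that reversible-token mechanic, not Kosmoy's implementation: identifiers become opaque tokens before egress, and the token-to-value map lives only in a perimeter-local vault. Real PHI detection uses clinical NER rather than the regexes shown here; the point of the sketch is the reversibility.

```python
import re
import uuid

class PHIRedactor:
    """Identifiers are swapped for opaque tokens before any prompt leaves the
    perimeter; the mapping needed to reverse them never leaves it."""

    def __init__(self) -> None:
        self._vault: dict[str, str] = {}   # perimeter-local store, assumed

    def redact(self, text: str, patterns: list[str]) -> str:
        # patterns are group-free regexes over identifier formats
        for pattern in patterns:
            for value in set(re.findall(pattern, text)):
                token = f"[PHI:{uuid.uuid4().hex[:8]}]"
                self._vault[token] = value
                text = text.replace(value, token)
        return text                         # this is what crosses the gateway

    def reassociate(self, text: str) -> str:
        # callable only where the vault lives: downstream review tools inside
        # the perimeter re-attach the original identifiers for context
        for token, value in self._vault.items():
            text = text.replace(token, value)
        return text

# e.g. redactor.redact(note, patterns=[r"\b\d{3}-\d{2}-\d{4}\b"])  # SSN-style IDs
```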

Clinical decision support is where the Action Capsule matters most. An agent that suggests medication doses, triage outcomes or imaging interpretations runs in a contained environment with explicit allowed-action scope. Pre-flight authorisation and post-output validation are built into the pipeline. Human oversight is not a checkbox — it's a structural constraint enforced by the platform.
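
The shape of that containment can be sketched as a wrapper that refuses out-of-scope actions before the model runs and validates output after it; the names below are illustrative, not the Action Capsule's actual API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class CapsuleScope:
    allowed_actions: frozenset[str]     # e.g. {"suggest_dose", "cite_guideline"}
    requires_signoff: bool = True

def run_in_capsule(scope: CapsuleScope, action: str,
                   execute: Callable[[], str],
                   validate: Callable[[str], bool]) -> str:
    # pre-flight authorisation: the action must be in the declared scope
    if action not in scope.allowed_actions:
        raise PermissionError(f"{action!r} is outside this capsule's scope")
    output = execute()
    # post-output validation: nothing unvalidated reaches a clinical surface
    if not validate(output):
        raise ValueError("output failed post-output validation; suppressed")
    # human oversight as a structural constraint: high-risk output is queued
    # for clinician review rather than returned directly
    return f"[PENDING CLINICIAN REVIEW] {output}" if scope.requires_signoff else output
```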


Chatbot use cases

Chatbots, by surface and risk class.

Healthcare chatbots span patient-facing (portal, intake, scheduling), clinician-facing (note search, drug Q&A, formulary lookup), and administrative (claims, eligibility, prior auth). Each carries different PHI exposure and different regulatory class.

Patient portal — administrative

Appointment scheduling, medication refill requests, bill questions, results inquiry. Strict guardrails: never gives a diagnosis, hands off to a clinician for clinical questions.

Patient triage — symptom intake

Structured symptom collection feeding into triage decision support. Always hands off to a clinician for any clinical determination. Logged with the full conversation transcript for chart inclusion.

Clinician copilot — note search

'Show me the patient's last echo finding.' Semantic search over the EHR through MCP Gateway with PHI guardrails. Citation-grounded; never extrapolates beyond the note.

Patient access / financial counselling

Eligibility checks, financial assistance program qualification, payment plan setup. PHI-aware guardrails; never quotes a balance it can't verify.


How Kosmoy fits.

Healthcare is one of the strictest deployment environments Kosmoy supports. Single-tenant, in the institution's own cloud or data centre, often air-gapped. Open-weight models (Llama, Mistral) running on institutional GPUs are common because no PHI flows to a third-party model provider. Where commercial models are used, Azure OpenAI and AWS Bedrock private endpoints with zero data retention are the typical pattern, with the AI Gateway enforcing the egress guardrails.
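
A routing policy in that pattern might look like the following; the structure and names are illustrative, not Kosmoy's configuration format.

```python
# Open-weight models on institutional GPUs by default; zero-retention
# commercial endpoints only behind mandatory redaction at egress.
GATEWAY_POLICY = {
    "local": {
        "models": ["llama-3-70b", "mistral-large"],   # institutional GPUs
        "egress": False,                              # prompts never leave
    },
    "commercial": {
        "endpoints": ["azure-openai-private", "bedrock-private"],
        "zero_data_retention": True,                  # contractual requirement
        "phi_redaction": "mandatory",                 # enforced before egress
    },
}

LOCAL_ONLY_TASKS = {"note_summarisation", "structured_extraction", "common_qa"}

def select_route(task: str) -> str:
    # workloads local models handle well never cross the perimeter at all;
    # anything routed commercially passes the redaction guardrail first
    return "local" if task in LOCAL_ONLY_TASKS else "commercial"
```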

The AI Inventory carries the EU AI Act and MDR / FDA SaMD dossiers across every clinical AI in the institution's portfolio. The Insights Dashboard captures post-market monitoring metrics — drift, accuracy, escalation patterns. Agents are registered with full lineage from build through deployment through retirement.
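
The register behind that is roughly one record per agent, carrying its regulatory classification, lifecycle state, and the monitoring metrics the dashboard feeds; field names here are illustrative.

```python
from dataclasses import dataclass
from enum import Enum

class Lifecycle(Enum):
    BUILD = "build"
    DEPLOYED = "deployed"
    RETIRED = "retired"

@dataclass
class InventoryRecord:
    agent_id: str
    eu_ai_act_class: str            # e.g. "high-risk (Annex III)"
    samd_pathway: str | None        # FDA SaMD class or MDR route, if applicable
    lifecycle: Lifecycle
    monitoring: dict[str, float]    # post-market metrics: drift, accuracy,
                                    # escalation rate, fed by the dashboard
```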


Module questions, answered straight.

How is PHI kept out of LLM provider context?

PHI guardrails redact identifiers at the AI Gateway before any prompt egress. Where local models suffice (note summarisation, structured extraction, common Q&A), prompts never leave the perimeter. Where commercial models are used, redaction is mandatory and zero-retention endpoints are required.

Can Kosmoy support our SaMD compliance pathway?

Kosmoy itself is not a medical device — it's the management platform around AI systems, some of which are. The AI Inventory captures the SaMD class, conformity status and predetermined change control plan for each clinical agent. Audit pull for FDA / Notified Body inspections is straightforward.

Does the patient triage chatbot give medical advice?

No. The chatbot collects structured symptom data and routes — to ED, to a clinician, to scheduling, to self-service. It never offers a diagnosis or treatment recommendation. Hand-off to a clinical user is the default for any clinical determination.

Can we run on-prem only?

Yes — and most providers do for their core clinical AI. Kosmoy deploys on-prem or in private cloud with no outbound dependency. Open-weight models on institutional GPUs cover most clinical use cases without ever calling out.

Run clinical AI inside the patient record's perimeter.

See how Kosmoy redacts PHI at the gateway, contains clinical agents, and produces the EU AI Act / MDR / SaMD dossier as the system runs.