Only a small percentage of healthcare AI projects ever reach production.
Despite billions invested into healthcare AI over the last several years, most deployments remain trapped in pilot phases due to trust, governance, compliance, and operational reliability concerns. Healthcare environments introduce challenges that most enterprise AI systems were never designed to handle: complex terminology, fragmented legacy infrastructure, evolving clinical guidelines, and near-zero tolerance for hallucinations or workflow failure.
In this episode of AppDevANGLE, I spoke with Lars Maaløe, Co-Founder of Corti, about what it actually takes to operationalize AI safely in healthcare environments and why governed agentic infrastructure may become the defining architectural layer for the next generation of healthcare AI systems.
Our discussion explored deterministic orchestration, agentic safety frameworks, interoperability standards, and why healthcare may ultimately force the AI industry to mature faster than any other vertical.
Healthcare AI Breaks Traditional Large Language Model Assumptions
Healthcare introduces a level of complexity that general-purpose AI systems struggle to handle reliably.
“If I knew going into healthcare how difficult healthcare really is, then I would never have done it,” said Lars Maaløe, Co-Founder of Corti.
Unlike consumer AI use cases, healthcare operates on highly specialized terminology, rapidly evolving clinical guidance, and massive structured classification systems like ICD-10 and ICD-11. The domain also carries strict regulatory, compliance, and patient safety requirements.
Maaløe explained that generic LLM behavior becomes dangerous in these environments because healthcare workflows cannot tolerate probabilistic reasoning without validation.
“They are failing when it comes to these very specific, expert-driven healthcare examples,” he noted.
This creates a critical architectural requirement: healthcare AI systems cannot rely solely on foundation models. They must operate within governed frameworks that continuously validate outputs against trusted clinical knowledge and deterministic rules.
Deterministic Orchestration Becomes the Foundation for Safe AI
One of the central themes of the discussion was that healthcare AI must be grounded, traceable, and verifiable at every step of execution.
Corti approaches this through deterministic orchestration layers that validate agent actions against trusted clinical data sources before they reach downstream systems.
“We make sure that your agentic system… is only harvesting from that data deterministically, such that you are not suddenly risking a lot of hallucinations,” Maaløe explained.
This becomes especially important in high-volume healthcare operations such as:
- Nurse triage workflows
- Revenue cycle management
- Clinical documentation
- Prior authorization systems
- Medication reconciliation
Unlike consumer applications where hallucinations may simply produce incorrect answers, healthcare AI failures can propagate through electronic health record systems, billing workflows, referrals, and patient care decisions.
The result is that healthcare AI requires a fundamentally different infrastructure model where safety mechanisms are embedded directly into the execution layer.
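As a minimal sketch of what such an embedded safety mechanism might look like, the snippet below gates an agent's coding suggestion against a trusted registry before it can reach billing or EHR systems. The registry, field names, and `validate_suggestion` function are illustrative assumptions, not Corti's actual implementation.

```python
from dataclasses import dataclass

# Hypothetical trusted lookup standing in for a clinical source of truth
# (e.g., an ICD-10 registry). Illustrative only, not a real API.
TRUSTED_ICD10 = {
    "J18.9": "Pneumonia, unspecified organism",
    "I10": "Essential (primary) hypertension",
}

@dataclass
class AgentSuggestion:
    icd10_code: str
    description: str

def validate_suggestion(suggestion: AgentSuggestion) -> bool:
    """Deterministic gate: pass only suggestions whose code AND description
    exactly match the trusted registry; everything else is rejected before
    it can propagate into billing or EHR workflows."""
    expected = TRUSTED_ICD10.get(suggestion.icd10_code)
    return expected is not None and expected == suggestion.description

# A hallucinated description is blocked even when the code itself is valid.
ok = validate_suggestion(AgentSuggestion("I10", "Essential (primary) hypertension"))
bad = validate_suggestion(AgentSuggestion("I10", "Secondary hypertension"))
```

The point of the sketch is the execution-layer placement: the check is an exact-match rule, not another model call, so its behavior is fully deterministic and auditable.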
Agentic Failures Compound Across Healthcare Systems
As healthcare organizations move toward multi-agent AI systems, the operational risks increase significantly.
“If an error is happening in a long workflow… those errors compound,” Maaløe said.
This introduces a challenge many enterprise AI systems are still struggling to solve: preventing cascading hallucinations and maintaining grounded reasoning across interconnected workflows.
Corti addresses this through recursive fact extraction and graph-based validation systems designed to continuously compare generated outputs against source-of-truth clinical data.
“We are able to put them into a graph setup, such that we can ensure that we have validated those facts against the ground truth,” Maaløe explained.
Rather than allowing AI systems to generate probabilistic assumptions unchecked, this architecture continuously validates whether reasoning remains grounded in verified patient and clinical information.
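One way to picture this grounding check: reduce generated statements to fact triples and test each against a ground-truth record, surfacing anything unsupported. This is a simplified sketch under that assumption, not Corti's recursive extraction or graph pipeline.

```python
# Minimal sketch of graph-style fact validation, assuming generated output
# can be reduced to (subject, relation, object) triples. Illustrative only.
GroundTruth = set[tuple[str, str, str]]

# Verified patient data acting as the source of truth.
patient_record: GroundTruth = {
    ("patient", "allergy", "penicillin"),
    ("patient", "medication", "metformin"),
}

def find_ungrounded(extracted: list[tuple[str, str, str]],
                    ground_truth: GroundTruth) -> list[tuple[str, str, str]]:
    """Return extracted facts NOT present in the record, so a downstream
    step can reject or flag them before they compound across agents."""
    return [fact for fact in extracted if fact not in ground_truth]

generated = [
    ("patient", "medication", "metformin"),  # grounded in the record
    ("patient", "medication", "warfarin"),   # hallucinated
]
ungrounded = find_ungrounded(generated, patient_record)
```

Because each triple is checked independently, one hallucinated fact can be caught and quarantined without discarding the grounded remainder of the output.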
This shift is important because healthcare AI is increasingly moving from advisory tooling into operational execution.
Open Standards and Interoperability Become Strategic Requirements
Healthcare’s long history of vendor lock-in has created significant resistance to closed AI ecosystems.
Maaløe emphasized that healthcare AI infrastructure must be modular and interoperable if organizations are going to trust it long term.
“We need to build on the open standards first and foremost,” he said.
This includes support for:
- MCP-style interoperability frameworks
- Agent-to-agent communication standards
- Modular AI orchestration systems
- Open integration layers across healthcare applications
The goal is to avoid repeating the same lock-in problems that healthcare organizations experienced with legacy EHR systems.
Healthcare providers increasingly want the flexibility to replace, extend, or integrate AI systems without rebuilding their entire operational stack. This is particularly important as AI innovation cycles accelerate faster than traditional healthcare procurement and modernization timelines.
2026 May Become the First Real Year of Healthcare AI Production
One of the more important takeaways from the discussion is that the market may finally be shifting from experimentation into production deployment.
“If I was a healthcare provider system, I would never trust any agentic system that is not ours right now,” Maaløe stated bluntly.
The statement is stark, but it reflects a broader market issue: many AI frameworks still lack the deterministic execution, traceability, and memory architectures that regulated environments require.
According to Maaløe, several core problems continue to limit production adoption:
- Weak contextual memory handling
- Poor traceability of agent decisions
- Limited deterministic routing between agents
- Inability to safely process large patient records
- Lack of strict typing and structured data governance
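The last item, strict typing and structured data governance, can be sketched concretely: force each agent's output through a typed schema before the next agent consumes it, so free-text hallucinations fail fast instead of flowing downstream. The schema and field names below are hypothetical, not drawn from any real framework.

```python
from dataclasses import dataclass

# Closed vocabulary for a hypothetical triage agent's acuity field.
ALLOWED_ACUITY = {"emergent", "urgent", "routine"}

@dataclass(frozen=True)
class TriageResult:
    acuity: str
    disposition: str

    def __post_init__(self):
        # Reject out-of-vocabulary values at the schema boundary.
        if self.acuity not in ALLOWED_ACUITY:
            raise ValueError(f"invalid acuity: {self.acuity!r}")

def parse_agent_output(raw: dict) -> TriageResult:
    """Fail fast on missing keys or invalid values instead of letting an
    unvalidated payload reach the next agent in the workflow."""
    return TriageResult(acuity=raw["acuity"], disposition=raw["disposition"])

result = parse_agent_output({"acuity": "urgent", "disposition": "nurse_callback"})
```

In a production system this role is often played by schema-validation libraries, but the governance principle is the same: typed contracts between agents make failures local and observable rather than compounding.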
As a result, 2025 largely became a year of experimentation and infrastructure building. The expectation is that 2026 will be the first real scaling phase for governed healthcare AI systems.
AI Is Expanding Beyond Documentation Into Operational Labor
The conversation also highlighted how healthcare AI is evolving from productivity tooling into operational workforce augmentation.
Administrative healthcare inefficiencies remain one of the largest opportunities in the industry, with billions tied to manual workflows, reimbursement processing, and documentation burdens.
Maaløe described how AI agents are increasingly being deployed across:
- Patient intake and triage
- Clinician-to-clinician communication
- Revenue cycle workflows
- Insurance and payer operations
- Clinical documentation integrity (CDI)
- Medication analysis and reconciliation
This reflects a broader shift occurring across enterprise AI markets: agents are increasingly being evaluated as operational labor systems rather than simple software tools.
The implications are significant because healthcare labor shortages, operational costs, and burnout continue to intensify globally.
Analyst Take
Healthcare may become the industry that forces AI infrastructure to mature. Most enterprise AI systems today still tolerate ambiguity, probabilistic behavior, and inconsistent execution. Healthcare cannot. The operational, regulatory, and patient safety stakes are simply too high.
That pressure is accelerating the emergence of a new AI architecture model centered around:
- Deterministic orchestration
- Continuous validation against trusted data
- Traceable agent workflows
- Structured memory systems
- Interoperable governance frameworks
The larger implication extends beyond healthcare itself. As enterprises across industries move toward agentic workflows, many of the same requirements—trust, auditability, grounded reasoning, and operational control—will become universal expectations. Healthcare is simply encountering those realities first.
The organizations that solve governed AI execution in healthcare may ultimately define the infrastructure patterns the rest of the enterprise market adopts over the next several years.

