In this episode of Next Frontiers of AI, Scott Hebner sits down with Joel Sherlock, CEO of Causify, to make a forward-looking call: 2026 will be the year causal AI for Decision Intelligence goes mainstream. After the generative AI surge (2022–2024) and the rise of agents and agentic workflows (2025), enterprises are hitting a hard wall: fluent systems can act, but they often cannot justify or defend consequential decisions.
This “wall” is highlighted by a new Carnegie Mellon study on how well LLMs with RAG answered over 1,600 questions using ~15,000 retrieved documents. The results were sobering. Today’s models struggle to deliver accurate, explainable, and trustworthy answers, especially when evidence conflicts. Most concerning, the study found a 74% “faithfulness gap,” where the model’s explanation does not match what actually drove its conclusion.
In this podcast, we’ll discuss how enterprises are investing to address these challenges and why knowledge graphs and causal AI are the key enablers of decision-grade AI. Joel and Scott explore how causal discovery and counterfactual “what-if” testing turn agent outputs into defensible, auditable interventions, and why this is the missing layer for trustworthy AI agents in 2026.
⏱ Chapters
08:43 – The emerging need for decisions to pass audit and compliance policies
12:02 – Why LLMs alone cannot produce reliable decisions
15:04 – The principles of causality and causal AI
21:09 – Carnegie Mellon University study on why LLMs can’t make reliable decisions
26:55 – Technical reasons LLMs are poor at decision-making
30:54 – What are the top enterprise use cases for causal AI
39:18 – How the barriers to causal AI adoption are being addressed
🔗 Learn more
📊 More Research: https://thecuberesearch.com/analysts/scott-hebner/
🔔 Next Frontiers of AI Digest: https://aibizflywheel.substack.com/welcome