Formerly known as Wikibon

Truxt.ai Unlocks DevOps Intelligence with AI Analytics

Truxt.ai is transforming software delivery by bridging the gap between DevOps metrics and business outcomes. Learn how its AI-powered analytics platform drives real-time insights, root-cause diagnosis, and action-ready recommendations for engineering teams.

289 | Breaking Analysis | Reframing Jensen’s Law: Buy More, Make More and AI Factory Economics

We believe the industry’s broad interpretation of Jensen’s Law — i.e. “Jensen’s Law accelerates Moore’s Law” — understates a fundamental economic reality of AI Factories. Jensen Huang’s own language and NVIDIA’s operating model point to a financial law of motion for AI factories: When power is the binding constraint and demand is elastic, perf/watt increases raise monetizable throughput faster than they raise total cost. The result is an extension to existing laws of tech that we’ve come to know.

Specifically, Jensen’s invocation of “buy more to make more” (i.e. revenue up) and “spend more to save more” (i.e. unit cost down) has explicit economic implications that we believe warrant further examination. In addition, there is a corollary in this new regime: under certain conditions, fabric‑driven utilization unlocks are so valuable that high‑speed networking becomes “economically free” (i.e. the utilization gains exceed the amortized fabric cost).
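The economic claim can be sketched in a toy model of our own making (not NVIDIA’s published math; all parameter names and numbers below are illustrative assumptions): with power as the binding constraint, monetizable throughput scales with perf/watt while total cost tracks the fixed power budget, so margin expands faster than cost. The corollary follows the same logic: the fabric is “economically free” when the revenue unlocked by higher utilization exceeds its amortized cost.

```python
def factory_margin(power_mw, perf_per_watt, price_per_unit, cost_per_mw, fixed_cost):
    """Annual margin of a power-constrained AI factory (toy model).

    Throughput is capped by the power budget times perf/watt;
    cost tracks power drawn, not performance delivered.
    """
    throughput = power_mw * 1e6 * perf_per_watt      # monetizable units per year
    revenue = throughput * price_per_unit
    cost = power_mw * cost_per_mw + fixed_cost       # cost is pinned to power, not perf
    return revenue - cost

def network_is_free(utilization_gain, revenue, amortized_fabric_cost):
    """Corollary: the fabric is 'economically free' when the revenue
    unlocked by higher GPU utilization exceeds its amortized cost."""
    return utilization_gain * revenue > amortized_fabric_cost

# Same 100 MW budget, same cost structure, 2x perf/watt:
# revenue doubles while total cost is unchanged, so margin more than doubles.
base = factory_margin(100, 10, 0.5, 1_000_000, 50_000_000)       # 350M margin
upgraded = factory_margin(100, 20, 0.5, 1_000_000, 50_000_000)   # 850M margin

# A 15% utilization unlock on 500M of revenue beats a 60M/yr fabric bill.
fabric_pays = network_is_free(0.15, 500_000_000, 60_000_000)
```

Under these assumed numbers, doubling perf/watt at fixed power takes margin from $350M to $850M — the “buy more, make more” dynamic — and the utilization unlock (75M) exceeds the amortized fabric cost (60M), so the corollary holds.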

In this Breaking Analysis we explain in detail what we’re referring to as a new Jensen’s Law. We’ll explore why this phenomenon is so important, review the math behind it, share concrete examples of where the law applies, examine when the corollary that “the network is free” holds, discuss implications for investors and operators, and note where the law does not apply.

Navigating the AI Talent Crisis: Act Now Before It’s Too Late!

In this episode of Next Frontiers of AI, host Scott Hebner is joined by Justice Erolin, Chief Technology Officer at BairesDev, to confront one of the defining challenges of 2025: the AI talent crisis. The demand for AI-skilled professionals is already outpacing supply by a factor of 2.3, with job openings growing 10 times faster than the number of new entrants into the field. Seventy percent of enterprises report struggling to find qualified AI talent, while 40% of existing AI-skilled employees are considering leaving their jobs. The result is a spiraling competition for scarce talent, with companies paying salary premiums of 47% to 200% in an unsustainable arms race.
This episode examines what lies beneath that crisis—and why the solution isn’t just about pursuing human capital. A key shift is happening as software developers become data scientists and AI engineers, broadening their skills while using agentic AI systems that allow them to design and deploy models without requiring deep expertise. Simultaneously, AI talent augmentation is becoming inevitable, as enterprises blend flexible outsourcing with AI agents serving as virtual specialists.

Decoding NVIDIA’s AI Factory Product Maze

NVIDIA’s Q2 FY26 earnings call underscored once again that the company is not just a GPU vendor but an AI infrastructure company supporting the buildout of AI Factories. With revenue hitting $46.7B for the quarter and data center sales accelerating, the product portfolio is expanding so quickly that even seasoned observers struggle to keep the names straight. To help make sense of the landscape, we’ve compiled a cheat sheet mapping NVIDIA’s sprawling platforms, where they sit in the roadmap, and how much revenue they’re driving.

Elide is Reimagining Runtimes for Modern Development

Elide is a polyglot runtime built on GraalVM that reimagines Node.js for the modern era. With support for Java, Python, Kotlin, and Ruby, it delivers up to 75x performance gains and a unified developer experience across languages.

Special Breaking Analysis: Inside the AI-Networking Fabric Debate – Why Purpose-Built is Winning and Why Openness Still Matters

We believe the center of gravity in AI infrastructure has shifted from servers to AI factories, where networking is perhaps as critical as compute. Our analysis of an interview with NVIDIA SVP of Networking, Gilad Shainer, indicates NVIDIA’s thesis is straightforward: scale-up fabrics (NVLink/NVLink Fusion) plus scale-out Ethernet purpose-built for AI (Spectrum-X) — and increasingly scale-across for multi–data center topologies (Spectrum-XGS) — deliver superior determinism, efficiency, and time-to-outcomes at giga-scale. At the same time, the market is too large and heterogeneous for any single fabric to dominate; open standards and merchant Ethernet will continue to win broad adoption, and even NVIDIA is embracing open interfaces and ecosystems to complement its proprietary advantages. In our view, NVIDIA is a somewhat rare case where first-mover advantage has paid off. Its early conviction in parallel computing, GPUs, CUDA/NCCL software moats, and the Mellanox acquisition now underpin a defensible systems position across scale-up, scale-out, and (increasingly) scale-across.

Book A Briefing

Fill out the form, and our team will be in touch shortly.