AI is reshaping the application development landscape faster than most organizations can adapt, creating enormous opportunity and significant operational risk. theCUBE Research recently sat down with Sudeep Goswami, CEO of Traefik Labs, to discuss how developers, platform teams, and enterprise IT leaders can respond to the explosion of AI-powered applications and the architectural shifts that come with them.
The rapid adoption of AI is fueling an application boom. Our recent studies at theCUBE Research show that the share of organizations running AI applications in production jumped from 18% to 54% in less than a year. At the same time, the barrier to building and deploying apps has never been lower, thanks to advances in open source tooling, microservices, and cloud-native orchestration. But this ease of entry has created a Wild West scenario where governance, observability, and policy often lag behind speed and experimentation.
“You sprinkle AI into everything and yes, even before AI, there was an influx of new tools and technologies… and now what’s happening is just accelerated at hyper-pace,” said Sudeep Goswami.
Controlling the Chaos of Accelerated Innovation
One of the key challenges we discussed is that while AI can accelerate the development lifecycle, it also amplifies complexity. It’s no longer just about securing APIs or managing service-to-service communication; it’s about securing the entire inference pipeline, ensuring accountability for AI-generated code, and governing data access at every stage.
Traditional API gateways are evolving into what Goswami called AI gateways. This concept blends authentication, observability, rate limiting, and policy management with AI-specific safeguards like data provenance, privacy filters, and semantic caching.
“In the post-AI era, that API gateway needs to manifest into an AI gateway… because now you have data privacy issues, control authentication issues, and security issues,” said Goswami.
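To make that layering concrete, here is a minimal sketch, in plain Python, of the chain of checks an AI gateway might run in front of an inference endpoint: authentication, per-key rate limiting, and a privacy filter that redacts obvious PII before a prompt ever reaches the model. Every name in it (the request shape, the key store, the limits) is a simplified assumption for illustration, not Traefik's implementation; a real gateway does this as middleware on live HTTP traffic.

```python
import re
import time
from dataclasses import dataclass

# All names below (request shape, key store, limits) are illustrative assumptions.
@dataclass
class InferenceRequest:
    api_key: str
    prompt: str

VALID_KEYS = {"team-a-key", "team-b-key"}  # stand-in for a real auth backend
RATE_LIMIT = 5                             # max requests per key, per window
WINDOW_SECONDS = 60
_request_log: dict[str, list[float]] = {}

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.\w+")

def authenticate(req: InferenceRequest) -> None:
    """Classic API-gateway duty: reject unknown callers."""
    if req.api_key not in VALID_KEYS:
        raise PermissionError("unknown API key")

def rate_limit(req: InferenceRequest) -> None:
    """Classic API-gateway duty: throttle each key inside a sliding window."""
    now = time.monotonic()
    recent = [t for t in _request_log.get(req.api_key, []) if now - t < WINDOW_SECONDS]
    if len(recent) >= RATE_LIMIT:
        raise RuntimeError("rate limit exceeded")
    recent.append(now)
    _request_log[req.api_key] = recent

def privacy_filter(req: InferenceRequest) -> InferenceRequest:
    """AI-specific duty: redact obvious PII before the prompt reaches any model."""
    req.prompt = EMAIL.sub("[REDACTED_EMAIL]", req.prompt)
    return req

def handle(req: InferenceRequest, model) -> str:
    """Run the gateway chain, then forward to the upstream inference service."""
    authenticate(req)
    rate_limit(req)
    req = privacy_filter(req)
    return model(req.prompt)

if __name__ == "__main__":
    def echo_model(prompt: str) -> str:
        return f"model output for: {prompt}"

    print(handle(InferenceRequest("team-a-key", "Reach me at jane@example.com"), echo_model))
    # -> model output for: Reach me at [REDACTED_EMAIL]
```

The point of the sketch is the ordering: identity and quota are enforced before any data-handling logic runs, and the privacy filter runs before the prompt crosses the trust boundary to the model.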
To address these challenges, vendors like Traefik Labs are embracing modular reference architectures that help customers build incrementally, starting with open source and scaling with enterprise-grade policy controls. Goswami shared how these curated blueprints are being co-developed with key partners, like Nutanix, Oracle, Akamai, and Microsoft Azure, to offer organizations a “better together” path forward.
“They want prescriptive cookbooks or playbooks… be prescriptive, but don’t be restrictive,” Goswami emphasized. “We’re helping our customers go from sandbox to scale without getting stuck.”
AI Inferencing, Edge Deployment, and API Proliferation
A key thread in our conversation was the growing demand for AI inferencing at the edge. As organizations bring smaller, optimized AI models into local environments, from branch offices to smart cities, the need for lightweight, portable runtime environments increases. Goswami sees semantic caching and distributed inference infrastructure as crucial levers for reducing costs and delivering low-latency experiences.
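Semantic caching is worth a quick illustration, since it is the mechanism that links edge inferencing to cost and latency savings: if a new prompt means roughly the same thing as one already answered, the gateway can return the cached response instead of paying for another model call. The sketch below is a toy under loud assumptions: a bag-of-words vector and cosine similarity stand in for a real embedding model, and the 0.8 threshold is arbitrary.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy stand-in for an embedding model: bag-of-words term counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(count * b[word] for word, count in a.items())
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

class SemanticCache:
    """Return a cached answer when a new prompt is 'close enough' to an old one."""

    def __init__(self, threshold: float = 0.8):
        self.threshold = threshold                # similarity required for a cache hit
        self.entries: list[tuple[Counter, str]] = []

    def get(self, prompt: str) -> str | None:
        vec = embed(prompt)
        for cached_vec, answer in self.entries:
            if cosine(vec, cached_vec) >= self.threshold:
                return answer                     # hit: skip the model call entirely
        return None

    def put(self, prompt: str, answer: str) -> None:
        self.entries.append((embed(prompt), answer))

cache = SemanticCache()
cache.put("what is the capital of france", "Paris")
print(cache.get("what is the capital of france today"))  # ~0.93 similarity: hit, prints "Paris"
```

In production, the toy embedding would be replaced by a learned sentence embedding and a vector index, and much of the engineering effort goes into tuning the threshold so that "close" prompts genuinely deserve the same answer.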
With every deployed AI model comes a proliferation of APIs. That proliferation creates another layer of complexity (and risk) around lifecycle management, governance, and observability. It also reinforces the importance of having the proper API infrastructure and management stack from day one.
“The more AI you deploy at scale, the more APIs you have,” Goswami explained. “And the more APIs you have, the more API management problems.”
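One hedged sketch of what those API management problems look like on the ground: a platform team keeping a catalog of every exposed model endpoint and auditing it for missing owners and policies. The record shape and checks below are hypothetical, not any product's schema.

```python
from dataclasses import dataclass

@dataclass
class ApiRecord:
    # Hypothetical catalog entry; real platforms track far more metadata.
    name: str
    owner: str | None
    has_auth: bool
    has_rate_limit: bool

def audit(catalog: list[ApiRecord]) -> list[str]:
    """Flag endpoints that are missing basic governance controls."""
    findings = []
    for api in catalog:
        if api.owner is None:
            findings.append(f"{api.name}: no owner on record")
        if not api.has_auth:
            findings.append(f"{api.name}: unauthenticated endpoint")
        if not api.has_rate_limit:
            findings.append(f"{api.name}: no rate-limit policy")
    return findings

catalog = [
    ApiRecord("summarize-v1", owner="ml-platform", has_auth=True, has_rate_limit=False),
    ApiRecord("embed-v2", owner=None, has_auth=False, has_rate_limit=False),
]
for finding in audit(catalog):
    print(finding)
```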
From Testing to Production, One Leg at a Time
A recurring theme was progressive maturity: organizations shouldn't feel pressured to leap into full production AI architectures overnight. Traefik Labs encourages starting with open source tooling, experimenting in a sandbox, and gradually adding capabilities as confidence grows.
“It’s a three-leg journey,” Goswami noted. “Start with open source components… then move to modular extensions… and eventually scale up with full enterprise controls.”
Looking ahead to next year, Goswami hopes to see more success stories and fewer blockers. That includes more community-driven reference architectures, continued collaboration between ecosystem partners, and a deeper understanding of how AI, APIs, and app development workflows intersect.