The News
Kong Inc. has released its 2025 research report, “What’s Next for Generative AI in the Enterprise,” offering detailed insights into the adoption of large language models (LLMs) across enterprise environments. The findings show a rapid rise in LLM investment, with 72% of companies planning to increase spending this year and nearly 40% expecting to invest over $250,000.
To read more, visit the original press release here.
Analyst Insights
Generative AI is becoming central to enterprise digital strategy, with development teams using LLMs for code generation, automation, and data analysis. Yet, as highlighted in the report and supported by theCUBE Research, AI infrastructure maturity is lagging behind enterprise ambition. While developers are eager to adopt GenAI to improve velocity and innovation, concerns about security, integration, and cost persist. Kong’s report shows that 44% of enterprises cite data privacy and security as the top barrier to LLM adoption, underscoring a growing need for secure, scalable, and composable infrastructure tailored for AI workloads.
Implications for Developers and Enterprise Engineering Teams
The report reveals two key takeaways for developers: LLMs are already reshaping workflows, and organizations are willing to invest heavily in making them usable at scale. Platforms like Microsoft Azure AI, OpenAI, and Google Vertex AI are seeing increased usage in enterprise contexts, while open source models and hybrid strategies are also gaining ground. Tools like DeepSeek appear to be drawing significant developer interest despite privacy concerns, suggesting that demand is outpacing risk tolerance. Developers are using LLMs to automate repetitive tasks, test APIs, write documentation, and accelerate deployment cycles, which could make LLM literacy a competitive advantage in modern engineering teams.
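To make the documentation use case concrete, here is a minimal sketch of the pattern: package existing code into a prompt and hand it to a chat-completion endpoint. The prompt builder is runnable as written; the actual model call is shown only as a comment, since endpoint, model name, and credentials vary by provider.

```python
import textwrap

def build_doc_prompt(source: str) -> str:
    """Wrap a function's source code in a documentation-writing instruction."""
    return (
        "Write a concise docstring for the following Python function, "
        "covering parameters, return value, and edge cases:\n\n"
        + textwrap.dedent(source)
    )

# Example target: a small utility a team might want documented.
SNIPPET = '''
def retry_delays(attempts, base):
    return [base * (2 ** i) for i in range(attempts)]
'''

prompt = build_doc_prompt(SNIPPET)
# A typical next step with an OpenAI-compatible client (names illustrative):
#   client.chat.completions.create(model=..., messages=[{"role": "user", "content": prompt}])
```

The same shape generalizes to test generation or changelog drafting; only the instruction text changes.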
Previous Challenges Around Adoption Still Hold True
Enterprises have long faced hurdles when integrating emerging technologies with legacy architectures, and GenAI is no exception. Only 14% of survey respondents cited integration complexity as a top challenge, but in practice, embedding LLMs into existing systems often requires rethinking service orchestration, compliance policies, and infrastructure monitoring. Furthermore, cost remains a key concern: 24% flagged budget limitations driven by the high cost of compute, storage, and fine-tuning for LLMs. Developers working within these constraints may need modular tooling and APIs that offer observability, governance, and seamless integration into their current CI/CD workflows.
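One lightweight way to combine the observability and cost-governance concerns above is a budget guard that every LLM call passes through. The sketch below is illustrative, not a prescribed design; in a real deployment the same check would live in a gateway plugin or SDK middleware rather than application code.

```python
import time
from dataclasses import dataclass, field

@dataclass
class LLMBudgetGuard:
    """Tracks token spend per day and rejects calls that would exceed the cap."""
    max_tokens_per_day: int
    used_tokens: int = 0
    call_log: list = field(default_factory=list)

    def record(self, prompt_tokens: int, completion_tokens: int) -> int:
        """Record one call's usage; raise if it would blow the daily budget."""
        total = prompt_tokens + completion_tokens
        if self.used_tokens + total > self.max_tokens_per_day:
            raise RuntimeError("daily token budget exceeded")
        self.used_tokens += total
        # Keep a timestamped log entry for downstream observability tooling.
        self.call_log.append({"ts": time.time(), "tokens": total})
        return total

guard = LLMBudgetGuard(max_tokens_per_day=10_000)
guard.record(prompt_tokens=300, completion_tokens=200)
```

The log entries give a CI/CD pipeline something concrete to export to its existing monitoring stack, which is the "seamless integration" the survey respondents are asking for.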
A Shift Toward Enterprise-Grade AI Platforms and Provider Diversity
The findings suggest a maturing AI market where enterprise buyers are becoming more discerning. With 63% of users preferring paid enterprise versions of LLMs and 31% citing security as the top selection factor, vendors may need to deliver not just accuracy but also trust, transparency, and integration capabilities. The report notes a trend toward provider diversity, with usage split among Microsoft, OpenAI, Google, and emerging open source alternatives. Developers are at the center of this shift, tasked with evaluating models, ensuring responsible AI usage, and stitching together hybrid infrastructures that balance innovation with control.
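Evaluating models across multiple providers, as described above, reduces to a small harness: run the same prompt set through each provider and score the outputs. The version below uses stub callables in place of real SDK clients and a deliberately simple exact-match metric; both are placeholders for whatever providers and scoring rules a team actually uses.

```python
def exact_match_rate(answers, references):
    """Fraction of answers matching the reference exactly (whitespace-insensitive)."""
    pairs = list(zip(answers, references))
    return sum(a.strip() == r.strip() for a, r in pairs) / len(pairs)

def evaluate(providers, prompts, references):
    """Run each provider over the prompt set and return a score per provider."""
    return {
        name: exact_match_rate([call(p) for p in prompts], references)
        for name, call in providers.items()
    }

# Stub "providers" standing in for real Azure/OpenAI/Vertex client calls.
providers = {
    "echo": lambda p: p,
    "upper": lambda p: p.upper(),
}
scores = evaluate(providers, ["ok", "fine"], ["ok", "fine"])
```

Because the harness only depends on callables, swapping a hosted model for an open source alternative means changing one dictionary entry, which is exactly the provider diversity the report describes.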
Moving Forward
As AI transitions from experimentation to mainstream enterprise infrastructure, the role of the developer is evolving from tool consumer to platform architect. Industry research predicts that by 2026, more than 70% of enterprises will demand AI solutions that are sovereign, secure, and infrastructure-agnostic. Kong’s report affirms that LLMs are already embedded in core business processes, but their full potential may only be realized when the supporting infrastructure is transparent, adaptable, and compliant.
Moving forward, we expect enterprises to explore platform-native integration of GenAI capabilities directly into their API ecosystems, enabling more intelligent services at the edge, more personalized experiences, and tighter AI policy enforcement. For developers, this means AI pipeline skills, model benchmarking, and hybrid orchestration techniques could be key to long-term success.
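As a sketch of what "tighter AI policy enforcement" can look like at the edge, the snippet below checks prompts against blocked patterns before they ever reach a model. The single SSN-like pattern is a placeholder for a real PII and compliance ruleset, and the function stands in for logic that would normally run inside a gateway plugin.

```python
import re

# Placeholder ruleset: one pattern matching US-SSN-like strings.
POLICY_PATTERNS = {
    "ssn_like": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def enforce_prompt_policy(prompt: str) -> str:
    """Reject prompts matching any blocked pattern; otherwise pass them through."""
    for name, pattern in POLICY_PATTERNS.items():
        if pattern.search(prompt):
            raise ValueError(f"prompt rejected by policy '{name}'")
    return prompt

safe = enforce_prompt_policy("Summarize last quarter's API latency trends.")
```

Enforcing the rule at the gateway rather than in each application gives security teams one place to update policy, which is the control side of the innovation-versus-control balance described above.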