
Enterprise AI Agents Move From Experiments to Operations

In a recent global survey of nearly 1,500 enterprise IT leaders, 96% say they plan to expand their use of AI agents in the next 12 months, with organization-wide adoption as the goal. Momentum is real, but so are the operational questions: how to move beyond proofs of concept, build trust, and integrate agents into everyday work without breaking governance or your SaaS strategy.

In this episode of AppDevANGLE, I spoke with James Evans, Director and Head of AI at Amplitude, about what it takes to operationalize agents at scale, why “AI will kill SaaS” misses the point, and how a practical, human-in-the-loop approach is unlocking value right now.

From Hype to Hard Reality

The adoption curve is steep, and unusually fast for such a young technology.

“You shared a stat earlier… I’ve never seen such conviction that a relatively unproven technology is going to make such an impact,” James said. “Part of that is the definition of agents is vague—AI that does things for you—who wouldn’t want that?”

When Amplitude announced Amplitude Agents in June as a research preview, the response dwarfed typical early access: “We had 10x the interest we could ever have in a traditional beta,” James noted. That appetite allowed the team to iterate quickly with real-world signals.

When to Go All In (And When Not To)

James framed the strategic bet simply: interesting data + a product constrained by “time-in-tool.”

“We have all this behavioral data,” he explained. “And like many SaaS products, there’s a small group of power users who spend a lot of time in the tool and get outsized value. It’s hard to grow that group—people only have so many hours.”

Agents flip the model from pull to push. “Instead of you getting an anomaly alert and then doing the work, we can do 95% of it for you: investigate the issue, analyze why, and suggest an action (a guide, an experiment, a fix). We don’t need full autonomy to deliver value; human approval is fine when we’ve already done the heavy lifting.” 

The Overlooked Steps of Operationalizing Agents

The hard part isn’t a demo; it’s productionizing the muscle groups that agents depend on.

Build internal conviction. “Many companies were in ‘wait and see,’ not wanting to overhype something flimsy,” James said. Amplitude made a deliberate choice to be aggressive because the conditions were right.

Learn new evaluation muscles. “Evaluating AI quality is not the same as evaluating SaaS. You need evals to define ‘good’ across scenarios. It’s a different PM muscle, and now it’s table stakes.”
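
To make that "different PM muscle" concrete, here is a minimal sketch of what a scenario-based eval might look like. It is illustrative only, not Amplitude's tooling; the scenarios, the pass criteria, and the stand-in agent are all hypothetical.

```python
# Minimal sketch of a scenario-based eval harness (illustrative, not Amplitude's tooling).
# Each scenario pins an input and a definition of "good"; the suite is rerun whenever
# prompts, tools, or the underlying model change.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Scenario:
    name: str
    user_input: str
    passes: Callable[[str], bool]  # what "good" means for this scenario

# Hypothetical scenarios; a real suite would cover many more cases per workflow.
SCENARIOS = [
    Scenario(
        name="anomaly_investigation_names_metric",
        user_input="Signups dropped 18% week over week. Why?",
        passes=lambda out: "signup" in out.lower() and "%" in out,
    ),
    Scenario(
        name="grounds_segment_claims_in_data",
        user_input="Which customer segment caused the drop?",
        passes=lambda out: "segment" in out.lower() or "not enough data" in out.lower(),
    ),
]

def run_evals(agent: Callable[[str], str]) -> float:
    """Run every scenario against the agent and return the pass rate."""
    results = [s.passes(agent(s.user_input)) for s in SCENARIOS]
    return sum(results) / len(results)

if __name__ == "__main__":
    # Stand-in agent; swap in a real model call.
    fake_agent = lambda prompt: "Signup conversion fell 18%, driven by the new onboarding step."
    print(f"pass rate: {run_evals(fake_agent):.0%}")
```

Tracking that pass rate across prompt, tool, and model changes is what turns "it seems better" into something a product team can actually manage.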

Expect customer slack, and use it wisely. “Customers know foundation models are evolving fast. They’ll give you slack if you show a path to improvement and incorporate feedback quickly.”

Building Trust Without Full Autonomy

Trust grows through human-in-the-loop, not hand-waving.

“We distinguish between autonomous agents and background agents,” James said. “Our agents investigate, respond to data, and propose actions. They don’t take action autonomously. You inspect the work and click approve.”

The trust model looks a lot like a well-tuned alerting system: “If agents ping you too often with low-value items, you turn them off, just like anomaly detection when thresholds are wrong,” he added. “We frame it like hiring a specialist. They should show promise on day one, but they’ll get better as you give feedback.”
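
As a rough sketch of that "investigate, propose, approve" pattern: the code below is not Amplitude's implementation (the ProposedAction shape and helper names are assumptions), but it shows the key property, which is that nothing executes until a human signs off.

```python
# Sketch of an approval-gated agent loop: the agent investigates and proposes,
# a human inspects the work, and only approved actions are executed.
# All names here (ProposedAction, investigate, execute) are illustrative.

from dataclasses import dataclass

@dataclass
class ProposedAction:
    summary: str      # what the agent wants to do
    evidence: str     # the analysis behind the proposal
    action_type: str  # e.g. "guide", "experiment", "fix"

def investigate(anomaly: str) -> ProposedAction:
    # Stand-in for the agent's background investigation.
    return ProposedAction(
        summary="Launch an onboarding guide for users who stall on step 3",
        evidence=f"Anomaly '{anomaly}' correlates with drop-off at onboarding step 3",
        action_type="guide",
    )

def human_approval_gate(action: ProposedAction) -> bool:
    print(f"Proposal: {action.summary}\nWhy: {action.evidence}")
    return input("Approve? [y/N] ").strip().lower() == "y"

def execute(action: ProposedAction) -> None:
    print(f"Executing approved {action.action_type}: {action.summary}")

if __name__ == "__main__":
    proposal = investigate("signup conversion down 18%")
    if human_approval_gate(proposal):  # nothing runs without a human click
        execute(proposal)
    else:
        print("Rejected; feedback recorded for the next iteration.")
```

The rejection path matters as much as the approval path: every "no" is the feedback signal that makes the specialist better over time.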

Will AI Kill SaaS? Reframing the Interface

Agents won’t erase software; they’ll reshape how we consume it.

“There’s a meme that all software collapses into a text interface,” James said. “Some apps will, especially search-style workflows where natural language shines. But browse-style and creative work will continue to need rich, visual UI. Think of Photoshop; you’re not doing that in text.”

The boundary is also blurring as agents generate UI on demand: expect less "no UI" and more personalized UI.

Path to Adoption

  • Start where you have leverage: Pair agents with interesting data and workflows constrained by time-in-tool.
  • Prefer background push over full autonomy (for now): Let agents do 95% of the work and require human approval.
  • Stand up evals early: Define quality with scenario-based evaluations; treat evals as a core PM competency.
  • Tune signal-to-noise: Instrument agent notifications like anomaly detection, optimize thresholds, prioritize impact, and close the feedback loop.
  • Ship as research, iterate like product: Use research previews to learn fast, then harden for GA.
  • Codify trust and governance: Scope credentials, log actions, and align to existing risk and review practices.
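
A minimal sketch of what "scope credentials and log actions" can look like in practice follows; the scope names and the log format are assumptions for illustration, not a specific product's audit API.

```python
# Illustrative governance wrapper: check the agent's scoped credentials before
# any action and append an audit record either way. Scope names and the log
# format are assumptions, not a specific product's API.

import json
import time

AGENT_SCOPES = {"read:analytics", "propose:experiment"}  # deliberately least-privilege

def allowed(required_scope: str) -> bool:
    return required_scope in AGENT_SCOPES

def log_action(action: str, scope: str, approved_by: str | None, permitted: bool) -> None:
    record = {
        "ts": time.time(),
        "action": action,
        "scope": scope,
        "approved_by": approved_by,
        "allowed": permitted,
    }
    with open("agent_audit.log", "a") as f:  # append-only audit trail
        f.write(json.dumps(record) + "\n")

def perform(action: str, required_scope: str, approved_by: str | None) -> None:
    ok = allowed(required_scope) and approved_by is not None
    log_action(action, required_scope, approved_by, ok)
    if not ok:
        raise PermissionError(f"{action} blocked: missing scope or approval")
    print(f"{action} executed under scope {required_scope}")

if __name__ == "__main__":
    perform("propose holdout experiment", "propose:experiment", approved_by="pm@example.com")
```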

Analyst Take

Enterprise agents are crossing the chasm not by magic autonomy but by pragmatic augmentation: background investigation, contextual analysis, and action proposals that humans can audit and approve. Four implications stand out:

  1. Push beats pull for time-starved SaaS
    Agents that push insights and next actions directly into a user’s flow remove the “time-in-tool” ceiling on value realization. This is where SaaS evolves, not ends. Expect personalized, generated UIs and thin, intent-driven surfaces layered atop durable platforms.
  2. Trust is an operational capability
    Treat agents like specialists under supervision. Build least-privilege access, human approval gates, complete action logs, and measurable signal-to-noise. Make evals first-class: scenario libraries, regression suites for prompts/tools, and outcome metrics (time to insight, decision throughput, experiment lift).
  3. Adoption follows data gravity
    The fastest ROI appears where you have proprietary behavioral or operational data and well-understood playbooks (e.g., anomaly analysis → experiment suggestion). If your product lacks differentiated data, anchor agents to adjacent systems of record or rethink where agents should live.
  4. SaaS is changing shape, not disappearing
    “AI will kill SaaS” misses the nuance. Search-like tasks will compress into natural language; browse/creative workflows will remain visual and tool-rich. The winning pattern is agent-assisted SaaS: intent in, contextful assistance out, with UI that adapts to the job.

Leaders who operationalize agents with human-in-the-loop control, rigorous evaluation, and governed access will convert today’s enthusiasm into durable gains (faster decisions, fewer context switches, and higher product leverage) without sacrificing safety or trust.
