
Vision AI Moves From Science Project to Production Platform

Computer vision has long promised to connect software systems to the physical world. From manufacturing lines and logistics yards to agriculture, retail, and smart infrastructure, the use cases are broad and increasingly practical. Yet despite years of investment, most deployments still fail to scale beyond pilots.

Industry data cited during this AppDevANGLE conversation notes that more than 70% of computer vision initiatives never make it into sustained production because of model drift, fragile pipelines, and operational complexity.

In this episode, I spoke with Jonathan Simkins, CEO of Plainsight, about why Vision AI continues to stall in production environments, why traditional MLOps models are not enough, and how Plainsight’s newly announced VisOps platform is designed to operationalize physical AI systems at scale.

The discussion highlighted a larger market shift: AI success is no longer defined by model creation alone. It is increasingly defined by operational durability, continuous improvement, and developer accessibility.

Vision AI Breaks Faster Than Most AI Systems

One of the clearest takeaways from the discussion is that computer vision faces a unique production challenge compared with many other AI workloads.

According to Simkins, many models perform well in controlled lab settings but degrade quickly once deployed into the real world. Lighting changes, camera angles shift, environments evolve, and physical conditions rarely stay static.

“Traditional vision AI starts with high accuracy out of the lab and degrades rapidly,” Simkins said.

A cattle-counting model trained on one ranch may fail on another. A warehouse detection system may struggle after layout changes. A manufacturing model may decline when packaging materials or line speeds change.

This creates a fundamental distinction between Vision AI and more static AI workloads. Computer vision systems interact directly with dynamic, messy, constantly changing environments. That means accuracy is not a one-time achievement; it must be continuously maintained.

Why Traditional Pipelines Fall Short

Many organizations still apply conventional DevOps or MLOps thinking to computer vision deployments. Simkins argued that approach misses the core problem.

Traditional software delivery often assumes a pipeline: build, test, deploy, maintain. Vision AI, by contrast, behaves more like a loop:

  • Deploy into production
  • Detect drift or unknown events
  • Capture new data
  • Retrain models
  • Redeploy improvements
  • Repeat continuously

“What Vision AI needs is more of a loop,” Simkins explained. That insight matters because many enterprises continue trying to force dynamic AI systems into static lifecycle models. The result is operational friction, expensive manual intervention, and projects that never scale.
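The loop above can be sketched in a few lines of code. This is a minimal illustration of the pattern, not Plainsight's implementation; the confidence-based drift signal, the 0.80 threshold, and the window size are all assumptions chosen for the example:

```python
from statistics import mean

CONFIDENCE_FLOOR = 0.80   # hypothetical threshold: below this, flag drift
WINDOW = 50               # detections per monitoring window

def detect_drift(confidences, floor=CONFIDENCE_FLOOR):
    """Flag drift when mean detection confidence drops below the floor."""
    return mean(confidences) < floor

def run_loop(windows):
    """One pass over monitoring windows: capture low-confidence detections
    for retraining instead of treating them as terminal failures."""
    retraining_queue = []
    for window in windows:
        if detect_drift(window):
            # Capture the ambiguous data; in a real system this would
            # trigger labeling, a retraining job, and redeployment.
            retraining_queue.extend(c for c in window if c < CONFIDENCE_FLOOR)
    return retraining_queue

# Simulated confidence streams: a healthy window and a drifted one.
healthy = [0.95] * WINDOW
drifted = [0.60] * WINDOW
queued = run_loop([healthy, drifted])
print(len(queued))  # all 50 drifted detections are queued for retraining
```

The point of the sketch is structural: deployment triggers monitoring, monitoring triggers data capture, and data capture feeds the next deployment, with no terminal "done" state.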

From Model Management to Operational Intelligence

Plainsight positions its VisOps platform around treating production operations, not model experimentation, as the primary control point.

Instead of viewing errors or unknown objects as failures, the platform routes those production events into continuous retraining processes while also feeding them into customer business systems for immediate action.

Simkins described scenarios where events can be surfaced to frontline workers, dashboards, or ERP systems while the same data simultaneously improves future model performance.

That dual-loop architecture is notable because it aligns AI systems with how enterprises actually run operations:

  • Real-time business response
  • Continuous optimization
  • Cross-system integration
  • Lower manual rework

This reflects a broader trend across enterprise AI markets: production telemetry is becoming as important as model quality.
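The dual-loop pattern described above can be sketched as a simple router: each production event fans out to both an operational destination and a learning destination. The class and method names here are hypothetical, not Plainsight's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class VisionEvent:
    """A production detection the model could not classify confidently."""
    frame_id: int
    label: str
    confidence: float

@dataclass
class DualLoopRouter:
    """Routes each event to two destinations at once: the business loop
    (immediate action) and the learning loop (future model accuracy)."""
    business_alerts: list = field(default_factory=list)
    retraining_queue: list = field(default_factory=list)

    def route(self, event: VisionEvent) -> None:
        # Loop 1: surface to frontline workers, dashboards, or ERP systems.
        self.business_alerts.append(
            f"frame {event.frame_id}: review '{event.label}'"
        )
        # Loop 2: the same data feeds continuous retraining.
        self.retraining_queue.append(event)

router = DualLoopRouter()
router.route(VisionEvent(frame_id=101, label="unknown object", confidence=0.42))
print(len(router.business_alerts), len(router.retraining_queue))  # 1 1
```

The design choice worth noting is that neither loop is subordinate to the other: an unknown object is simultaneously an operational alert and a training sample.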

Democratizing Vision AI for Developers

Another strong theme in the discussion was developer accessibility. Simkins estimated there are roughly 20,000 computer vision specialists globally versus tens of millions of software developers. If Vision AI depends solely on niche experts, adoption will remain constrained.

“We’re building it for 25 million people, not 20,000 people,” said Simkins. That philosophy is driving Plainsight’s investment in more familiar developer workflows, including:

  • IDE-based tooling
  • APIs and command-line interfaces
  • Natural language interaction through an MCP server
  • Automated synthetic data generation
  • Browser-based management tools

This matters because the same enterprise challenges appear repeatedly across AppDev markets: complexity and skill gaps. Platforms that abstract specialized expertise into common developer experiences will likely expand faster than those requiring rare talent pools.

Open Source as a Growth Lever

Plainsight also emphasized an open ecosystem strategy through OpenFilter, its open source foundational technology.

That is strategically relevant for two reasons. First, open standards often accelerate adoption by lowering buyer risk. Second, developer ecosystems increasingly reward extensibility over closed control.

Simkins framed openness not as marketing language but as necessary for establishing a new category. That mirrors broader platform trends across cloud-native infrastructure, observability, and AI tooling: communities often validate standards before enterprises standardize purchases.

Edge AI and Physical Operations Become the Next Frontier

One of the more compelling ideas from the interview was Simkins’ view of success: Vision AI should eventually run anywhere, including remote and low-connectivity environments.

He referenced scenarios like trail cameras in remote forests. This is a simple but meaningful example of AI leaving centralized labs and entering distributed operations.

That aligns with a larger enterprise pattern:

  • AI moving closer to devices
  • Decisions occurring at the edge
  • Physical operations becoming data-driven
  • Cloud and field systems operating together

For application developers, this means future architectures must increasingly support hybrid execution models spanning cloud, edge, and operational environments.

Analyst Take

Vision AI has spent years trapped between innovation and execution. The models often worked well enough to generate excitement. The operations often failed badly enough to stop expansion.

That gap is now becoming its own market opportunity. What Plainsight is describing with VisOps is part of a larger shift across enterprise AI:

  • Models are not enough
  • Production feedback loops matter more
  • Operational resilience determines ROI
  • Developer simplicity drives adoption
  • Open ecosystems accelerate category growth

The next phase of AI adoption will not be won solely by the smartest models. It will be won by platforms that make AI reliable in imperfect real-world environments.

For Vision AI specifically, the winners may be the companies that stop treating deployment as the finish line and start treating it as the beginning.
