In 1986, this author met Danny Hillis, who was then completing his doctorate at MIT and building one of the world’s fastest computers. He was wearing a bright green T-shirt emblazoned with cubes connected in a network. When asked about the design of the shirt, Hillis said it represented a massively parallel architecture that could simulate a neural network and think like a human.
The moment felt like a glimpse into the future. Thinking Machines had just introduced the CM-1 Connection Machine, an iconic massively parallel supercomputer designed to tackle problems that classical von Neumann computers couldn’t handle. The Connection Machine was used for early AI research, scientific modeling and even stock market simulations. The packaging was unforgettable and eye-catching: a black monolith of connected cubes that gave logical parallelism a physical form. And yes, those of us in attendance received a green T-shirt with the connected-cube logo, a piece of swag that signaled we were in the know about a new computing era.

Nearly four decades later, VAST Data is using the phrase “Operating System for the Thinking Machine” in a very different era – but the symbolism is purposeful. The industry is once again confronting a compute inflection where the architecture of the system matters more than the performance of any single component. This time, however, the cost and scale make clear that the “thinking” isn’t a research experiment. AI is becoming an enterprise reality in which models reason over proprietary data, machines compute recursively, learning is reinforced and agents take action.
Thinking Machines died in the 1990s, caught in a vicious cycle as the superior price/performance of microprocessors obliterated big iron. Today, that price/performance dynamic has become a tailwind for companies positioned to ride the wave. Not only is compute widely available (notwithstanding supply constraints), but the economics favor companies like VAST that have inserted themselves directly into the AI data stack.
That is why VAST Forward – VAST Data’s first-ever customer event, in Salt Lake City – is worth noting. It’s not that the industry needs another storage conference; rather, the market is forcing a new layer into existence. Specifically, a software-defined substrate that sits between raw data and the agentic systems acting on it. VAST’s vision is to position that layer as a type of OS, not just in a marketing sense but functionally.
What Danny Hillis started is even more relevant in 2026
The Connection Machine embodied a concept of continuous learning. It wasn’t just a faster computer; it was a different mental model that brought together massive parallelism, novel interconnects and a programming model aimed at classes of problems that needed concurrency by design.
Fast-forward to today and we see an eerily similar shift. AI workloads, especially retrieval-heavy and agentic ones, force a tight integration between data, metadata, events and actions. We’ve seen that rushing to market with generative AI, RAG-based chatbots and vector search did little to move the needle on enterprise value. VAST’s vision of a “thinking machine” is no longer a single exotic box; it’s a distributed system spanning data centers, clouds, sovereign footprints and edge environments. And VAST hopes that reality is pulling the market toward a new OS-like control layer.
Connecting the dots from last May: VAST is not a box vendor
In May 2025, we wrote that VAST was attempting one of the most audacious moves we’ve seen from an infrastructure vendor in the last decade – moving from a high-performance, shared-everything flash architecture into a broader software platform with eventing, vector indexing, database-like services, and agent primitives. We also made a key point that VAST isn’t a lakehouse, it’s not Snowflake or Databricks, and it’s not a hyperscaler data platform. But it also isn’t a traditional storage array vendor competing on box speeds and feeds.
VAST’s bet is that the market is moving beyond legacy storage procurement models and toward AI system outcomes – specifically, key performance indicators around time-to-context and AI trust. A key to enterprise adoption, in our view, is supporting the building of AI systems on proprietary data while incorporating determinism, governance and auditability, and leveraging probabilistic systems where appropriate.
Trust is becoming as important as compute for enterprise AI
The biggest shift since last year is that enterprise AI adoption is increasingly constrained by trust, not model quality or even GPU supply. Enterprises are asking questions that sound boring but are important in practice:
- Who (or what) is allowed to do what, with which data, using which tools?
- What exactly happened inside the system, and can it be audited?
- What information was exposed, transformed, or inferred on the way to an answer?
- If the system improves over time, can the organization prove that learning was actually allowed?
When you think about it, these are OS questions. They are about adjudication, policy, logging, provenance and determinism. That’s why the “Operating System for the Thinking Machine” positioning has great potential if the platform can enforce trust across agents, tools, data and metadata.
AI forces a new security posture in which governance must control not only access but also actions. It’s no longer enough to know who can read a file. The enterprise must be able to govern what an agent is allowed to do with that information, what tools, APIs and applications it can call, what it can remember, what it can share with other agents, and how it is allowed to evolve over time.
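To make action-level governance concrete, here is a minimal sketch (all names and the policy shape are hypothetical illustrations, not VAST’s actual API) of adjudicating what an agent may do, not just what it may read:

```python
from dataclasses import dataclass, field

# Hypothetical illustration: action-level governance checks what an agent
# may DO (call tools, share results), not just which data it may read.
@dataclass
class AgentPolicy:
    allowed_tools: set = field(default_factory=set)      # tools/APIs the agent may call
    allowed_datasets: set = field(default_factory=set)   # data it may read
    may_share_with: set = field(default_factory=set)     # agents it may pass results to

def authorize(policy: AgentPolicy, action: str, target: str) -> bool:
    """Adjudicate a single agent action against its policy."""
    if action == "call_tool":
        return target in policy.allowed_tools
    if action == "read":
        return target in policy.allowed_datasets
    if action == "share":
        return target in policy.may_share_with
    return False  # default-deny any unrecognized action

policy = AgentPolicy(
    allowed_tools={"vector_search"},
    allowed_datasets={"claims_2025"},
    may_share_with=set(),  # this agent may not share with other agents
)
print(authorize(policy, "call_tool", "vector_search"))  # True
print(authorize(policy, "share", "external_agent"))     # False
```

The point of the sketch is the default-deny posture: any action category the policy doesn’t explicitly recognize and permit is refused, which is the opposite of the file-permission mindset the paragraph above describes.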
An AI data substrate is forming between storage and the lakehouse
As we’ve stated many times, the software stack is creating new layers, including the system of intelligence (SoI). This layer is fed by an AI data substrate that sits between raw storage and modern data platforms. It exists because AI systems need something the classic stack wasn’t designed to provide: governed, low-latency, auditable semantics that can be reused across many models and many agent workflows without proliferating data copies or violating governance edicts.
The AI data substrate has multiple requirements that the market will demand, including:
- Embedded policies that protect proprietary data – i.e., governing what the model sees;
- An immutable audit trail that can be accessed and validated;
- Continuous learning that is governed in a fenced-off environment;
- Run anywhere (supercloud) – an OS for the thinking machine must abstract the underlying complexity of cloud primitives and on-prem configuration nuances, and it must accommodate sovereign requirements.
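The second requirement, an audit trail that can be validated, is often implemented as a hash chain. The following is a simplified sketch of that general technique (our illustration, not a description of VAST’s implementation): each record’s hash covers the previous record’s hash, so any tampering with history breaks the chain and is detectable.

```python
import hashlib
import json

# Illustrative append-only, tamper-evident audit log using a hash chain.
def append_record(log: list, event: dict) -> None:
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = json.dumps(event, sort_keys=True)
    digest = hashlib.sha256((prev_hash + body).encode()).hexdigest()
    log.append({"event": event, "prev": prev_hash, "hash": digest})

def validate(log: list) -> bool:
    """Recompute the chain; any altered record breaks it."""
    prev_hash = "0" * 64
    for rec in log:
        body = json.dumps(rec["event"], sort_keys=True)
        if rec["prev"] != prev_hash:
            return False
        if rec["hash"] != hashlib.sha256((prev_hash + body).encode()).hexdigest():
            return False
        prev_hash = rec["hash"]
    return True

log = []
append_record(log, {"agent": "a1", "action": "read", "target": "claims_2025"})
append_record(log, {"agent": "a1", "action": "call_tool", "target": "vector_search"})
print(validate(log))  # True
log[0]["event"]["target"] = "payroll"  # tamper with history
print(validate(log))  # False
```

Production systems layer storage-level immutability and external anchoring on top, but the validation property shown here is what lets an auditor prove that what happened inside the system is what the log says happened.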
This is the thread connecting last May’s thesis to the market’s current direction. In May 2025, we emphasized productivity and GPU utilization through collapsing layers. In 2026, the market adds another constraint: AI trust.
Where VAST fits and its unique position in the market
This is where VAST’s positioning becomes highly differentiated, in our view. Despite some marketing bravado a couple of years back, we don’t see VAST displacing the lakehouse incumbents’ analytics ecosystems and developer affinity. But it doesn’t need to. The lakehouse/warehouse platforms will remain dominant for many analytics and BI workflows. Hyperscalers will continue to offer broad AI service catalogs with massive network effects.
But the AI era is creating a different battleground: the semantic layer that feeds the SoI, its supporting models and the agentic layer. Models will continue to improve and commoditize. What won’t commoditize is the ability to turn proprietary enterprise data into governed, retrievable, auditable context for agentic workflows at scale, across footprints, with a credible security posture.
In our view, VAST is aiming to be a foundational ingredient in that layer. The company’s differentiation isn’t a better storage mousetrap. It’s that it’s building a software-defined substrate that compresses the distance between data and intelligence, and that can operate wherever the customer needs it to operate: on-prem, sovereign, public cloud or GPU cloud.
That’s also why the “not a box vendor” point we made last May is so relevant. A box is local; an AI OS is inherently distributed. If the market is moving toward systems that require a control plane, policy enforcement, auditability and large-scale operations, then the winners will be those that emphasize software innovation and leverage hardware innovation, wherever it comes from.
The Hillis replay: A modern system built for the problems of its time
The Connection Machine was iconic not just because it looked different, but because it represented an ethos: that there are classes of problems requiring different thinking. In our opinion, the agentic era is creating a similar dynamic. The enterprise doesn’t just need AI features du jour sprinkled across a fragmented stack. It needs a governed, operable, closed-loop system that can mediate agentic action, preserve trust, and continuously improve within policy constraints.
That’s where the “Operating System for the Thinking Machine” has potential, in our view. If VAST can translate its marketing into tangible business value – e.g., policy enforcement, real auditing and true operational control at scale – it has a winning formula that sets up a big IPO. If it doesn’t, it becomes just a clever gimmick.
Bottom line
This author remembers the Connection Machine as a symbol of a grand architectural vision, one that was far ahead of its time. That vision rhymes today because it forces the industry to confront the difference between faster components and a total system. In 2026, the AI industry faces a dual challenge: Agentic systems will fail on architectures that are brittle, ungoverned and impossible to audit, and the market is clearly demanding platforms that can make machine intelligence trustworthy and operable anywhere, across footprints and workflows that increasingly take action.
In our opinion, VAST’s opportunity is to own a distinct role in that emerging stack. It’s not a lakehouse, not a cloud database and not a storage array. Rather, it’s an AI data substrate and control layer that turns enterprise data into governed, retrievable, auditable context for distributed intelligence. If VAST can maintain execution discipline while moving up the stack – without sacrificing reliability and simplicity – it has a credible shot at making the “thinking machine OS” feel less like a marketing tagline and more like an architecture enterprises rely on at scale.
Note: theCUBE will be on site, broadcasting live, thanks to the generous support of Solidigm.

