ABSTRACT: The Future of Data Platforms is being defined by the convergence of storage intelligence, metadata-driven governance, and agentic AI readiness. End-users like FICO and the NFL stress transparency, modularity, and adoption, while vendors such as Hammerspace, HPE, and Dell are embedding intelligence at the storage and services layers through open, composable architectures. The market is rapidly evolving toward unified control planes where humans and AI agents can act seamlessly on trusted, retrieval-ready data.
Introduction
Data platforms are no longer optional infrastructure; they are becoming the organizing principle of modern enterprises, particularly as AI and agentic systems reshape how businesses operate. A true data platform is a cohesive, scalable system that enables organizations to store, process, manage, and utilize data across a wide range of services and applications. It provides a unified environment where ingestion, transformation, governance, analytics, and operationalization can occur through layered technologies and open integration protocols.
This research note synthesizes insights from recent conversations at the Future of Data Platforms Summit with data platform builders and user organizations FICO, the National Football League (NFL), and HighFens, and with technology providers Hammerspace, HPE, and Dell Technologies. These perspectives illuminate where enterprises are struggling, how technology vendors are reimagining the storage and data services layers, and how the market is evolving toward metadata-driven governance, compliance, and AI-readiness.
FICO: Decision Intelligence Built on Data as a Service
FICO, long known for its credit scoring and analytics expertise, has built the FICO Platform on the principle that data is foundational to decision intelligence. Bill Waid, Chief Product and Technology Officer, emphasized that FICO treats its data platform as a service at global scale, handling petabytes of data and supporting hundreds of customers. Resilience, privacy, and regulatory compliance are non-negotiable, but the real differentiator is FICO's ability to transform raw data into predictive features that power real-time decisioning.
FICO’s model stresses that not all data is equally valuable. Instead, the platform filters and aggregates inputs into features that matter most for business outcomes. These features feed decision processes that must occur in milliseconds, a stark reminder that data platforms must simultaneously support scale and speed.
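To make the "features, not raw data" point concrete, here is a minimal, hypothetical sketch of the pattern Waid describes: raw events are reduced to a handful of aggregate features, and a decision rule is evaluated against them within a tight latency budget. The field names, windows, thresholds, and decision rule are illustrative assumptions, not FICO's implementation.

```python
import time
from dataclasses import dataclass

@dataclass
class Txn:
    amount: float
    timestamp: float

def build_features(history: list[Txn], current: Txn) -> dict:
    """Reduce raw transaction history to a few predictive features.
    Windows, names, and thresholds here are illustrative assumptions."""
    recent = [t for t in history if current.timestamp - t.timestamp < 3600]
    avg = (sum(t.amount for t in history) / len(history)) if history else current.amount
    return {
        "txn_count_1h": len(recent),
        "spend_1h": sum(t.amount for t in recent),
        "amount_vs_avg": current.amount / avg if avg else 1.0,
    }

def decide(features: dict) -> str:
    """Toy decision rule standing in for a trained model."""
    if features["txn_count_1h"] > 10 or features["amount_vs_avg"] > 5.0:
        return "review"
    return "approve"

now = time.time()
history = [Txn(40.0, now - 600), Txn(25.0, now - 1200)]
current = Txn(900.0, now)

start = time.perf_counter()
decision = decide(build_features(history, current))
elapsed_ms = (time.perf_counter() - start) * 1000
print(decision, f"({elapsed_ms:.3f} ms)")  # the whole path has to fit a millisecond budget
```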
Waid also underscored the role of open architectures. Customers rely on third-party data providers, meaning FICO’s platform has to unify disparate sources through cataloging, deduplication, and linkage—the data platform activities that are key to making the data usable. The FICO approach to “householding” and open interfaces for semantic consistency help customers stitch together internal and external datasets into a coherent fabric. This makes the semantic and metadata layers mission-critical, as they provide business context on top of technical lineage.
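As an illustration of what "householding" and linkage can look like in practice, the sketch below groups customer records into households using a simple normalized blocking key. This is a deliberately naive, hypothetical example; production entity resolution relies on probabilistic matching and far richer rules than shown here.

```python
from collections import defaultdict

def normalize(value: str) -> str:
    """Crude normalization so trivially different strings can match."""
    return " ".join(value.lower().replace(".", "").replace(",", "").split())

def household_key(record: dict) -> tuple:
    """Blocking key: same last name at the same normalized address.
    Real householding logic is far more nuanced (aliases, typos, moves)."""
    last_name = normalize(record["name"]).split()[-1]
    return (last_name, normalize(record["address"]))

records = [
    {"id": 1, "name": "Ann Rivera",  "address": "12 Oak St., Springfield"},
    {"id": 2, "name": "Ann Rivera",  "address": "12 oak st springfield"},   # duplicate of 1
    {"id": 3, "name": "Luis Rivera", "address": "12 Oak St, Springfield"},  # same household
    {"id": 4, "name": "Ann Rivera",  "address": "9 Elm Ave, Portland"},     # different household
]

households = defaultdict(list)
for rec in records:
    households[household_key(rec)].append(rec["id"])

for key, ids in households.items():
    print(key, "->", ids)
```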
Finally, FICO’s customers value transparency, being able to trace decisions from data ingestion to outcomes. This is increasingly relevant as agentic AI systems take hold, where decision-making must be explainable and governed. For Waid, the ultimate evolution of data platforms lies in linking features, models, and business outcomes in a continuous feedback loop, enabling rapid simulation, hypothesis testing, and iterative improvements.
NFL: Building a Modular Ecosystem
Data is at the heart of fan engagement, game analysis, and operational excellence for the National Football League (NFL). Joe Steinke, Senior Director of Data Management, described the league’s journey from siloed data systems toward a centralized analytics ecosystem.
The NFL organizes its approach around five pillars:
- Governance – embedding quality and controls at the “front door” of the platform.
- Engineering & Architecture – building scalable, modular infrastructure.
- Solutions Architecture – tackling complex integrations.
- Data Operations – 24/7 support and SLA-driven processes.
- Analytics Enablement – aligning product owners, analysts, and BI teams with stakeholders.
The league intentionally built around high-value use cases, modularizing workflows to avoid rigid architectures. Automation was central: every pipeline is deployed with minimal manual intervention, shortening time-to-value while reducing error.
Critically, the NFL prioritized metadata and governance early. By standardizing models and definitions, the league avoided the trap of inconsistent measures across silos, such as multiple versions of “revenue.” Steinke emphasized modularity as a hedge against an uncertain future: as AI and agentic applications emerge, the NFL’s platform can flexibly support new requirements without wholesale redesign.
Perhaps most importantly, Steinke highlighted the human dimension. A platform’s sophistication is irrelevant if stakeholders cannot access, trust, and act on the data. Stakeholder adoption is the ultimate KPI for the NFL, ensuring the platform accelerates outcomes for fans, teams, and the league.
HighFens: Bridging Data Maturity Gaps Across Enterprises
Frederic Van Haren, CTO of HighFens, offered a consulting and solutions-provider perspective. His firm helps organizations at varying maturity levels navigate the complexities of AI, training, and inference.
Van Haren stressed that storing data is not the same as using it. Many enterprises collect petabytes without extracting value, like buying books without reading them. Instead, platforms must enable continuous processing and lifecycle management.
HighFens’ work often reveals stark contrasts between teams focused on GPUs and compute versus those focused on data. Successful AI solutions require marrying the two, with attention to scalability, cost optimization, and hybrid architectures that combine file, object, and NVMe storage.
One of Van Haren's most compelling insights concerns the inertia of data gravity. Enterprises often hold on to aging storage assets because migrating petabytes is difficult and expensive. This leads to heterogeneous environments spanning generations of hardware and software. Open table formats like Apache Iceberg, which reduce write contention across engines and can absorb millions of events per second, are essential for stitching these environments together.
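As a hedged illustration of how an open table format decouples data from any one engine, the sketch below reads an Iceberg table through PyIceberg, assuming a catalog named "default" is already configured (for example a REST or Glue catalog) and a hypothetical table telemetry.events exists.

```python
# pip install "pyiceberg[pyarrow]"  -- sketch assumes a configured Iceberg catalog
from pyiceberg.catalog import load_catalog

# Load whatever catalog is configured in ~/.pyiceberg.yaml or environment variables.
catalog = load_catalog("default")

# Hypothetical table name; any engine that speaks Iceberg (Spark, Trino,
# Flink, DuckDB, ...) could read and write the same table concurrently.
events = catalog.load_table("telemetry.events")

# Push a filter and column projection down into the table scan.
recent = events.scan(
    row_filter="event_time >= '2025-01-01T00:00:00'",
    selected_fields=("device_id", "event_type", "event_time"),
).to_arrow()

print(recent.num_rows, "rows from the most recent snapshot")
```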
He also pointed out that silos are as much cultural as technical. Different groups often operate independently, perpetuating fragmented data landscapes. Overcoming this requires new tooling, governance, cataloging, and, increasingly, agentic AI workflows that can automate some integration tasks.
Hammerspace: Global Namespace-based Data Platform
Hammerspace positions itself as a unifying layer across heterogeneous storage systems, tackling the challenge of data silos head-on. Sam Newnam described their approach as helping organizations shift from “storage” to data platform thinking, emphasizing file-level granularity across protocols, providers, and geographies.
Key pillars of Hammerspace’s strategy include:
- Simplification through a global namespace, making distributed data visible and accessible.
- Automation of data movement and placement, ensuring right-tier, right-time availability (a generic policy sketch follows this list).
- Acceleration of AI pipelines by staging NVMe close to GPUs and seamlessly returning data post-use.
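The automated-placement bullet can be pictured as a declarative policy evaluated against file metadata. The sketch below is a generic, hypothetical illustration of that idea; the tier names, thresholds, and fields are invented and do not represent Hammerspace's actual objective language or API.

```python
from dataclasses import dataclass

@dataclass
class FileMeta:
    path: str
    size_gb: float
    days_since_access: int
    tagged_for_training: bool

def choose_tier(meta: FileMeta) -> str:
    """Toy placement policy: hot data near GPUs, cold data on cheap capacity.
    Thresholds and tier names are invented for illustration."""
    if meta.tagged_for_training:
        return "nvme-near-gpu"        # stage ahead of a training run
    if meta.days_since_access <= 7:
        return "performance-file"
    if meta.days_since_access <= 90:
        return "object-warm"
    return "object-archive"

files = [
    FileMeta("/datasets/frames/cam01.tar", 120.0, 2, True),
    FileMeta("/projects/q3/report.parquet", 0.4, 30, False),
    FileMeta("/archive/2019/logs.tgz", 800.0, 400, False),
]

for f in files:
    print(f.path, "->", choose_tier(f))
```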
Uniquely, Hammerspace embraces open standards at the kernel level, eliminating the need for custom clients and proxies and ensuring compatibility across Linux environments. This reduces operational complexity for platform engineers who wear multiple hats.
From a governance standpoint, Hammerspace advocates a single access point with strong auditability. This enables consistent enforcement of security and compliance policies while still supporting hybrid and multi-cloud deployments. Newnam noted that 80% of customers have hybrid strategies, making seamless namespace stitching a competitive differentiator.
Outcomes are measured in time-to-insight acceleration, lower TCO, and reduced data migration costs. By virtualizing data across environments, Hammerspace positions itself as an "AI factory enabler" that does not force forklift upgrades.
HPE: Object Storage Reborn with Data Intelligence
HPE views the intersection of storage and AI as transformative. Ed Beauvais, Director of Product Management, argued that object storage has been reborn in the AI era. The familiar S3 protocol remains, but performance, scale, and intelligence requirements have changed.
HPE’s X10000 platform exemplifies this shift. It integrates inline data intelligence, processing metadata, sampling unstructured inputs, and running lightweight AI models at ingestion. By exposing results in open-table formats like Iceberg or as Parquet-formatted file objects, HPE ensures customers can use their preferred query engines without lock-in.
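Because results are exposed as open-table or Parquet objects, any compatible engine can query them directly. The snippet below is a minimal sketch using DuckDB against a hypothetical Parquet path; the bucket name, columns, and credential setup are assumptions, not details of the X10000.

```python
# pip install duckdb
import duckdb

con = duckdb.connect()            # in-memory database
con.execute("INSTALL httpfs")     # needed for s3:// paths
con.execute("LOAD httpfs")        # assumes S3 credentials are configured separately

# Hypothetical object path for metadata the platform wrote at ingestion.
rows = con.execute("""
    SELECT content_type, count(*) AS objects, avg(size_bytes) AS avg_size
    FROM read_parquet('s3://example-bucket/enriched-metadata/*.parquet')
    GROUP BY content_type
    ORDER BY objects DESC
""").fetchall()

for row in rows:
    print(row)
```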
This architecture directly addresses the need for retrieval-ready data. Metadata enrichment, vector embedding extraction, and RAG (retrieval-augmented generation) pipelines are supported natively. Security is also a design principle, with IAM integrated into embedding pipelines to prevent unauthorized LLM access.
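A minimal sketch of the retrieval-ready idea, with an access check before anything reaches an LLM: documents are embedded at ingestion, queries are matched by cosine similarity, and results are filtered by the caller's entitlements. The embedding function and permission model below are placeholders, not HPE's pipeline.

```python
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Placeholder embedding: a real pipeline would call an embedding model."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)

# Documents enriched and embedded at ingestion, each carrying an ACL.
corpus = [
    {"doc": "Q3 revenue summary",      "acl": {"finance"}},
    {"doc": "GPU cluster runbook",     "acl": {"platform-eng"}},
    {"doc": "Public product brochure", "acl": {"finance", "platform-eng", "public"}},
]
for item in corpus:
    item["vec"] = embed(item["doc"])

def retrieve(query: str, caller_groups: set, k: int = 2) -> list[str]:
    """Return the top-k documents the caller is entitled to see."""
    q = embed(query)
    allowed = [c for c in corpus if c["acl"] & caller_groups]   # IAM-style check first
    ranked = sorted(allowed, key=lambda c: float(q @ c["vec"]), reverse=True)
    return [c["doc"] for c in ranked[:k]]

print(retrieve("quarterly financials", caller_groups={"finance"}))
print(retrieve("quarterly financials", caller_groups={"public"}))   # restricted docs never surface
```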
Beauvais emphasized that modern storage must be more than persistence. It must accelerate “time to first inference,” enabling GPUs to stay productive. HPE leverages disaggregated hardware via Alletra MP, unified by a single management plane, and delivered via GreenLake for flexible consumption.
Ultimately, HPE envisions storage as a pipeline, not just a bucket. Embedding protocols like MCP (Model Context Protocol) ensures that AI agents can interact with data contextually, marking a step toward agentic AI-native infrastructure.
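To show what "AI agents interacting with data contextually" can look like, here is a minimal sketch of an MCP tool server using the FastMCP helper from the official MCP Python SDK; the tool name, metadata source, and fields are invented for illustration and do not describe HPE's embedding of MCP.

```python
# pip install mcp  -- sketch assumes the official MCP Python SDK
import json
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("storage-metadata")

@mcp.tool()
def describe_dataset(path: str) -> str:
    """Return catalog metadata for a dataset so an agent can reason about it.
    The lookup below is a hard-coded stand-in for a real metadata service."""
    fake_catalog = {
        "/lake/telemetry/events": {
            "format": "iceberg",
            "row_count_estimate": 1_200_000_000,
            "classification": "internal",
            "retention_days": 365,
        }
    }
    return json.dumps(fake_catalog.get(path, {"error": "unknown dataset"}))

if __name__ == "__main__":
    mcp.run()   # speaks MCP over stdio so an agent host can call the tool
```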
Dell Technologies: Composable Engines and Unified Control Planes
Geeta Vaghela and Vrashank Jain shared Dell Technologies' perspective, which highlights the convergence of unstructured data growth, retrieval-ready requirements, and built-in governance.
Dell identifies three core trends:
- Explosion of unstructured and multimodal data – video, sensor, and document inputs must be processed at scale.
- Retrieval-ready by design – data must be indexed, enriched, and queryable upon ingestion, not as an afterthought.
- Governance by default – compliance, lineage, and security integrated into the platform from the start.
To meet these demands, Dell has architected an AI data platform built on composable engines. Storage engines like PowerScale and ObjectScale provide resilient, high-performance foundations validated with NVIDIA. Data engines integrate best-in-breed technologies like Starburst for federated queries, Elastic for vector search, and Spark for processing.
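As a hedged illustration of the federated-query engine in that mix, the snippet below uses the open-source Trino Python client (Starburst is built on Trino) to join a hypothetical Iceberg table with a hypothetical Elasticsearch-backed catalog in a single statement; the hostnames, catalogs, schemas, and columns are assumptions, not Dell's configuration.

```python
# pip install trino
import trino

conn = trino.dbapi.connect(
    host="trino.example.internal",   # hypothetical coordinator
    port=8080,
    user="analyst",
    catalog="lake",                  # e.g. an Iceberg catalog
    schema="sales",
)

cur = conn.cursor()
cur.execute("""
    SELECT o.region, count(*) AS orders, avg(s.relevance) AS avg_relevance
    FROM lake.sales.orders AS o
    JOIN search.default.product_reviews AS s   -- hypothetical Elasticsearch catalog
      ON o.product_id = s.product_id
    GROUP BY o.region
    ORDER BY orders DESC
""")

for region, orders, avg_relevance in cur.fetchall():
    print(region, orders, avg_relevance)
```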
Crucially, Dell's approach is open and modular. The platform avoids vendor lock-in by embracing Iceberg and open APIs while letting customers combine only the engines they need. This openness positions Dell as a counterweight to rigid, monolithic stacks.
Looking forward, Dell anticipates three shifts:
- From separate stacks (analytics, AI, business apps) to unified platforms with common control planes.
- From static data access to intelligent orchestration powered by agentic AI.
- From infrastructure-centric management to coordinated, real-time platform management across SQL, Spark, AI, and vector systems.
In Dell’s view, the data platform is evolving into the enterprise control plane for intelligence, where human decision-makers and AI agents work side by side.
OurANGLE: The Market’s Trajectory
The future of data platforms will not be defined by a single vendor or architecture, but by the layered integration of storage, data services, data management, metadata, and governance. End-users demand transparency, modularity, and stakeholder-centric outcomes. Vendors are responding with composable, open, and intelligent platforms that shift value from infrastructure to business results.
Taken together, these perspectives reveal a data platform market in rapid transition.
End-users like FICO and the NFL emphasize outcomes: transparency, modularity, governance, and stakeholder adoption. For them, data platforms are not about technology for its own sake, but about enabling decision-making at speed and scale.
Service providers like HighFens underscore the reality that enterprises are at different maturity levels, wrestling with silos, inertia, and data gravity. AI training and inference amplify these challenges, demanding holistic solutions that span hardware, software, and governance.
Vendors like Hammerspace, HPE, and Dell are responding by pushing intelligence into the storage and data services layers, embracing open standards, and positioning platforms as pipelines that accelerate AI-readiness.
Themes for the next year … at least!
The Storage Layer is Becoming Active
Gone are the days when storage merely wrote and retrieved data. Inline intelligence, metadata enrichment, and vectorization are being embedded at ingestion. Storage is now the first step in the AI pipeline.
Open Table Formats are the De Facto Standard
Iceberg, Delta, and Hudi have become the lingua franca for interoperability. Vendors recognize that lock-in is untenable; customers demand flexibility and future-proofing.
Metadata and Governance are the Linchpins
As FICO and the NFL illustrate, trust in data requires governance, lineage, and semantic consistency. Without these, AI adoption stalls. Governance is no longer a bolt-on; it must be built in.
Platforms are Moving Toward Unified Control Planes
The vision of a composable yet unified backbone reflects a broader industry trend. Enterprises cannot indefinitely manage separate systems for analytics, AI, and business applications. Convergence is inevitable.
Agentic AI Will Stress-Test Platforms
The move toward agentic AI, where autonomous agents interact with data, applications, and each other, will expose weaknesses in platforms that lack retrieval-ready data, strong security, or unified orchestration. The winners will be those who embed retrieval, reasoning, and action directly into their stacks.
As agentic AI accelerates, the stakes will rise. Data platforms must evolve into the control planes of enterprise intelligence, where humans and AI systems operate seamlessly on trusted, governed data. In this future, the differentiator is not just storing or querying faster; it is enabling organizations to confidently act, decide, and innovate on AI-ready and business-ready data.
We will have more to come in the near future. Stay tuned!
Here are the videos if you missed any of the sessions: