
Breaking Analysis | Unifying intelligence in the age of data apps

With George Gilbert

We believe the future of intelligent data apps will enable virtually all organizations to operate a platform that orchestrates an ecosystem similar to that of Amazon.com. By this we mean dynamically connecting and digitally representing an enterprise’s operations, including its customers, partners, suppliers and even competitors. This vision includes the ability to rationalize top-down plans with bottom-up activities across the many dimensions of a business, e.g. demand, product availability, production capacity, geographies, etc. Unlike today’s data platforms, which generally are based on historical systems of truth, we envision a prescriptive model of a business’ operations enabled by an emerging layer that unifies the intelligence trapped within today’s application silos.

In this Breaking Analysis, we explore in depth the semantic layer we’ve been discussing since early last year. To do so we welcome Molham Aref, the CEO of RelationalAI.

Data and AI are Changing Applications and Business Operations

In the above graphic we attempt to depict some of the dimensions of the shifts taking place in customer environments, including:

  • The shift in emphasis from an application-centric world to one that is data-centric, where logic is embedded in the data and not locked up in application silos.
  • For decades, software has been deployed to automate processes. In the age of AI we’re automating decisions, and AI is taking actions.
  • To enable this, we must unify the so-called metadata model, which supports the move from a world of historic systems of truth to a real-time model of an organization.

To get us where we are today, we had to separate compute from storage to take advantage of cloud scale. We believe a new technology layer is needed to capture all the intelligence that has been locked inside of application silos for years. Our research indicates that by doing so, organizations can coherently work with that shared data across the enterprise and we use Amazon’s retail business as an example of the desired outcome. 

This capability to coherently share data across the enterprise is enabled by what the industry refers to as the semantic layer.

What is a Semantic Layer and Why Does it Matter?

We asked Molham to explain his view of what exactly a semantic layer is, and how it is broader than the semantic layers that define metrics and dimensions for BI tools, such as dbt, AtScale, and Looker’s LookML.

Molham highlights the evolution of data management and the emergence of semantic layers in modern data stacks. He also discusses the challenges and solutions associated with integrating disparate data sources, and the future direction of semantic technologies.

Key Points

  • Data Clouds and Modern Data Stacks: Recognition of the shift toward centralized data repositories like data clouds (e.g., Snowflake, Salesforce Data Cloud), which solve the issue of data accessibility but continue to create data fragmentation because customers typically have more than one.
  • Semantic Layer Emergence: There is a need for a semantic layer to unify disparate data meanings across various databases, enabling a common business language. So far, these semantics have been focused on diagnostic analytics.
  • Challenges in Application Logic: There are still issues with recreating application logic (e.g., gross margin calculations or other common metrics) in various BI tools, leading to inconsistent definitions. In other words, the meaning of margin in one corpus could differ from its meaning in another, requiring a rationalization of disparate data sets (see the sketch after this list).
  • Categories of Semantic Layers:
    1. BI-centric, SQL-centric: Using technologies like dbt, LookML, and AtScale for defining metrics and dimensions and making them accessible and reusable.
    2. Semantic Web Standards: A smaller community focusing on richer semantic application standards like OWL, SPARQL, SHACL (Shapes Constraint Language), emphasizing standards over DSLs (domain-specific languages).
    3. Modeling Languages: Notable examples here include Legend from Goldman Sachs, allowing data interaction through semantic layers and supporting various query languages; and Morphir from Morgan Stanley.
  • Enterprise Semantic Layers: Notable developments in enterprise applications (e.g., Blue Yonder, SAP) and rumors of Salesforce developing a semantic layer, mostly focusing on backward-looking processes and analytics.
  • Future of Semantic Layers: Emphasizing the integration of intelligent semantics for predictive and prescriptive analytics, moving beyond descriptive analytics.
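
To make the "define once, reuse everywhere" idea concrete, here is a minimal sketch in Python of a shared metric definition. It is illustrative only: the Metric class and the sample table are hypothetical constructs of ours, not the API of dbt, LookML, or AtScale, which express the same idea in their own declarative DSLs.

```python
import pandas as pd

# Hypothetical stand-in for a semantic-layer metric definition.
# Real tools (dbt, LookML, AtScale) declare this in their own DSLs.
class Metric:
    def __init__(self, name: str, numerator: str, denominator: str):
        self.name = name
        self.numerator = numerator      # column summed for the numerator
        self.denominator = denominator  # column summed for the denominator

    def compute(self, df: pd.DataFrame, by: str) -> pd.Series:
        grouped = df.groupby(by)[[self.numerator, self.denominator]].sum()
        return (grouped[self.numerator] / grouped[self.denominator]).rename(self.name)

# Gross margin is defined once, centrally, and every consumer reuses it.
gross_margin = Metric("gross_margin", numerator="gross_profit", denominator="revenue")

orders = pd.DataFrame({
    "region":       ["East", "East", "West"],
    "revenue":      [100.0, 200.0, 150.0],
    "gross_profit": [40.0, 90.0, 45.0],
})

print(gross_margin.compute(orders, by="region"))
# East: (40 + 90) / (100 + 200) = 0.433...; West: 45 / 150 = 0.30
```

The point is not the code but the contract: if every BI tool resolves "gross margin" through the same definition, the number cannot silently diverge between dashboards.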

Aref emphasizes the critical role of semantic layers in modern data management, outlining the transition from fragmented data systems to unified, intelligent platforms. He underscores the importance of semantic layers in bridging the gap between disparate data sources and application logic, paving the way for advanced analytics with predictive and prescriptive modeling.

The future of semantic layers lies in their ability to evolve from mere descriptive analytics to incorporating intelligent semantics for predictive and prescriptive insights, thereby transforming data management into a more dynamic and foresighted field.  

Why do We Need a New Technology Layer?

We asked Molham to explain the technology that needs to be created to realize our forward-looking vision, i.e. the ability to prescriptively describe all the activities of all the entities in a business.

Aref delves deeper into the intricacies of semantic layers, particularly focusing on the need for new technologies to encapsulate intelligent semantics and the distinction between code-based and model-based semantic capture.

Key Points

  • Necessity for New Technology: Current semantic layers inadequately address intelligent semantics, often relying on procedural systems like Python code.
  • Code vs. Model-Based Semantics: Challenges with code-based semantics include limited shareability, incomprehensibility to business users, and technology dependency, hindering portability.
  • Value of Declarative, Technology-Independent Semantics: The importance of capturing semantics in a technology-independent manner, preferably declaratively.
  • Relational Knowledge Graph as the Ideal Abstraction: Advocates for a relational knowledge graph compatible with modern data stacks and data clouds based on the relational paradigm.
  • Richer Knowledge Types: The potential of relational knowledge graphs to encompass various knowledge types, including statistical, probabilistic, deterministic, and symbolic.
  • Advanced Analytical Capabilities: Enabling advanced analytics like graph analytics, rule-based reasoning, prescriptive analytics (e.g., integer programming), and predictive analytics (e.g., time series forecasting, simulation, probabilistic programming, graph neural networks). A minimal sketch of rule-based reasoning follows this list.
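
To ground the distinction between procedural code and declarative rules, here is a minimal sketch of rule-based reasoning over relations in plain Python, in the spirit of Datalog. It is our own illustration of the general technique, not RelationalAI’s language or any vendor’s implementation, and the relation names are invented.

```python
# Base facts: direct component relationships in a toy bill of materials.
part_of = {("spoke", "wheel"), ("wheel", "bike"), ("frame", "bike")}

def derive_contains(part_of: set[tuple[str, str]]) -> set[tuple[str, str]]:
    """Declarative rule: contains(x, z) if part_of(x, z), or if
    part_of(x, y) and contains(y, z). Evaluated as a fixpoint:
    re-apply the rule until no new facts are derived."""
    contains = set(part_of)
    while True:
        new = {(x, z)
               for (x, y) in part_of
               for (y2, z) in contains
               if y == y2} - contains
        if not new:
            return contains
        contains |= new

print(sorted(derive_contains(part_of)))
# Derives ("spoke", "bike") transitively, with no hand-written traversal code.
```

The rule states what must hold; the engine decides how to compute it. That inversion is what makes the semantics shareable, inspectable, and independent of any one execution strategy.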

Aref’s vision for the future of semantic layers is centered around a relational knowledge graph, which he sees as crucial for integrating more complex and intelligent semantics. This approach promises to overcome the limitations of current semantic technologies, fostering a more versatile and insightful data management ecosystem.

As we’ve discussed in previous Breaking Analysis episodes, knowledge graphs allow for expressive data semantics but are cumbersome to query. Enabling the simplicity of SQL’s declarative queries for knowledge graphs will, in our view, broaden their appeal and open up use cases beyond today’s narrower applications (e.g. cybersecurity).

The key to unlocking the full potential of semantic layers lies in the shift from traditional code-based semantics to relational knowledge graphs, enabling a vast array of analytical capabilities and ushering in a new era of intelligent data management.  

Today’s Brute Force Approach is Limited

Data platforms today get us only part way to our vision of embedded and accessible intelligence. Third-party tools still introduce significant fragmentation within each platform.

This dramatic graphic above shows Jean-Claude Van Damme straddling two 18-wheelers. The trucks represent today’s data platforms which are mostly SQL DBMS based. The picture on the left side implies that data platforms without rich semantics are pretty easy to straddle. However, as the platforms incorporate more of the application semantics it becomes much harder to straddle them, represented on the right side of the graphic.

Thinking about the major modern data platforms today, we see Databricks with Unity Catalog starting to add semantics to its platform. Snowflake is likely to follow suit by building its own extended metadata catalog. AWS’ DataZone perhaps gives us clues as to its direction. Google’s Dataplex appears to be going down this path. Microsoft Fabric and the Power platform are the likely path for that company. Oracle will try to attack this from its very DBMS-centric point of view. We’ll see how this all plays out, but these are the clues we’re watching. VAST Data has gone as far as dissolving much of the distinction between data and metadata.

There is no Clear Tool Stack to Solve for the Next Data Platform

Today we have a collection of bespoke tools spanning governance, security, metrics, data quality, observability and transformation that can be cobbled together.

The above graphic is from ETR’s Emerging Technology Survey (ETS), a survey of more than 1,500 IT decision makers focused on which emerging tech platforms they’re using. The survey only captures non-public, emerging companies.

The graphic selectively shows some of the tooling that is representative of the supporting data ecosystem and gets us partway to our vision of the future. The Y axis shows Net Sentiment, a measure of intent to engage. The X axis is Mindshare, which represents how well-known a company is to these customers.

Grafana stands out a bit. You see dbt and Fivetran are prominent, as is Collibra. But organizations have many choices, requiring them to stitch together different elements of the stack. There doesn’t appear to be a LAMP stack or ELK stack equivalent.

The Salesforce Data Cloud and Palantir platforms are somewhat instructive with respect to the future. Earlier this month, theCUBE Research talked to the EVP of the Salesforce Data Cloud. Salesforce is uniquely up-leveling today’s data platforms by creating a metadata-driven set of semantics that also borrows the application semantics from the Salesforce operational apps. The relevance of this is that setting up a pipeline to ingest, transform, and unify all the data becomes a configuration problem, not a code problem.

Palantir takes this somewhat further because they can model entities that represent the rest of the business.

But these are still both walled gardens.

Can a Platform Like RelationalAI Solve Semantics Across Multiple Data Platforms?

We want to understand from Molham whether, for customers that are fully invested in the prominent data platforms, RelationalAI could become a platform that hosts, simplifies, enhances and even unifies this cobbling together of tools in an attempt to add coherent semantics. And if so, how does the company think about solving this problem?

Aref elaborates on the approach of his company, emphasizing the principle of meeting customers’ existing infrastructures and needs, particularly focusing on integrating with data clouds like Snowflake and supporting advanced analytics capabilities.

Key Points

  • Market Approach: Meeting Clients Where They Are: Emphasizing the importance of adapting to existing client ecosystems, starting with integration into platforms like Snowflake.
  • Integration with Data Clouds: Highlighting the company’s strategy of running within the security perimeter of data clouds, leveraging existing governance frameworks.
  • Architectural Compatibility: Ensuring architectural alignment with systems like Snowflake to reduce friction traditionally introduced in data management processes like data copying, synchronization, and re-governing.
  • Relational Paradigm Emphasis: Advocating for the relational paradigm in data management, as opposed to paradigms designed for coding and application development.
  • Knowledge Graph as Semantic Layer Target: Positioning their knowledge graph to interface with other semantic layers, like Goldman Sachs’ Legend, and emphasizing the importance of enabling unique, enterprise-owned semantic models.
  • Unified Business Model Vision: Stressing the need for a unified, business-centric model that is portable and not vendor-specific, allowing for compilation down to various target systems.

Molham Aref puts forth a vision of seamlessly integrating advanced data analytics into existing data ecosystems. He advocates for a unified approach to semantic layers, focusing on relational paradigms and enterprise-specific models that enhance the value and efficiency of data management across various platforms.

In the evolving landscape of data management, the key to success lies in embracing a unified, enterprise-centric model, seamlessly integrating with existing data ecosystems and championing the relational paradigm for enhanced efficiency and value.  

Complementary or Disruptive to the Modern Data Stack of Today?

The idea behind the above graphic is that the foundational value has been in the analytic DBMS. We’re now layering additional value on metadata-based tooling (e.g. Salesforce Data Cloud or Palantir). The third stage is integrating all the intelligence that was trapped in application silos: not just the application logic, but also the analytics that enable a self-driving, continuously learning model of a business.

Clearly Aref sees RelationalAI as a complement and not a competitor to platforms such as Snowflake, a key partner. While we are aligned with his vision we found his answer to be diplomatic and wanted to push a bit on why he feels this is complementary and not disruptive.

His answer focused on the complexity of building large-scale data systems like Snowflake and BigQuery, but emphasized the importance of technological independence in semantic layers, drawing an analogy with NVIDIA’s role relative to CPUs. We found this both powerful and perhaps a validation of the nature of our question, as NVIDIA is most definitely disruptive.

Regardless, here’s a summary of how Molham views this issue:

Key Points

  • Complexity of Building Data Systems: Acknowledges the difficulty in creating sophisticated data platforms like Snowflake, BigQuery, and Microsoft Fabric.
  • Diverse Enterprise Needs: Notes that enterprises often use multiple data systems and do not want to be restricted to a single vendor.
  • Vendor Independence in Semantics: Advocates for vendor-independent semantic layers to avoid locking in with specific compute providers.
  • NVIDIA Analogy: Compares the role of his company to NVIDIA’s role alongside CPUs, suggesting a complementary, specialized approach to enhancing data systems.
  • Role of Co-Processors: Highlights the importance of co-processors in providing efficient, specialized capabilities (like intelligence and visualization) within existing systems.
  • Future of Data Clouds: Envisions data clouds evolving beyond mere databases, potentially becoming central platforms for application development, similar to Oracle in the 1990s.

Aref underscores the necessity for semantic layers to remain technologically independent, advocating for a co-processor approach akin to NVIDIA’s relationship with CPU manufacturers. This strategy aims to enrich data clouds, transforming them into versatile platforms for a wide range of applications.

The future of data management hinges on the creation of technologically independent semantic layers, much like NVIDIA’s role relative to CPUs, paving the way for versatile, co-processor-enhanced data clouds that cater to the diverse and evolving needs of enterprises.

The Role of LLMs in Solving for Data Coherence

We’ve had decades of challenges building top-down enterprise models. Custom-built enterprise data models gave us packaged apps such as SAP, Oracle, NetSuite, and Salesforce. Enterprise data warehouses (EDWs) bred data marts. Even with today’s BI it’s been challenging to get widely adopted shared semantics. Organizations have these bottom-up metrics (e.g. bookings, billings, revenue, etc.). And there are top-down dimensions like the organizational hierarchy. Rationalizing all this complexity has created markets for AtScale, dbt, and others.

We asked Molham for his thoughts on the role of LLMs in addressing these challenges.

His response explores the intersection of Large Language Models (LLMs) and knowledge graphs in the context of building semantic layers. He elaborates on the synergy between these technologies in simplifying and enhancing data management and semantic understanding within enterprises.

Key Points

  • Impact of LLMs on Semantic Layers: Emphasizes the significant contribution of LLMs in constructing semantic layers, particularly in mining knowledge from diverse enterprise sources.
  • Symbiotic Relationship of LLMs and Knowledge Graphs: LLMs facilitate the creation of knowledge graphs by extracting information from applications, documents, and images, making semantic layer construction more cost-effective and adaptable (see the sketch after this list).
  • Enhanced Efficacy of Language Models: Describes how a well-structured semantic layer or knowledge graph improves the effectiveness of language models in interpreting and navigating complex data.
  • Navigating Complex Data Silos: The semantic layer simplifies the vast array of enterprise data (potentially millions of columns) into a manageable number of concepts.
  • Simplification to Core Concepts: Even the most complex enterprises can be modeled using a few hundred key concepts, despite having millions of data columns.
  • Role of Language Models in Semantic Relationships: Language models assist in identifying and naming relationships between concepts, contributing to the evolution and maintenance of the semantic layer.
  • Decomposing Business Models: Highlights the need for processes that decompose business into functional sub-models using a standard set of enterprise concepts.
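
As a rough sketch of the LLM-to-knowledge-graph flow described above, consider the following Python. The extract_triples function is a placeholder for a real LLM call; its hard-coded output and the relation names are our assumptions for illustration, not any product’s API.

```python
import networkx as nx

def extract_triples(text: str) -> list[tuple[str, str, str]]:
    """Placeholder for an LLM call that mines (subject, relation, object)
    triples from unstructured enterprise sources. Hard-coded here; in
    practice this would prompt a model and parse its structured output."""
    return [
        ("Order", "placed_by", "Customer"),
        ("Order", "contains", "Product"),
        ("Product", "supplied_by", "Supplier"),
    ]

# Fold the extracted triples into a (growing) knowledge graph.
graph = nx.MultiDiGraph()
for subject, relation, obj in extract_triples("...contracts, schemas, docs..."):
    graph.add_edge(subject, obj, relation=relation)

# A few hundred such concepts and named relationships become the map that
# both humans and language models use to navigate millions of columns.
print(list(graph.edges(data="relation")))
```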

Aref underscores the transformative role of LLMs and knowledge graphs in revolutionizing semantic layers. He envisions a streamlined approach where complex data ecosystems are distilled into a few hundred core concepts, significantly simplifying data management and analysis for enterprises.

The fusion of Large Language Models and knowledge graphs heralds a new era in semantic layer development, transforming the labyrinth of enterprise data into a navigable landscape of a few hundred key concepts, redefining efficiency and clarity in business modeling. 

Blue Yonder as an Example of the Potential of Tapping Metadata Intelligence

One of the goals of theCUBE Research is to understand the shifting value flow. In other words, where historically the center of gravity has been the DBMS (e.g. Oracle, Snowflake, etc.), will the metadata and intelligence that define the business entities become increasingly valuable, and how will that impact architectures, customer choice and vendor competition? We see early examples in the Salesforce Data Cloud and Palantir, which attempt to provide a model of the business with varying levels of intelligence. We see this future state as increasingly compelling for organizations.

Blue Yonder is another example: the company is reimagining supply chain and logistics ecosystems, rebuilding its applications on RelationalAI, which in turn runs on Snowflake. We see three evolutionary stages becoming clearer: moving from a world that is DBMS-centric, to one that is metadata-centric, to an intelligent model of the business.

We want to understand how Molham sees this evolution playing out, using Blue Yonder and the very complex supply chain and logistics example as a guide. Blue Yonder is run by Duncan Angove (formerly of Oracle, Infor, etc.) and holds the legacy assets of firms like Manugistics, JDA Software, i2 and others, which it is reimagining in an AI-powered world.

Molham Aref discusses the strategic shift of companies like Blue Yonder towards data-centric platforms like Snowflake and the transformative impact of translating traditional code-based business logic into knowledge graph-based systems.

Key Points

  • Blue Yonder’s Strategic Shift: Acknowledges Blue Yonder’s effective transition to Snowflake, benefiting their large enterprise customers by consolidating data spread across multiple databases.
  • Application-centric to Data-centric Transition: Highlights the move from application-centric systems to data-centric platforms, emphasizing Blue Yonder’s adaptation to industry trends.
  • Code-based to Relational Semantics: Discusses the transition from traditional code-based semantics to relational, knowledge graph-based semantics.
  • Importance in Supply Chain Networks: Stresses the critical role of predictive and prescriptive analytics in supply chain management, where decisions are driven by sophisticated analytics.
  • Efficiency Gains from Knowledge Graphs: Illustrates efficiency improvements, citing an example of replacing more than a hundred thousand lines of C++ code with several thousand lines of relational rules in a knowledge graph.
  • Cost and Quality Benefits: Points out the significant reductions in development and testing costs, and improvements in software quality and adaptability.
  • Future Relevance for Application Companies: Urges application companies to embrace data-centric approaches and leverage their intellectual property in knowledge graph-based systems for increased relevance and efficiency.

Summary Statement

Molham Aref emphasizes the paradigm shift from application-centric to data-centric approaches in enterprise software, illustrating how companies like Blue Yonder are pioneering this transition. He highlights the significant benefits of adopting knowledge graph-based systems, both in terms of operational efficiency and strategic alignment with current industry trends.

Pull Quote

“The transformation from code-based to knowledge graph-based semantics in enterprise applications is not just a technological shift; it’s a strategic imperative that significantly reduces complexity and cost, marking a new era of data-centric efficiency and intelligence in business operations.”

Unifying Disparate Analytic Stacks

We asked Molham to elaborate on the following premise. In the past we’ve had separate stacks for diagnostic analytics as well as those supporting predictive, prescriptive, planning, simulation and optimization efforts. We wanted to understand what customers can do when all of those stacks are integrated into one coherent model. We asked again about the Blue Yonder case example for some use cases and impacts.

Aref delves into the complexities of analytics in the supply chain sector and other industries, highlighting the transformative impact of integrating sophisticated analytics directly into data clouds like Snowflake.

Key Points

  • Rich Analytics in Various Industries: Acknowledges the widespread use of advanced analytics in industries like supply chain, financial services, telecommunications, retail, and consumer packaged goods (CPG).
  • Challenges with Traditional Approaches: Describes the inefficiencies of traditional methods, where data had to be extracted from data clouds for specialized analytics, leading to security, governance, and synchronization issues.
  • Co-Processor Model Benefits: Explains how integrating support for advanced workloads (like graph analytics and rule-based reasoning) directly into data clouds eliminates the need for external processing and reduces complexity.
  • Seamless Integration with Data Clouds: Envisions a scenario where data clouds natively support sophisticated analytics, enhancing their capabilities.
  • Operational Efficiency and Speed: Emphasizes the operational benefits, such as reduced need for synchronization across systems and faster model evolution.
  • Transformative Impact on Business Evolution: Highlights how this approach allows businesses to adapt and evolve more rapidly, unhampered by having to change each type of analytic model to capture business changes.

Molham passionately discusses the significant benefits of embedding sophisticated analytics within data clouds, which he sees as a game-changer for industries reliant on complex data analysis. This integration promises to streamline operations, enhance security and governance, and accelerate business evolution.

Embedding advanced analytics directly into data clouds revolutionizes how industries operate, offering a more seamless, secure and efficient approach that accelerates business evolution and eradicates the cumbersome fragmentation of traditional data analysis methods.

Creating an Amazon.com-Like Digital Twin of an Ecosystem

We asked Aref to react to our Amazon.com example, where operations form a closed learning loop, and to elaborate on the concept of a digital twin and the possibility of creating self-driving businesses through advanced semantic technologies.

Key Points

  • Amazon as an Example: Discusses Amazon’s highly automated operations, where planning and operations are in a continuous, self-improving loop.
  • Digital Twin Concept: Introduces the idea of a digital twin of a business, where the semantic layer captures business operations with sufficient richness to enable effective data feedback loops that learn from experience.
  • Resource Allocation Optimization: Highlights the ability to optimally allocate resources like capital, labor, inventory, and equipment through this model.
  • Challenges in a Siloed Tech Environment: Points out the limitations of disconnected and siloed technology in achieving holistic optimization and feedback.
  • Holistic Optimization Across Business Functions: Stresses the importance of unified technology platforms for achieving optimization in various business aspects like pricing, replenishment, and logistics.
  • Vision of Self-Driving Businesses: Envisions the future of businesses as self-driving entities, relying on platforms that support a general-purpose type of business intelligence that can bring all the different types of analytics to bear.

Aref envisions a future where businesses operate like self-driving entities, continuously learning and optimizing operations through the use of digital twins and unified semantic layers. This approach promises to transform how businesses allocate resources and make decisions, moving away from the inefficiencies of siloed systems.

The concept of self-driving businesses, powered by digital twins and advanced semantic technologies, marks a revolutionary shift in resource allocation and decision-making, transcending the limitations of traditional, siloed business models.

The Role of Retrieval-Augmented Generation (RAG)

Many folks, including us at theCUBE Research, are excited about RAG. We’ve built our own RAG with theCUBE AI and want to understand what role RAG plays in creating this coherent metadata-based model. Will it be a contributor, is it a stepping stone, or will RAG disappear?

Aref sees the RAG technique as a crucial method for enhancing language models by integrating them with deterministic, symbolic, and data assets.

Key Points

  • RAG in Language Models: Describes RAG as an essential technique for interfacing language models with other data and knowledge assets, enhancing their effectiveness.
  • Language Models and External Tools: Compares language models to humans, who are more effective when they have access to additional tools like libraries and calculators.
  • RAG’s Broad Application: Notes that RAG encompasses a range of techniques for connecting language models to both deterministic assets (APIs) and vector-based semantics (LLM-encoded knowledge for retrieval); see the sketch after this list.
  • Enhancing Language Models with Business Concepts: Augmenting RAG with business concepts and relationships captured in relational knowledge graphs, enhancing models’ understanding and capabilities.
  • Beyond Vector Retrieval: Points out that RAG’s utility extends beyond retrieving vector data, encompassing a wider range of knowledge types.
  • Investment in Combining Technologies: Highlights that many large enterprises are actively exploring how to integrate RAG with existing AI and data technologies to create more powerful systems.
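
For readers who want the mechanics, the retrieval step at the heart of RAG can be sketched in a few lines of Python. The embed function below is a crude stand-in for an embedding model (a deliberate simplification of ours); a real system would call an embedding API and a vector store rather than hashing characters and computing cosine similarity by hand.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Toy stand-in for an embedding model: hash characters into a
    fixed-size vector and normalize. Illustrative only."""
    vec = np.zeros(64)
    for i, ch in enumerate(text.lower()):
        vec[(i + ord(ch)) % 64] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

documents = [
    "Gross margin is defined as gross profit divided by revenue.",
    "Our supply chain replans inventory nightly.",
    "Employee onboarding takes five business days.",
]

def retrieve(query: str, k: int = 1) -> list[str]:
    """Rank documents by cosine similarity to the query embedding."""
    q = embed(query)
    scores = [float(q @ embed(d)) for d in documents]
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

# Retrieved passages are prepended to the prompt so the model answers
# from governed enterprise knowledge rather than only its weights.
context = retrieve("How do we define gross margin?")
prompt = f"Context: {context}\n\nQuestion: How do we define gross margin?"
print(prompt)
```

Aref’s point is that the same pattern generalizes: the "retrieval" can just as well pull facts and relationships from a relational knowledge graph as vectors from an index.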

Molham emphasizes the significant role of RAG in advancing language models, suggesting the value of relational knowledge graphs in teaching models about complex business concepts and relationships. He envisions a future where the integration of RAG with traditional AI and data via relational knowledge graph technologies leads to more robust and intelligent business systems.

The integration of Retrieval-Augmented Generation with language models is akin to giving them a library and tools such as a calculator, vastly expanding their understanding and capabilities in interpreting and navigating the complex world of business data and relationships.

A Future Forward Look at an End-to-End Model of an Organization 

We’ve been talking about the future. So let’s explore more deeply what that looks like. Specifically, what’s possible when applications can represent an end-to-end prescriptive model of the business: a system that tells you what should happen or what you should do, versus what did happen.

The graphic below, created by George Gilbert, uses Amazon.com as the metaphor for the future. The difference is that this vision is enabled by technology that scales horizontally and is available to most organizations. We’ve talked about “Uber for all.” Here we’re talking about Amazon.com for all. 

The graphic describes in more detail what Molham was starting to talk about with the Blue Yonder scenarios. In the past, our enterprise applications were mainly operational in nature: they tracked what happened. The big advances over the decades came from trying to integrate processes across functions, divisions and even geographies. In the future, not only are we integrating the processes, but we’re also integrating predictive and prescriptive models, along with the planning, simulation and optimization that inform those models, so that it all works across functions.

The technical term is “fan out”: you can look at the business from different angles even if you didn’t originally forecast or plan from that angle. In other words, the model fills that out.
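
A small, hypothetical illustration of "fan out" in Python: a plan kept at a fine grain can be re-aggregated along a dimension nobody originally planned by. The table and column names are invented for the example.

```python
import pandas as pd

# Hypothetical fine-grained forecast, originally planned product by product.
forecast = pd.DataFrame({
    "product": ["bike", "bike", "scooter", "scooter"],
    "region":  ["East", "West", "East", "West"],
    "units":   [120, 80, 200, 150],
})

# The original planning angle: units by product.
print(forecast.groupby("product")["units"].sum())

# "Fan out": the same plan viewed by region, an angle nobody planned from,
# recoverable because the model retains the fine-grained detail.
print(forecast.groupby("region")["units"].sum())
```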

Legacy packaged applications did so much work to integrate all these processes. We wanted to understand Molham’s point of view on where they fit in a world where you start layering a prescriptive model on top of them and they become part of this bigger model. 

Below, we summarize Aref’s thoughts on the gradual transition from traditional application-centric models to more data-centric architectures, acknowledging the challenges and opportunities in this shift.

Key Points

  • Gradual Transition of Applications: Recognizes that rewriting applications to be more data-centric is a gradual process, with some applications potentially never fully transitioning.
  • Service-Oriented Semantics: Suggests that certain applications may retain their code-based semantics, functioning as services to answer specific queries.
  • Migration to Data Clouds: Describes the ongoing process among enterprises and application companies to migrate their operations into data clouds, using tools like containers.
  • Silicon Valley’s Focus on Next-Generation Apps: Highlights the investment in Silicon Valley in developing next-generation app companies specifically addressing the need for data-centric architectures with easy-to-modify declarative business logic.
  • Limitations of Current Business User Access: Points out the complexity business users face with traditional applications, leading them to customize access to data with less efficient methods like downloading data into Excel.
  • Future of App Development: Envisions a new wave of app development focused on data-centric architectures with declarative logic and AI, aiming to make business semantics more accessible and actionable.
  • Refactoring and Unifying Semantics: Emphasizes the long-term goal of refactoring existing applications to align with new data-centric models and declarative semantics in relational knowledge graphs.

Molham sheds light on the evolving landscape of application development, moving from traditional, code-embedded semantics to more accessible, data-centric, declarative models. He highlights the strategic investments being made in Silicon Valley and elsewhere to address these challenges, pointing towards a future where analytics and business semantics are more integrated and user-friendly.

The transition to data-centric architectures in application development marks a significant shift in the industry, aiming to transform business semantics from obscure code to accessible, relational models that empower business users with more autonomy and analytical power.

While this transition won’t happen overnight, the bubbling up of trends we’ve been highlighting, including what we call the sixth data platform, data mesh, data fabric and the end-to-end intelligent enterprise, all underscores changing customer needs. In this future vision, elements of a business are represented digitally and in near real time.

It won’t happen tomorrow and will evolve over a decade or more. As well, like flying a plane on instrument flight rules, customers will have to gain confidence in these systems, not only rationalizing non-intuitive recommendations but trusting that governance and privacy are integral to the system. Regardless of the challenges, the business value impact of unifying intelligence across disparate systems will be enormous.

Do you agree?

How do you see the future of intelligent data apps evolving? How long will it take? What are the missing pieces and which companies are best positioned to deliver?

Keep in Touch

Thanks to Alex Myerson and Ken Shifman on production, podcasts and media workflows for Breaking Analysis. Special thanks to Kristen Martin and Cheryl Knight who help us keep our community informed and get the word out. And to Rob Hof, our EiC at SiliconANGLE.

Remember we publish each week on Wikibon and SiliconANGLE. These episodes are all available as podcasts wherever you listen.

Email david.vellante@siliconangle.com | DM @dvellante on Twitter | Comment on our LinkedIn posts.

Also, check out this ETR Tutorial we created, which explains the spending methodology in more detail.


Note: ETR is a separate company from theCUBE Research and SiliconANGLE. If you would like to cite or republish any of the company’s data, or inquire about its services, please contact ETR at legal@etr.ai or research@siliconangle.com.

All statements made regarding companies or securities are strictly beliefs, points of view and opinions held by SiliconANGLE Media, Enterprise Technology Research, other guests on theCUBE and guest writers. Such statements are not recommendations by these individuals to buy, sell or hold any security. The content presented does not constitute investment advice and should not be used as the basis for any investment decision. You and only you are responsible for your investment decisions.

Disclosure: Many of the companies cited in Breaking Analysis are sponsors of theCUBE and/or clients of Wikibon. None of these firms or other companies have any editorial control over or advanced viewing of what’s published in Breaking Analysis.

 
