
Big Data and ML Predictions 2018

Premise

Everything about big data — technology, products, markets, applications, and especially expectations — is being reset. 2018 will be the year that big data is finally embedded into the fabric of the business.

The era of science experiments in big data is over. Through painful trial and failure, enterprises now have the experience required to begin extracting significant business value from big data-related technologies. While horizontal approaches (e.g., data lakes and bespoke analytic pipelines) will remain important, value from big data technology will increasingly be packaged in more developer-friendly online services accessible through APIs and applications.

Our overarching big data prediction is that big data will emerge as a facilitator and feature of all digital business initiatives.

Capture New Sources of Data

Digital business is about using data differentially to create and keep customers. That imperative means businesses have to capture data from new sources in addition to web and mobile applications, starting with the Internet of Things (IoT) and the infrastructure supporting these extended applications. This new data is very different from the business transactions that systems of record capture.

  • Streaming Data Will Join Batch and Request/Response as a Mainstream Programming Model. These new data sources come mostly from machines, even when the data originates in web and mobile interactions; it no longer comes from end-user, forms-based data entry. What’s more, much of the data flows continuously, typically not in response to user input or periodic batch processes.

Create New Value From Data

Once businesses have captured the new data, digital businesses face new demands in creating value from that data. Enterprise applications become more of a starting point for joint development between vendor and customer.

  • The Focus of Machine Learning Will Shift From Tools to Developer-Ready APIs. Tech-centric companies own much of the pool of data science skills. Mainstream enterprises will realize that building bespoke machine learning models is currently beyond their reach. Instead, these enterprises will focus on building applications based on APIs to pre-trained models offered by tech’s lighthouse vendors.
  • The Variations of Open Source Business Models Narrow Based on Go-to-Market Limitations. The business models supporting the software tools and platforms value chain remain unproven. The industry still hasn’t produced another break-out success built on an open-source business model like Red Hat’s. Enterprise software still requires a high-cost, high-touch direct sales force to drive wide deployment. Open source has deflated pricing, but few emerging vendors have a product line broad enough to support a direct sales force.

Enact Based on Data

Systems of Agency use machine learning to automate or inform decisions that can drive new business outcomes. Systems of Agency build on legacy Systems of Record, which are about efficiency because they capture and process standardized business transactions. Building and selling the new applications, however, will look very different from the traditional ones. The supporting infrastructure will be radically different, as well.

  • Packaged Enterprise Applications Are No Longer Fully Packaged. On the R&D side, Systems of Agency will look more like semi-custom joint development between vendors and customers. On the go-to-market side, vendors will increasingly sell applications in conjunction with their own customers who are supply chain anchors. This cooperation will combine the technical and domain expertise for these highly specialized solutions.
  • Machine Learning Enables a New Generation of ITOps and APM to Support Always-On SLAs. Web and mobile applications, in addition to Systems of Agency, can’t go down. Nor can any of these applications rely on the traditionally labor-intensive management process. Those two requirements alone will drive IT Operations and Application Performance Management to incorporate pervasive machine learning. This management infrastructure will become the first horizontal application of machine learning.

Wikibon’s Big Data Predictions for 2018

Wikibon Prediction

In 2018, more than 50% of leading-edge consumer technology firms will adopt streaming data backplanes as the single source of truth across their applications. Streaming data will join batch and request/response as the third mainstream programming model.

Driven by ever-increasing application sprawl and the rise of continuous data flowing from edge devices, end-user interactions, and the infrastructure supporting these devices and applications, leading-edge enterprises can no longer maintain “ground truth” data in centralized DBMSs. Too many applications beyond traditional systems of record need to capture and analyze data that augments what the central DBMS is already struggling to manage. For several decades, enterprises have papered over the problem of application sprawl with ever more point-to-point application integration, complex ETL pipelines, and a multitude of poorly integrated operational data stores and data marts.

What makes stream processing timely for leading-edge technology firms is the programming model. With rapidly maturing products such as Spark, Kafka, and Flink, developers can now process and analyze streams of data almost exactly the way they process and analyze batches of data. These same developers can also more easily support the rapidly proliferating microservices that communicate via asynchronous streams. The more traditional request/response communication via RPC can’t support large numbers of ephemeral microservices, which would bog down waiting for responses to their requests.
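To make the “streams processed like batches” point concrete, here is a minimal sketch using Spark Structured Streaming to consume a Kafka topic and aggregate it with the same DataFrame operations a batch job would use. The topic name, broker address, and one-minute window are assumptions made for this example, not details from the prediction.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, window

# Requires the spark-sql-kafka connector package on the Spark classpath.
spark = SparkSession.builder.appName("clickstream-counts").getOrCreate()

# Read a continuous stream of events from a Kafka topic (the topic name and
# broker address are placeholders for this sketch).
events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "clickstream")
          .load())

# The same DataFrame operations used on batch data apply to the stream:
# count events per page in one-minute windows.
counts = (events
          .selectExpr("CAST(value AS STRING) AS page", "timestamp")
          .groupBy(window(col("timestamp"), "1 minute"), col("page"))
          .count())

# Emit continuously updated counts instead of waiting for a nightly batch job.
query = (counts.writeStream
         .outputMode("update")
         .format("console")
         .start())
query.awaitTermination()
```

The same groupBy-and-count code would run unchanged against a static DataFrame read from files, which is the point of the converging programming model.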

Wikibon Prediction

Machine learning peaked even higher on the hype cycle than big data. In 2018, leading-edge non-tech enterprises will hit a wall with machine learning. These enterprises will realize that the scarcity of data science talent means they can’t build machine learning applications from scratch on their own. Rather, they will shift from building models to leveraging public cloud vendors’ pre-trained models via developer-ready APIs.

There are not enough data scientists to support all the machine learning development activity in mainstream enterprises. The attention paid to tools that democratize model development has obscured the work public cloud vendors have been doing to enable orders of magnitude more developers to use the technology. Public cloud vendors have accumulated huge amounts of data from their online consumer businesses. The vendors have used this data to train models for speech recognition, natural language processing, vision and image recognition, and similar functionality. Mainstream enterprises will increasingly focus on using this technology to power next-generation user interfaces and even core functionality for new classes of applications. In the meantime, machine learning tools will become more accessible to mainstream developers, but that will happen over a period of at least several years.
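A minimal sketch of the developer-ready API pattern described above: a few lines of application code call a cloud vendor’s pre-trained vision model instead of training one in-house. AWS Rekognition is used purely as one example of this class of service, and the image file name is a placeholder.

```python
import boto3

# Call a pre-trained image-recognition model through a cloud API rather than
# building and training a model with scarce in-house data science talent.
rekognition = boto3.client("rekognition", region_name="us-east-1")

# "storefront.jpg" is a placeholder image for this sketch.
with open("storefront.jpg", "rb") as image_file:
    response = rekognition.detect_labels(
        Image={"Bytes": image_file.read()},
        MaxLabels=10,
        MinConfidence=80.0,
    )

# The service returns labels and confidence scores; the application consumes
# them like any other API response, with no model training involved.
for label in response["Labels"]:
    print(f"{label['Name']}: {label['Confidence']:.1f}%")
```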

Wikibon Prediction

Vendors of open source software are going to start consolidating through mergers and roll-ups in 2018.

Hundreds of software companies have touted open source as the ideal business model, citing Red Hat as the example of success. Open source does have powerful advantages, starting with the ability of vendors to cultivate an active community that adds value to their products. And most vendors capture value by helping customers run their software, whether in public clouds or as managed services on customer premises.

But many vendors are facing challenges in executing their “land and expand” go-to-market strategies. Enterprise software still needs an enterprise sales force. Getting widespread adoption within an enterprise requires negotiating commercial terms with senior members of IT and procurement departments. Widespread adoption also requires technical skills to help IT fit the technology into its evolving architecture. The cost of this type of enterprise sales force hasn’t changed in many years: account teams still need quotas of $1.5M or more to support them. Open source has deflated price points somewhat, but more problematic is that many vendors have product lines too narrow to support reps with those quota levels.

Years of robust private-market funding have papered over this issue. The stock market is already pushing vendors who have gone public to prioritize getting to profitability over pursuing growth at all costs. That mandate will start to spill over to private firms. As a result, either these firms will start to merge with each other or some of the public incumbent vendors will start to acquire them.

Wikibon Prediction

For 50 years, enterprise application vendors built and sold packages that had little variation between customers. A high-cost, high-touch direct sales force delivered an efficiency value proposition with the applications. In 2018, application vendors will realize these conventions are no longer valid.

Traditionally, forms-based data capture, business process rules, and production reporting made it possible to package the same application functionality across customers, including industry-specific functionality. That all changes with machine learning. Vendors of enterprise applications built on machine learning have to approach development as a semi-custom exercise with each customer. The predictive and prescriptive models that are the heart of the new applications need data from each customer in order to work. But each customer rarely has the same data as its peers, especially for industry-specific functionality. As a result, application vendors’ models get progressively higher in fidelity with each successive customer. This method of building ever richer models means that vendors have to maintain versions that prior customers can upgrade to without breaking their version of the applications.

On the go-to-market side, the value proposition changes from what was typically an efficiency message to one of dramatic improvements in business outcomes. Application vendors will also increasingly sell in conjunction with a strategic partner in any given industry. That customer-partner will likely be a supply chain anchor, and the application will be part of the value-add it delivers to its own customers. Delivering these new business outcomes is already making machine learning applications, led by IoT, more strategic than any IT investments customers have made in decades.

Wikibon Prediction

2018 will be the year that IT Operations and Application Performance Management becomes the first high-volume, horizontal application of machine learning.

The categories of software that used to roll up into systems management are reaching the breaking point. The stringent SLAs required for applications that don’t just run the business but *are* the business require machine learning. These SLAs demand high-fidelity management of complexity, scale, and timeliness that humans can no longer provide. If Microsoft used the same ratio of database administrators (DBAs) for Azure SQL DB and Azure SQL DW as its customers use with the on-prem versions, Microsoft would need many tens of thousands of DBAs on staff. Today, most of that work is automated.

Managing end-to-end applications creates even more demands. Applications can grow to encompass hundreds or thousands of services, each of which might have many ephemeral instances elastically deployed across shared infrastructure. We need new approaches to managing this sprawl. Machine learning built into the management software can model how the services behave and how they interact with the infrastructure. With this critical new ingredient, management software can deliver root cause analytics with high enough fidelity to make automated remediation of problems possible. This automated, real-time, self-adjusting analytics is “training wheels” for building IoT applications. The big tech companies have deployed these management applications for their own applications and services. The public cloud vendors will start offering these capabilities for customer workloads in 2018. At that point, it will be clear that all cloud-native workloads need these capabilities.
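As a rough illustration of the kind of modeling described above, the sketch below trains a simple anomaly detector on per-service metrics so that unusual behavior can be flagged for root cause analysis or automated remediation. The use of scikit-learn’s IsolationForest, the synthetic metrics, and the feature choices are illustrative assumptions, not any vendor’s actual implementation.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Train on a window of "normal" per-service metrics; the synthetic data and
# the three features (latency, error rate, CPU) are illustrative assumptions.
rng = np.random.default_rng(42)
normal_metrics = np.column_stack([
    rng.normal(120, 15, 5_000),   # request latency in ms
    rng.normal(0.5, 0.2, 5_000),  # error rate in %
    rng.normal(40, 10, 5_000),    # CPU utilization in %
])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_metrics)

# Score incoming observations: a prediction of -1 flags an anomaly that could
# trigger root cause analysis or an automated remediation workflow downstream.
incoming = np.array([
    [125.0, 0.4, 42.0],   # looks like normal behavior
    [950.0, 7.5, 95.0],   # latency and error-rate spike
])
for metrics, label in zip(incoming, model.predict(incoming)):
    status = "anomaly" if label == -1 else "ok"
    print(metrics, status)
```

A production system would model each service’s behavior continuously and correlate anomalies across services and infrastructure, but the scoring loop above captures the basic idea of software, rather than humans, watching the metrics.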

Action Item

Systems of Agency are going to open the way to new business outcomes, and even new business models, for mainstream enterprises. But these enterprises need to realistically assess their skills inventory. Few can build the new applications on their own; they will need tech-centric vendors as their partners. Mainstream enterprises also have to be cautious about committing too deeply to open source tools and platform vendors with unproven business models. Finally, mainstream enterprises and tech-centric vendors alike must deploy ITOps and APM infrastructure based on machine learning in order to support responsive applications that can never go down.
