
AI’s automation flywheel spins greater human productivity


Artificial intelligence-driven automation is taking hold everywhere, though many people fear that this trend will put people out of jobs en masse.

How realistic is this worry? Bear in mind that many automation-related vendor announcements highlight a “human-in-the-loop” capability that bodes well for continued employment in the affected industries or professions.

Wikibon believes that “human-in-the-loop” architectures are here to stay and will remain at the core of many enterprise automation initiatives. For example, most automated machine learning environments have a core “human-in-the-loop” dependency that is not likely to go away anytime soon.

As discussed in a recent piece by Bill Vorhies of Data Science Central, many next-generation ML automation tools blend algorithmic and manual processes in order to boost the accuracy of various tasks that no bot or human, no matter how expert, can handle as well unassisted. What he describes is the typical flow that Wikibon has found in many ML automation tools on the market:

  • Provides a front-end user interface that relies on data scientists and other human experts to review and correct specific false-negative or false-positive instances of data that have been machine-scored as anomalous;
  • Automatically feeds back the human-corrected scores into the training set to improve the model’s accuracy on future runs, in a data-science method known as semi-supervised learning;
  • Automates data cleansing, preparation, feature engineering, feature selection and model hyperparameter tuning;
  • Automates scoring of multiple models in parallel, ranking them as best-fit champions and lesser-fit challengers (see the sketch after this list);
  • Automates continuous model learning from atomic-level streaming data, with each model being continuously trained and updated from newly received streaming data items;
  • Automatically promotes a new best-fit algorithm to in-production champion status or, as the current champion gradually decays, produces an updated version of it and keeps it in service;
  • Automatically identifies new predictive features that a human expert may have never considered;
  • Presents a visualization of the model feature sets for users to explore, manipulate and assess trade-offs between interpretability and accuracy;
  • Presents the user with a visualization of both the champion model and the nearest proposed challenger; and
  • Allows users to manually promote revised models into production if appropriate.
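
As a concrete illustration of the review-and-feedback and champion/challenger steps above, here is a minimal Python sketch built on scikit-learn. It does not describe any particular vendor’s tool: the helper names (load_training_data, collect_expert_labels), the synthetic data and the choice of models are all assumptions made for the example.

```python
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier
from sklearn.model_selection import cross_val_score

def load_training_data():
    # Hypothetical loader: in a real tool this pulls the current labeled training set.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 8))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    return X, y

X, y = load_training_data()

# 1. Machine-score records, flagging the most anomalous for human review.
detector = IsolationForest(random_state=0).fit(X)
scores = detector.decision_function(X)   # lower score = more anomalous
flagged = np.argsort(scores)[:20]

# 2. Route flagged records to human experts (hypothetical front-end call).
def collect_expert_labels(indices):
    # Stands in for the UI where data scientists confirm or correct
    # false positives/negatives; here it simply returns dummy labels.
    return {int(i): 1 for i in indices}

corrections = collect_expert_labels(flagged)

# 3. Feed the human-corrected labels back into the training set
#    (the semi-supervised loop described above).
for idx, label in corrections.items():
    y[idx] = label

# 4. Score candidate models champion/challenger style.
candidates = {
    "champion": RandomForestClassifier(n_estimators=100, random_state=0),
    "challenger": RandomForestClassifier(n_estimators=300, max_depth=8, random_state=0),
}
results = {name: cross_val_score(model, X, y, cv=5).mean()
           for name, model in candidates.items()}

# 5. Promote the best-fit model; in practice a human can override this step.
best = max(results, key=results.get)
print(f"Promoting {best} (cv accuracy {results[best]:.3f}) to production")
```

The key point is structural: the detector does the bulk scoring, humans touch only the flagged records, and their corrections flow straight back into the training set before the next champion/challenger comparison.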

More broadly, organizations are automating data cleansing, transformation, integration and curation to the hilt, freeing up data scientists to develop high-powered AI for disruptive business apps. As noted in this recent Information Management article, these business productivity advances are due to improvements in “automating aspects of the wrangling process, expediting data quality measures and making these functions both repeatable and easily shared with other users.”
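
As a rough sketch of what a repeatable, shareable wrangling step can look like, the pandas function below packages cleansing and a simple quality rule into one reusable unit; the column names and rules are illustrative assumptions, not a reference to any specific product.

```python
import pandas as pd

def clean_orders(df: pd.DataFrame) -> pd.DataFrame:
    """One repeatable cleansing/quality step that can be shared across teams."""
    out = df.copy()
    out["order_date"] = pd.to_datetime(out["order_date"], errors="coerce")
    out["amount"] = pd.to_numeric(out["amount"], errors="coerce")
    out = out.dropna(subset=["order_date", "amount"])  # drop unparseable rows
    out = out[out["amount"] >= 0]                       # simple quality rule
    return out.drop_duplicates(subset=["order_id"])

raw = pd.DataFrame({
    "order_id": [1, 1, 2, 3],
    "order_date": ["2024-01-05", "2024-01-05", "not a date", "2024-02-11"],
    "amount": ["19.99", "19.99", "42.00", "-5"],
})
print(clean_orders(raw))  # clean, deduplicated rows ready for downstream use
```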

Wikibon has seen these advances in our own coverage of the data management market. More data professionals are adopting an industrial-grade work discipline focused on repeatable pipelining of patterned tasks, such as ML-driven data integration and model training, within continuous DevOps workflows. There is no sign that this trend will throw data management professionals out of work, because they will remain critical players in closed-loop exception handling, quality assurance and other irreplaceable functions.
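
That closed-loop exception handling can be sketched in a few lines: automated stages run unattended, and any record that fails validation is parked in a review queue for a data professional rather than silently discarded. The class and function names below are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Pipeline:
    stages: List[Callable[[dict], dict]]
    review_queue: List[tuple] = field(default_factory=list)

    def run(self, records: List[dict]) -> List[dict]:
        processed = []
        for record in records:
            try:
                for stage in self.stages:
                    record = stage(record)
                processed.append(record)
            except ValueError as exc:
                # Closed loop: park the record for a human to inspect and correct.
                self.review_queue.append((record, str(exc)))
        return processed

def validate_amount(record: dict) -> dict:
    if record.get("amount", 0) < 0:
        raise ValueError("negative amount")
    return record

pipeline = Pipeline(stages=[validate_amount])
passed = pipeline.run([{"amount": 10}, {"amount": -3}])
print(passed)                  # records handled fully automatically
print(pipeline.review_queue)   # records awaiting human review
```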

This is especially evident in information technology operations and infrastructure management, where ML helps IT administrators stay ahead of the rising tide of infrastructure data flowing in real time all hours of the day and night. Automating distillation of those data-driven insights enables IT infrastructure managers to oversee more root cause analysis, workload management, performance monitoring and other key operational tasks. Once again, many of these functions absolutely require expert humans to remain engaged to manage exceedingly complex tasks that will not be amenable to 100 percent automation.
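
As a simplified illustration of how this kind of automation keeps administrators focused on exceptions rather than raw telemetry, the sketch below flags metric readings that deviate sharply from recent history. The window size and threshold are assumptions, and real AIOps tools use far richer models than a rolling z-score.

```python
from collections import deque
import statistics

class MetricMonitor:
    def __init__(self, window: int = 60, threshold: float = 3.0):
        self.history = deque(maxlen=window)   # recent readings only
        self.threshold = threshold            # z-score cutoff (assumed)

    def observe(self, value: float) -> bool:
        """Return True if the reading should be escalated to a human."""
        anomalous = False
        if len(self.history) >= 10:           # wait for a minimal baseline
            mean = statistics.fmean(self.history)
            stdev = statistics.pstdev(self.history) or 1e-9
            anomalous = abs(value - mean) / stdev > self.threshold
        self.history.append(value)
        return anomalous

monitor = MetricMonitor()
for cpu in [42, 41, 43, 44, 40, 42, 41, 43, 42, 44, 95]:
    if monitor.observe(cpu):
        print(f"Escalating CPU reading {cpu}% for root cause analysis")
```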

Check out this webcast I recently did with Ryan Davis, a senior product marketing manager at ExtraHop Networks, in which he discusses using ML to augment an infrastructure management team’s productivity. Automating the distillation of data-driven insights grows increasingly difficult as the scale and complexity of the infrastructure grows. ML is a key tool for in-the-loop technical managers to tap into the benefits of automation while scaling up their ability to leverage their human expertise across expanding infrastructure, workloads and business applications.

Ryan’s colleague Matt Cauthorn recently spoke about the benefits of ML-driven IT infrastructure management automation on theCUBE at the RSA Conference in San Francisco:
