Premise
Recommendation engines are fundamental infrastructure for ensuring that digital applications deliver continuously optimized outcomes. However, organizations can’t ensure that recommendation engines drive these outcomes unless developers and other IT professionals continually train, test, evaluate, and maintain the deep layers of business logic that power this infrastructure. That key logic typically consists of predictive models, deterministic business rules, workflow orchestrations, and machine-learning algorithms.
Analysis
Recommendation engines increasingly rely on deep layers of application logic that must be maintained, trained, and otherwise managed to ensure relevant recommendations across disparate applications. In order to manage this logic effectively, which typically consists of predictive models, deterministic business rules, workflow orchestrations, and machine-learning algorithms, development teams should:
- Implement DevOps over development and maintenance of recommendation-engine logic. The professionals responsible for composing and managing this complex logic typically include application coders, data scientists, business rules developers, orchestration designers, and subject-matter experts. A common approach to working together is necessary for enterprises to collaboratively “learn” how to maintain recommenders, and that approach is DevOps.
- Enforce recommendation-engine logic governance through a centralized repository. The logic and data that comprise recommendation engines must be governed. Typically, that is achieved through a combination of data lake administration, source-code library management, and collaborative tooling for developers.
- Mitigate the governance risks associated with complex attribution of responsibility for recommendation-engine decisions. Recommendation engines behave probabilistically, not categorically. Ensuring appropriate system behavior requires logging the precise execution path of each recommendation and auto-documenting every revision and addition made to the controlling application logic.
Implement DevOps Over Development And Maintenance Of Recommendation-Engine Logic
Maintaining the complex logic that drives recommendation engines requires a continuous release pipeline that implements DevOps practices.
When governed within comprehensive DevOps processes, the development and maintenance of recommendation-engine logic exemplifies the core pillars of digital business (see Table 1). Key among these is rapid build and test through rapid iteration, training, and testing of predictive models to ensure that the engines drive high response rates, acceptance rates, and other metrics.
In general, the key decision logic that drives the runtime behavior of recommendation engines may include any or all of the following: predictive models, deterministic business rules, workflow orchestrations, machine-learning algorithms, interest graphs, associated metadata, and other artifacts. Just as important, this logic relies on big data and streaming data feeds to tune recommendations to the full historical, current, and predictive context of each decision for which a recommendation is calculated and delivered.
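To make this layering concrete, the following sketch shows how deterministic business rules can gate which items a predictive model is allowed to score and rank. This is a hypothetical Python illustration, not drawn from any specific product; every function, field, and data shape here is an assumption made for the example.

```python
def score_model(user, item):
    """Stand-in for a trained predictive model's relevance score:
    here, simple overlap between user interests and item tags."""
    return len(set(user["interests"]) & set(item["tags"]))

def business_rules_allow(user, item):
    """Deterministic rules gate what the model may recommend at all."""
    return item["in_stock"] and user["age"] >= item.get("min_age", 0)

def recommend(user, catalog, top_n=2):
    """Apply rules first, then rank the eligible items by model score."""
    eligible = [item for item in catalog if business_rules_allow(user, item)]
    ranked = sorted(eligible, key=lambda item: score_model(user, item),
                    reverse=True)
    return [item["id"] for item in ranked[:top_n]]

user = {"age": 30, "interests": ["running", "hiking"]}
catalog = [
    {"id": "shoe", "tags": ["running"], "in_stock": True},
    {"id": "tent", "tags": ["hiking", "camping"], "in_stock": True},
    {"id": "wine", "tags": ["hiking"], "in_stock": True, "min_age": 21},
    {"id": "bike", "tags": ["running", "hiking"], "in_stock": False},
]
picks = recommend(user, catalog)
```

The design point is the separation of concerns: the rules layer and the model layer can be versioned, tested, and governed independently, which is what makes the multi-specialist DevOps collaboration described below practical.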
The professionals responsible for composing and managing this complex logic typically include, but are not limited to, application coders, data scientists, business rules specialists, orchestration designers, data integration specialists, and subject-matter experts.
In a general sense, these personnel hold collective responsibility for the automated decisions and actions taken by the recommendation engines. They leverage the tools of big data and data science to build decision logic that often produces highly personalized, situation-specific recommendations and guidance. And they write logic that drives interactions on the portal, in the call center, in customers’ smartphone interfaces, and in other channels.
Keeping these recommendation-generating models trained, tuned, and accurate is the core operational responsibility of data scientists. In a comprehensive development framework, recommendation-engine models should leverage such sophisticated data-science practices as strategy maps, ensemble modeling, champion-challenger modeling, real-time model scoring, collaborative filtering, constraint-based optimization, automatic best-model selection, A/B testing, and real-world experimentation.
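As an illustration of one of these practices, the sketch below shows a minimal champion-challenger selection loop: a challenger model is promoted only if it outperforms the incumbent champion on a held-out acceptance-rate metric. The models, data shapes, and metric here are invented for illustration; a production pipeline would evaluate against live experiment data.

```python
def acceptance_rate(model, interactions):
    """Fraction of held-out interactions where the model's recommendation
    matched the item the user actually accepted."""
    hits = sum(1 for user, accepted_item in interactions
               if model(user) == accepted_item)
    return hits / len(interactions)

def select_best_model(champion, challenger, interactions):
    """Champion-challenger selection: promote the challenger only if it
    beats the current champion on the held-out metric."""
    if acceptance_rate(challenger, interactions) > acceptance_rate(champion, interactions):
        return challenger
    return champion

# Toy models: always recommend a fixed item vs. the user's last-viewed item.
champion = lambda user: "item_a"
challenger = lambda user: user.get("last_viewed", "item_a")

holdout = [({"last_viewed": "item_b"}, "item_b"),
           ({"last_viewed": "item_c"}, "item_c"),
           ({"last_viewed": "item_b"}, "item_a")]

best = select_best_model(champion, challenger, holdout)
```

In this toy holdout set the challenger wins two of three interactions against the champion's one, so it is promoted. The same promote-or-retain decision point is where A/B testing and automatic best-model selection plug into the release pipeline.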
Table 1 presents these pillars.
Table 1: Recommendation Engine DevOps Supports the Pillars of Digital Business
Enforce Recommendation-Engine Logic Governance Through A Centralized Repository
In an ideal DevOps environment, all recommendation logic—models, rules, graphs, and other artifacts—would be governed within a source-control repository; all supporting data governance would be managed within a data lake; and all project governance would be implemented within an integrated collaboration environment.
From a platform and tooling standpoint, this multilayered governance would require the development team to share access to key tools and resources. Table 2 presents these governance enablers.
Bear in mind that it might be difficult, costly, or impractical for some organizations to migrate toward this ideal governance structure for management of recommendation-engine logic. This might be the case if an organization maintains distinct, siloed platforms for predictive analytics, machine learning, business rules, orchestration models, and other logic assets. This is often the case when these various assets are associated with siloed application environments managed by separate functional groups and lines of business.
Table 2: Governance Enablers for Recommendation-Engine Business Logic
Mitigate The Governance Risks Associated With Complex Attribution Of Responsibility For Recommendation-Engine Decisions
Attributing responsibility for outcomes to which recommendation engines contribute can be murky. Recommendation engines operate under logic that is so complex, and based on so many variables, that no one human may be able to predict their precise behavior in all scenarios in advance. As it grows more complex, this logic may become more opaque, to the point where no human—including the data scientists and subject-matter experts who built it—can attribute any resultant recommendation, decision, or action to any particular variable.
That algorithmic opacity raises key governance questions with legal, regulatory, economic, and cultural consequences:
- Who is accountable if the recommendation that an engine automatically delivers in a particular circumstance is egregiously wrong, irrelevant, or counterproductive?
- Can any individual be held personally accountable for the decisions these engines make?
- How transparent is the authorship of the data and rules that drive algorithmic decision-making processes?
- Who, if anybody, is responsible if a recommendation falls closer to the “next worst action” end of the spectrum?
- Does holding well-meaning data scientists responsible make sense if their models, deployed into production applications, produce unintended adverse consequences?
Mitigating these risks requires that your recommendation-engine DevOps practices address an emerging requirement known as “algorithmic accountability.” As recommendation engines incorporate deeper, dynamic stacks of predictive logic, it will become near-impossible to document the precise explanatory narrative behind every recommendation made in all circumstances. For that reason, recommendation-engine DevOps professionals will need to adopt tools that both log the precise execution path of each recommendation and auto-document every revision and addition made to the controlling application logic. DevOps teams can then retain these auto-generated narratives as persistent artifacts for downstream e-discovery, compliance, and other governance purposes.
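As a sketch of what such execution-path logging might capture, the example below records each recommendation together with the ordered rules and models that fired, a pointer to the source-control revision of the controlling logic, and a digest that makes later tampering detectable. The class, field names, and versioning scheme are assumptions for illustration, not a reference to any specific product.

```python
import datetime
import hashlib
import json

class RecommendationAuditLog:
    """Append-only log capturing the execution path behind each
    recommendation, so it can be reconstructed later for e-discovery,
    compliance, and other governance review."""

    def __init__(self):
        self.entries = []

    def record(self, user_id, recommendation, steps, logic_version):
        entry = {
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "user_id": user_id,
            "recommendation": recommendation,
            "execution_path": steps,         # ordered rules/models that fired
            "logic_version": logic_version,  # links back to source control
        }
        # Fingerprint the entry so downstream tampering is detectable.
        entry["digest"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.entries.append(entry)
        return entry

log = RecommendationAuditLog()
entry = log.record(
    user_id="u123",
    recommendation="offer_upgrade",
    steps=["segment_model:v4 -> high_value",
           "rule:eligible_for_upgrade -> true"],
    logic_version="git:3f2a9c1",
)
```

Tying each entry to a source-control revision is what connects runtime accountability back to the auto-documented logic changes described above: given a logged recommendation, a reviewer can recover exactly which version of the models and rules produced it.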
Action Item
Recommendation engines provide real-time, context-sensitive, data-driven guidance to intelligent decision agents. Wikibon’s principal guidance is for CIOs to establish an enterprise-spanning governance structure within which predictive analytics, machine-learning models, business rules, social graphs, and other recommendation-engine logic assets can be managed as a unified resource. As this structure takes shape—with associated roles, workflows, and policies to flesh it out—DevOps professionals should implement a centralized source-control repository and scalable data lake to ensure lifecycle management of all key assets in the recommendation-engine application pipeline.