
Developing the Business Logic That Drives Recommendation Engines

Premise. Recommendation engines – also known as “recommenders” – are data-driven systems that deliver the guidance needed to optimize outcomes in e-commerce and other digital-business environments. They are becoming as fundamental to the next-best-action capabilities of the digital economy as database management systems (DBMSs) long ago became to business operations applications.

Digital engagement thrives on steady streams of data-powered recommendations being delivered to all decision makers. The key is to build the capabilities that can handle continuous engagement data, turn that data into useful models and insights, and deliver prescriptive guidance to customers and other decision makers in ways that contribute to differentiated business outcomes.

A key approach to establishing these capabilities is through recommendation engines, or “recommenders.” Recommendation engines are as fundamental to emerging online applications as DBMSs are to business operations applications. Within online application architectures, recommendation engines rely on DBMSs to supply the data that is used to build, train, and optimize the data-driven probabilistic business logic—such as predictive, machine learning, social graph, and decision tree models—that helps to tailor and contextualize recommendations for myriad usage scenarios and decision points.

The key difference between recommendation engines and DBMSs is this: DBMSs enable organizations to share a common pool of business data across applications, while recommendation engines enable sharing of a common pool of assets that drive optimized business outcomes. DBMSs are a key foundation for those outcomes because they supply the data used to build and tune the core probabilistic logic (such as predictive models, machine-learning algorithms, and social graphs) that shapes recommendations. In addition, recommendation engines rely on non-data-driven business logic such as declarative rules and process orchestration models.

Developing, testing, and deploying the complex logic that drives recommendation engines requires a continuous release pipeline that implements DevOps practices. In order to manage this process effectively, development teams should follow these guidelines:

  • Deliver data-driven recommendations into myriad apps. Recommendation engines drive real-world outcomes through targeted, contextual data-driven guidance that informs the real-time decisions of users in various application scenarios. Developers should prioritize recommendation engines for any digital engagement scenario with monetization opportunities, such as target marketing and customer loyalty programs.
  • Align recommendation engine architectures to the application environment. Within an enterprise application architecture, recommendation engines may serve as general-purpose personalization platforms, as app-specific tactical solutions, or even as embedded features of cognitive chatbots, mobile digital assistants, and other edge applications. For simplicity’s sake and to speed app deployments, developers should prioritize the use of recommender libraries, stored procedures, and equivalent capabilities inside their e-commerce, target marketing, chatbot, and other application environments.
  • Deepen the business logic accessible to recommendation-engine app developers. Depending on the functional scope of consuming applications, recommendation engines may incorporate a thin slice of business logic, such as deterministic rules, or, alternately, a deep stack that includes predictive models, workflow orchestrations, machine-learning algorithms, interest graphs, and more. To ensure consistency across the recommendations delivered through myriad apps in your digital engagement environment, developers should consolidate management of all production logic within a common repository, governance, and versioning environment.

Deliver Data-Driven Recommendations Into Myriad Apps

The fundamental value that recommendation engines provide is real-time, context-sensitive prescriptive guidance to help intelligent decision agents continuously achieve optimized “next best action” outcomes.

Developers should prioritize recommendation engines for any digital engagement scenario with monetization opportunities, such as target marketing and customer loyalty programs. In a customer-facing application environment, such as e-commerce, recommendation engines continuously crunch through various blends of probabilistic math and deterministic rules when prescribing the next banner ad to display, the next marketing promo text to send to your mobile, the next targeted offer to present in your browser, the next suggested complementary product to drop in your shopping basket, and so on. They may also serve recommendations within intelligent applications, such as chatbots, that are embedded within Internet of Things (IoT) edge applications.
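To make the blend of probabilistic math and deterministic rules concrete, here is a minimal Python sketch of next-best-action selection. The offer catalog, propensity scores, and eligibility rules are illustrative assumptions, not any particular vendor's implementation.

```python
# Minimal, hypothetical sketch: rank offers by expected value (propensity x margin),
# but only after deterministic business rules decide which offers are eligible.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Offer:
    offer_id: str
    propensity: float  # model-estimated probability that the customer accepts
    margin: float      # expected profit if the offer is accepted

def eligible(offer: Offer, customer: dict) -> bool:
    """Deterministic business rules that gate which offers may be shown."""
    if customer.get("opted_out_of_marketing"):
        return False
    if offer.offer_id in customer.get("recently_shown", set()):
        return False  # avoid repeating the same banner or promo
    return True

def next_best_action(offers: list, customer: dict) -> Optional[Offer]:
    """Rank eligible offers by expected value and return the best one, if any."""
    candidates = [o for o in offers if eligible(o, customer)]
    if not candidates:
        return None
    return max(candidates, key=lambda o: o.propensity * o.margin)

offers = [Offer("upgrade_plan", 0.12, 40.0), Offer("accessory_bundle", 0.30, 12.0)]
customer = {"opted_out_of_marketing": False, "recently_shown": {"upgrade_plan"}}
print(next_best_action(offers, customer))  # -> the accessory_bundle offer
```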

Developers should leverage recommendation engines to build data-driven applications that drive customer engagement across myriad apps, processes, touchpoints, and edge devices. When deployed into your multichannel engagement environments, recommendation engines can make all the difference in whether you retain customers, grow your base, and deepen the value of those relationships. The engines often assist customers choosing among auto-ranked products within mobile-shopping apps, or assist call-center representatives guiding customers through auto-personalized marketing scripts.

Typically, recommendation engines deliver their guidance to human decision agents, who, possessing free will, may follow the guidance, take it under consideration but deviate from it to varying degrees, or reject it entirely. To achieve intended business outcomes, such as preventing customer churn, the recommendations need to be data-driven, personalized, predictive, context-sensitive, and dynamically responsive to the recipient’s real-time decision scenario. This often involves using prescriptive analytics to frame customer decisions as follows: “here’s what we recommend for you based on what you, and people similar to you, have chosen in circumstances similar to this in the past, and what you are likely to consider the right option in similar future circumstances.”
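The “people similar to you have chosen” framing is commonly implemented with collaborative filtering. The following is a minimal sketch of item-item collaborative filtering over a tiny, made-up interaction matrix; production recommenders use far larger matrices and more sophisticated factorization or learning methods.

```python
# Minimal sketch of item-item collaborative filtering over implicit feedback.
# The interaction matrix below is illustrative, not real customer data.
import numpy as np

# rows = customers, columns = items; 1 = purchased/clicked, 0 = no interaction
interactions = np.array([
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 1, 1, 1],
    [0, 0, 1, 1],
], dtype=float)

def item_similarity(matrix: np.ndarray) -> np.ndarray:
    """Cosine similarity between item columns."""
    norms = np.linalg.norm(matrix, axis=0, keepdims=True)
    normalized = matrix / np.clip(norms, 1e-9, None)
    return normalized.T @ normalized

def recommend(user_idx: int, matrix: np.ndarray, top_n: int = 2) -> list:
    """Score unseen items by similarity to what the user already chose."""
    sims = item_similarity(matrix)
    scores = matrix[user_idx] @ sims
    scores[matrix[user_idx] > 0] = -np.inf  # exclude items already chosen
    return np.argsort(scores)[::-1][:top_n].tolist()

print(recommend(user_idx=0, matrix=interactions))  # -> [2, 3]
```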

Recommendation engines consume a steady flow of data and process it with a wide range of data-driven analytics: propensity models, clickstream processing of the customer’s portal visits, natural language processing of their social-media communications, geospatial processing of their location coordinates, and graph processing of their dynamic relationships with key influencers. Behavioral analytic processing, if it executes within a low-latency stream-computing infrastructure, can provide 24×7 contextualization of every customer interaction across a multichannel system of engagement.
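As a rough illustration of low-latency behavioral contextualization, the sketch below maintains a sliding window of recent clickstream events per customer and derives simple context features from it. The event fields and window length are assumptions for illustration; a production deployment would typically run this on a stream-processing platform rather than in-process data structures.

```python
# Sliding-window behavioral contextualization sketch (illustrative assumptions).
import time
from collections import defaultdict, deque
from typing import Optional

WINDOW_SECONDS = 15 * 60  # look back 15 minutes; an assumed window length

class BehaviorWindow:
    """Keeps recent clickstream events per customer and derives simple context features."""
    def __init__(self):
        self.events = defaultdict(deque)  # customer_id -> deque of (timestamp, page)

    def ingest(self, customer_id: str, page: str, ts: Optional[float] = None):
        ts = ts if ts is not None else time.time()
        q = self.events[customer_id]
        q.append((ts, page))
        while q and ts - q[0][0] > WINDOW_SECONDS:
            q.popleft()  # expire events that have aged out of the window

    def features(self, customer_id: str) -> dict:
        pages = [page for _, page in self.events[customer_id]]
        return {
            "events_last_15m": len(pages),
            "distinct_pages": len(set(pages)),
            "last_page": pages[-1] if pages else None,
        }

window = BehaviorWindow()
window.ingest("cust-42", "/pricing")
window.ingest("cust-42", "/checkout")
print(window.features("cust-42"))  # {'events_last_15m': 2, 'distinct_pages': 2, 'last_page': '/checkout'}
```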

The same next-best-action technology that delivers recommendations may also, inline with the customer’s experience in a merchant’s portal, dynamically adjust the arrangement of text, graphics, links, and buttons on each successive webpage, as well as supporting texts and emails, to further stimulate purchases and return visits.

Align Recommendation Engine Architectures to the Application Environment

Recommendation engines range from simple to complex in architecture, befitting a key piece of application infrastructure that supports diverse real-world use cases. Figure 1 provides a high-level end-to-end overview of recommendation-engine functional capabilities.

 

Figure 1: End-to-End Functional Overview of a Recommendation Engine

Within an application environment, recommendation engines consist of several layers of functional capabilities, which may be deployed either as a monolithic architecture (e.g., as embedded components of a DBMS) or as separate runtimes with correspondingly distinct modeling, monitoring, and management tools. For simplicity’s sake and to speed app deployments, developers should prioritize the use of recommender libraries, stored procedures, and equivalent capabilities inside their e-commerce, target marketing, chatbot, and other application environments.
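The deployment choice can be illustrated in code. In the sketch below, the in-process variant embeds scoring logic as a library inside the application, while the separate-runtime variant calls a recommendation service over the network; the scoring rule and the endpoint URL are hypothetical placeholders, not real products or APIs.

```python
# Sketch contrasting the two deployment styles discussed above.
import requests  # only needed for the separate-runtime variant

# (a) Monolithic / in-process: the recommender ships as a library (or stored
# procedure) inside the application platform and runs in the app's own process.
def recommend_in_process(customer_profile: dict, candidates: list) -> list:
    # Stand-in for a library call such as recommender_lib.score(profile, candidates)
    return sorted(candidates, key=lambda c: c.get("propensity", 0.0), reverse=True)

# (b) Separate runtime: the recommender runs as its own service with its own
# modeling, monitoring, and management tooling; the app calls it over the network.
def recommend_via_service(customer_id: str, candidates: list) -> list:
    resp = requests.post(
        "https://recs.internal.example.com/v1/recommendations",  # hypothetical endpoint
        json={"customer_id": customer_id, "candidates": candidates},
        timeout=0.2,  # keep page rendering fast even when the recommender is slow
    )
    resp.raise_for_status()
    return resp.json()["ranked_candidates"]

catalog = [{"sku": "A1", "propensity": 0.12}, {"sku": "B2", "propensity": 0.31}]
print(recommend_in_process({"customer_id": "cust-42"}, catalog))  # B2 ranked first
```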

Nevertheless, many enterprise applications have complex, dynamic requirements for in-app prescriptive recommendations that can be best served by layered application environments that leverage diverse runtime platforms—plus corresponding modeling, monitoring, and management tools—for data stores, predictive analytics, business rules management, process orchestration, digital engagement, mobile access, and other functional components.

Developers should consider deploying recommendation engine architectures aligned with whichever of the following approaches best describes the target application environment:

  • Centralized recommendation engines ensure consistent prescriptive guidance across application siloes. Within an enterprise application architecture, recommendation engines may serve as centralized, general-purpose personalization platforms that serve many applications. Alternatively, this functionality (often known simply as a “recommender”) may be embedded within operational applications or even within the data stores themselves, driving personalization of ad optimization, target marketing, customer experience management, and other applications native to those platforms. In such centralized deployments, cross-recommendation consistency depends on the degree of centralized governance over the shared data, predictive models, and other key business logic and context that shapes targeted recommendations.
  • Federated recommendation engines optimize distributed value chains. If an enterprise has siloed personalized application platforms serving different users (customers, employees, ecosystem partners, and so on), it may have a corresponding recommendation engine embedded in each. In such circumstances, it’s not uncommon for each application’s recommendation engine to be powered by its own data, machine-learning algorithms, predictive analytics, business rules, orchestrations, and other business logic. That fragmentation may be counterproductive in online supply chains that federate the front-end customer-facing applications of some organizations with the back-office order fulfillment, manufacturing, logistics, and other applications of others. Though their runtimes may be loosely coupled, delivering consistent recommendations across these environments requires cross-domain agreements that federate access to each other’s data, predictive models, declarative rules, and other business logic.
  • Edge-oriented recommendation engines drive mobile and embedded decision support. Increasingly, recommendation services are key to personalized AI-driven applications, such as cognitive chatbots and mobile digital assistants, that may deploy all or some of the enabling functionality in the cloud, in gateways that serve many clients, and/or in edge devices themselves. The edge-oriented embedded recommendation functions may be served from centralized or federated databases, predictive models, and other assets managed in cloud environments. Edge-based recommendations may also draw on real-time sensor data, machine learning algorithms, and other logic that is acquired and processed locally. In such scenarios, cross-edge recommendations can only be as consistent as is supported in the serving environment (centralized, federated, and/or edge-facing) within which the controlling logic is managed. A minimal sketch of one common edge pattern follows this list.
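One common edge-oriented pattern is to prefer a centralized recommendation service while keeping lightweight fallback logic on the device. The sketch below assumes a hypothetical service endpoint and a trivial local rule; real deployments would typically cache full models or features at the edge.

```python
# Edge client sketch: prefer the centralized recommender, fall back locally when offline.
# The endpoint URL and the local rule are illustrative assumptions.
import requests

CLOUD_ENDPOINT = "https://recs.example.com/v1/next-best-action"  # hypothetical

def local_fallback(context: dict) -> str:
    """Tiny on-device rule used when the cloud recommender is unreachable."""
    return "show_recent_favorites" if context.get("returning_user") else "show_top_sellers"

def edge_recommend(context: dict) -> str:
    try:
        resp = requests.post(CLOUD_ENDPOINT, json=context, timeout=0.3)
        resp.raise_for_status()
        return resp.json()["action"]
    except requests.RequestException:
        return local_fallback(context)  # device stays useful while disconnected

print(edge_recommend({"returning_user": True, "device": "kiosk-7"}))
```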

Deepen The Business Logic Accessible To Recommendation-Engine Application Developers

Recommendation engines vary widely in use cases, flexibility, and sophistication. Some may be powered primarily by deterministic business rules, while others may tune their recommendations from real-time predictive models and other algorithms that factor in a wide range of data from diverse sources.

Essentially, each recommendation represents a knowledge-driven hunch that was made by data scientists and others when they constructed the analytics, algorithms, rules, and other logic that power it. Each recommendation rides on specific assertions about the past (“have responded”), present (“we now recommend”), and future (“expect you will accept”) of a particular decision-scenario instance being personalized to a particular customer at a particular time. When that all clicks into place, and the data-driven algorithms have been trained well on relevant data, they can make automated recommendations with high confidence that those recommendations will be appropriate and acceptable. However, lacking the right data and the right contextualized analytics, you may not even know whether the simple act of making a particular recommendation will offend or alienate customers (e.g., “I can’t believe they thought I’m the kind of person who goes for that kind of thing”).

In general, the key decision logic that drives the runtime behavior of recommendation engines may include any or all of the following: predictive models, deterministic business rules, workflow orchestrations, machine-learning algorithms, interest graphs, associated metadata, and other artifacts. Just as important, this logic relies on big data and streaming data feeds to tune the recommendations to the full historical, current, and predictive context of each decision for which a recommendation is to be calculated and delivered. When mapped to a model-view-controller framework, the application logic that powers recommendation engines may be construed as follows (a minimal code sketch follows this list):

  • The model of a recommendation engine consists of the predictive models, deterministic business rules, workflow orchestrations, machine-learning algorithms, interest graphs, associated metadata, and other probabilistic and/or deterministic logic that generates recommendations. As the range of prescriptive-guidance scenarios that a recommendation engine serves grows, the burden of maintaining this model logic will deepen commensurately.
  • The view of a recommendation engine consists of the front-end logic that renders recommendations delivered via Web, mobile, social, e-commerce, IoT, and other apps, channels, and touchpoints, and through which recipients of recommendations indicate the extent to which they accept them. As recommendation engine functionality becomes a shared resource across a multichannel, multimodal application environment, the maintenance burden associated with keeping this view logic optimized will grow.
  • The controller of a recommendation engine consists of the middleware logic that drives the flow of messages, data, and guidance between the front-end view layer and the back-end model layer. Within a federated or edge-oriented recommendation engine environment, this controller logic may occupy a large share of the maintenance burden, whereas in centralized or monolithic recommendation engines it may be embedded in the underlying platform.
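As a rough illustration of this model-view-controller mapping, the sketch below reduces each layer to a single function; the scoring rule, channel handling, and data shapes are illustrative assumptions rather than a reference architecture.

```python
# Model-view-controller mapping of a recommendation engine, reduced to one
# illustrative function per layer.

def model_score(customer: dict, catalog: list) -> list:
    """Model layer: probabilistic/deterministic logic that ranks candidate items."""
    affinity = customer.get("category_affinity", {})
    return sorted(catalog, key=lambda item: affinity.get(item["category"], 0.0), reverse=True)

def view_render(ranked: list, channel: str) -> dict:
    """View layer: shapes the recommendation for a specific channel or touchpoint."""
    top = ranked[0]["name"] if ranked else None
    return {"channel": channel, "headline": f"Recommended for you: {top}"}

def controller_handle(request: dict, catalog: list) -> dict:
    """Controller layer: routes data and guidance between the view and the model."""
    ranked = model_score(request["customer"], catalog)
    return view_render(ranked, request["channel"])

catalog = [{"name": "Trail Shoes", "category": "outdoor"},
           {"name": "Espresso Kit", "category": "kitchen"}]
request = {"channel": "mobile", "customer": {"category_affinity": {"outdoor": 0.9}}}
print(controller_handle(request, catalog))  # {'channel': 'mobile', 'headline': 'Recommended for you: Trail Shoes'}
```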

To ensure consistency across the recommendations delivered through myriad apps in your digital engagement environment, developers should consolidate management of all production logic (model, view, and controller) within a common repository, governance, and versioning environment.
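As a minimal illustration of that consolidation, the sketch below registers each piece of production logic under a common, versioned catalog. The in-memory registry is a stand-in for whatever repository or model-registry product an organization actually uses.

```python
# Illustrative logic registry: every model, view, or controller artifact gets a
# version and checksum so deployments and rollbacks stay consistent.
import hashlib
import json
import time

class LogicRegistry:
    def __init__(self):
        self.entries = []

    def register(self, name: str, layer: str, artifact: bytes) -> dict:
        entry = {
            "name": name,
            "layer": layer,  # "model", "view", or "controller"
            "version": len([e for e in self.entries if e["name"] == name]) + 1,
            "checksum": hashlib.sha256(artifact).hexdigest(),
            "registered_at": time.time(),
        }
        self.entries.append(entry)
        return entry

registry = LogicRegistry()
rules = json.dumps({"exclude_if": "opted_out_of_marketing"}).encode()
print(registry.register("churn_offer_rules", "model", rules))
```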

Action Item. Recommendation engines provide real-time, context-sensitive, next-best-action guidance to customers and other intelligent decision agents. Developing, testing, and deploying the complex logic that drives recommendation engines requires a continuous release pipeline that implements DevOps practices. Wikibon’s principal guidance is for developers to build this deep stack of recommendation logic within a centralized source-control repository and to manage all supporting training data within a scalable, multifaceted data lake.
