Cloud computing has become the principal paradigm for enterprise applications. As businesses modernize their computing and networking architectures, cloud-native environments are the primary targets.
As a result, enterprise deployment of all-encompassing cloud computing is accelerating. Many enterprises have embarked on a journey to computing in and across a multiplicity of clouds. These are the chief steps on this journey:
Zero in on core cloud use cases
The first step on the cloud journey is to identify the chief use cases that your enterprise plans to address and the benefits to be realized.
Chief among these is the ability to field a full information technology environment without the need for capital expenditures, maintenance of an on-premises data center or in-house technical staff. Other benefits of cloud computing include on-demand self-service provisioning of applications, compute, storage and other IT resources. These resources can be made available over the network and accessible through standard mechanisms by diverse client platforms.
In a typical cloud, resources are provisioned so that multiple tenants, along with their computations and data, can be guaranteed isolation from one another. Resources can be pooled in a location-independent fashion to serve many customers, enabling them to be dynamically assigned and reassigned on demand and accessed through a simple abstraction. They can be scaled rapidly, elastically and automatically. Generally, cloud resources can be monitored, controlled, reported and billed transparently.
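The rapid, elastic and automatic scaling described above can be sketched in a few lines. This is an illustrative example, not any provider's API: the function name and thresholds are hypothetical, though the proportional rule mirrors the one Kubernetes' Horizontal Pod Autoscaler applies to replica counts.

```python
import math

def desired_replicas(current, cpu_utilization, target=0.5, floor=1, ceiling=20):
    """Return how many replicas to run given observed CPU utilization.

    Scales the pool in proportion to demand relative to the target
    utilization, clamped between a floor and a ceiling.
    """
    if cpu_utilization <= 0:
        return floor  # idle pool: scale down to the minimum
    proposed = math.ceil(current * cpu_utilization / target)
    return max(floor, min(ceiling, proposed))
```

Run periodically against live metrics, a rule like this grows the pool under load (4 replicas at 75% utilization become 6) and shrinks it back toward the floor when demand subsides, which is the self-service elasticity the passage describes.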
Decide what type of cloud suits your requirements
When an enterprise has decided to implement cloud computing, it must decide whether to build a purely internal, on-premises private cloud available only in its own data centers, or to outsource some or all of it to a public cloud provider.
If the cloud resource is to be obtained from third-party public-cloud providers, the enterprise must decide the extent of what’s being outsourced. The enterprise may decide to outsource just one application, or entire suites — such as customer relationship management and enterprise resource planning — from one or more software-as-a-service providers.
Alternatively, the enterprise may decide to outsource operating system and middleware platforms, such as those that leverage containerized and serverless interfaces, from platform-as-a-service providers. Or it may turn to infrastructure-as-a-service providers for on-demand access to preconfigured compute, storage and other hardware resources through a virtualized interface or hypervisor.
Determine the optimal cloud topology
An enterprise cloud resource may operate out of one or many data centers in various topologies. The more complex the enterprise multicloud, the more likely it will be implemented as a service mesh that sprawls across disparate on-premises and public cloud environments.
Advances in cloud-native industry service-mesh initiatives, most notably Istio, Envoy and Linkerd, will boost the salience of these projects in enterprise multiclouds going forward. Many enterprises will bring service meshes into the core of their efforts to build flexible bridges between containerized on-premises assets and a growing range of public and private cloud fabrics in their distributed computing environments.
Cloud providers are ramping up their support for managed services that simplify interconnection and management of thousands of virtual private clouds and on-premises networks over mesh and hub-and-spoke architectures. Over the past year, vendors have also brought a wide range of innovative edge gateways, on-premises computing/storage racks and device-level container runtimes to market. Increasingly, these innovations will converge into an edge-facing, distributed and federated cloud-native computing fabric that enables more flexible distribution of data, applications and workloads closer to the point of agency.
As the “internet of things” becomes the predominant on-ramp to cloud computing, the notion of a “data center” will give way to a radically decentralized “software-defined data center.” Within this cloud-to-edge computing fabric, blockchains and other distributed-ledger backbones will evolve to provide an immutable audit log for all network-, system- and application-level operations.
Prepare a cloud migration plan
When enterprises decide to implement cloud computing, they must consider the cost, complexity, time and technical resources needed to migrate existing applications, workloads and data to the target environments.
If the enterprise is migrating to a private or public cloud platform offered by an existing IT provider, it may be able to rely on the migration tools and professional services offered by that company. If the target environment is a different provider's public cloud, the migration may be a bit trickier, though public cloud providers put a high priority on offering migration tools, multicloud backplanes and professional services to help enterprises execute these migrations rapidly, cost-effectively and with minimal risk.
For application migrations, enterprises may be able to containerize legacy workloads without needing to rewrite existing applications, thereby mitigating the technical risks normally associated with complex migrations. For data migrations, enterprises should use the natural patterns of data storage, processing and workload placement as guidance for organizing cloud-based business systems in order to ensure the requisite availability, performance, security, protection, compliance and other service-level mandates for all assets moved to the cloud.
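The placement guidance above amounts to checking each workload's service-level requirements against what each candidate environment can deliver. The sketch below is purely illustrative: the attribute names (regions, latency budgets, certifications) are invented for the example, not drawn from any vendor's tooling.

```python
def place_workload(workload, environments):
    """Return the name of the first environment meeting every requirement,
    or None if the workload should stay where it is."""
    for env in environments:
        meets_residency = (workload.get("data_residency") is None
                           or env["region"] == workload["data_residency"])
        meets_latency = env["latency_ms"] <= workload["max_latency_ms"]
        meets_compliance = set(workload.get("compliance", [])) <= set(env["certifications"])
        if meets_residency and meets_latency and meets_compliance:
            return env["name"]
    return None  # no compliant target: keep the workload on-premises
```

Even a simple rules pass like this makes the availability, compliance and performance mandates explicit before anything is moved, rather than discovering a violated data-residency requirement after migration.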
Consider the relevant cloud application development abstractions
As enterprises migrate more assets into the cloud, they’ll require development abstractions that enable them to build microservices and other applications that run on virtualized, containerized and serverless platforms. Enterprise customers are demanding the ability to compose cloud-native microservices for execution across various orchestrated blends of virtual machines, containers and serverless fabrics.
Going forward, Wikibon predicts that more development tools will converge these heretofore distinct programming silos and enable DevOps across increasingly heterogeneous multiclouds. More cloud-computing environments will federate Kubernetes clusters while presenting serverless interfaces for lightweight development of stateless, event-driven microservices.
Key to this convergence of application paradigms will be “infrastructure-as-code” tooling that enables declarative specifications of cloud-native application outcomes to drive automated compilation and deployment of the requisite containers, serverless functions, distributed orchestrations and other application logic.
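The infrastructure-as-code pattern above can be shown with a minimal sketch: a declarative spec of the desired outcome is "compiled" into the concrete resources to deploy. The spec schema and resource names here are invented for illustration; real tools such as Terraform or Kubernetes manifests differ in detail.

```python
def compile_spec(spec):
    """Expand a declarative application spec into a list of deployable
    resources: containers become deployments plus services, serverless
    entries become function bindings."""
    resources = []
    for svc in spec["services"]:
        if svc["kind"] == "container":
            resources.append({"type": "deployment", "name": svc["name"],
                              "image": svc["image"],
                              "replicas": svc.get("replicas", 1)})
            resources.append({"type": "service", "name": svc["name"],
                              "port": svc["port"]})
        elif svc["kind"] == "function":
            resources.append({"type": "serverless_function", "name": svc["name"],
                              "handler": svc["handler"],
                              "trigger": svc["trigger"]})
    return resources
```

The point of the pattern is that the developer states outcomes (a web service, an event-driven function) and the tooling derives the containers, orchestrations and functions, which is what lets one spec target heterogeneous multiclouds.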
Manage your cloud through comprehensive tooling
No matter what the uses, topologies, abstractions and sources enterprises choose to incorporate into their cloud roadmap, they will need management tooling to inventory, monitor, administer, optimize, secure and control it all.
As two or more public and private clouds are incorporated into applications, multicloud management tooling is an essential component of the enterprise roadmap. More enterprises are insisting on a consistent “true private cloud” experience that spans hybrid and multicloud environments through a single point of purchase, management, support, maintenance and upgrades.
Automation of cloud management tooling through embedded machine learning will become increasingly essential. A growing range of commercial tools on the market offers automated management of virtualized, containerized and serverless multiclouds through the proverbial “single pane of glass.” Enterprises should be able to easily schedule, manage, monitor, load-balance and secure containerized workloads automatically throughout the multicloud.
Multicloud management tooling should allow enterprises to execute applications in the public clouds of their choice while retaining the ability to keep any or all workloads on-premises. It should provide a unified repository for storing container images and prepackaged versions of the most commonly used software components across the multicloud. IT administrators should be able to sync up their on-premises and public cloud Kubernetes configurations so applications may be moved among them without requiring major changes and without developers needing to know the configurations of target environments.
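The configuration sync described above reduces to detecting drift between an on-premises cluster's settings and a public cloud one's, then reconciling the differences before applications are moved. A hypothetical sketch, with illustrative key names rather than real Kubernetes fields:

```python
def config_drift(on_prem, cloud):
    """Return the keys whose values differ between two cluster configs,
    including keys present on only one side (the missing side reads None)."""
    drift = {}
    for key in on_prem.keys() | cloud.keys():
        a, b = on_prem.get(key), cloud.get(key)
        if a != b:
            drift[key] = {"on_prem": a, "cloud": b}
    return drift
```

A drift report like this is the precondition for the portability the passage describes: once the two environments agree on the keys that matter, a workload can move between them without developers needing to know the target's configuration.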
Wikibon sees a trend in which cloud providers will support updates to routing and traffic management functions through containerized logic distributed via DevOps workflows. This will allow enterprises to rapidly deploy only the networking features they need, anywhere in their multiclouds, reducing the complexity of routing and policy updates. It will also reduce risk by enabling faster, more consistent updates to routing, policy and security rules throughout the multicloud.
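A routing update of this kind is typically rolled out incrementally: traffic weight is shifted step by step from the old version of a service to the new one, the sort of change a mesh like Istio expresses declaratively. The helper below is an illustrative sketch of that weight shift, not a mesh API.

```python
def shift_weight(weights, source, target, step):
    """Move up to `step` percentage points of traffic from the `source`
    version to the `target` version, never driving `source` below zero."""
    moved = min(step, weights[source])
    updated = dict(weights)
    updated[source] -= moved
    updated[target] += moved
    return updated
```

Applied repeatedly between health checks, the same small, versioned change propagates identically everywhere it is deployed, which is where the consistency and risk-reduction benefits come from.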
Another key step in the enterprise cloud journey is the IBM Think 2019 conference, which will take place Feb. 12-15 in San Francisco. Visit the Think 2019 site to register and to sign up for cloud curriculum programs. Go to the IBM Professional Certification Program to certify your skills and accelerate your career, and sign up to take a certification exam.
Please join us on the #Think2019 CrowdChat, “The Journey to Cloud,” at noon EST Thursday, Jan. 24.
And don’t forget to tune into theCUBE for live interviews with IBM executives, developers, partners and customers during Think 2019.