
At KubeCon, the Kubernetes community discusses its growing pains

The Kubernetes market is on a fast growth path, but it’s also experiencing the issues that beset any rapidly evolving technology that’s being built out and refined by a global community.

This week at the annual KubeCon + CloudNativeCon North America 2018 event in Seattle, SiliconANGLE Media’s livestreaming video studio theCUBE interviewed a wide range of Kubernetes movers and shakers: solution providers, developers and users. In addition to discussing the technology’s many exciting applications, the interviewees drilled down into issues that, if not addressed systematically across the community, may stall its momentum.

Chief among these issues are simplifying the Kubernetes stack, assessing the maturity of commercial Kubernetes solutions, securing the Kubernetes open-source platform’s considerable attack surface, implementing serverless interfaces on top of the Kubernetes container orchestration environment, identifying suitable use cases for Kubernetes service meshes and integrating Kubernetes orchestration into the network routing and management backplane.

Here are some of the most interesting comments on those topics, edited for clarity, from interviewees at KubeCon.

Simplifying the Kubernetes stack

Enterprise adoption of Kubernetes will stall if the sprawling ecosystem of open-source tools isn’t converged into certified stacks that are easy to set up and manage. According to IBM’s Chris Rosen:

“It’s a very complex stack. We’re trying to … simplify managing that stack. So, at the top of the stack, of course we’ve got Kubernetes for the orchestration layer. Below that, we’ve got the engine. We’re using containerd now, but we also have Prometheus, Fluentd and Calico. When you think about managing that and a new version comes out from Kubernetes, how does that affect anything else in that stack? We’re not forking and doing anything IBMesque with Kubernetes. Now, what we do is, we build our solutions on top of these open-source projects, adding value, simplifying the management of those solutions.”
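Rosen’s point about version drift, that a new Kubernetes release ripples through every other layer of the stack, can be illustrated with a toy compatibility check. This is a hypothetical sketch: the component names come from the quote, but the version matrix and function are invented for illustration, not real IBM support data.

```python
# Toy illustration of the version-drift problem: when a new Kubernetes
# release lands, the engine, monitoring, logging and networking layers
# all have to be re-validated against it. The matrix is invented.
SUPPORT_MATRIX = {
    "kubernetes-1.13": {
        "containerd": {"1.2"},
        "prometheus": {"2.4", "2.5"},
        "fluentd": {"1.2", "1.3"},
        "calico": {"3.3"},
    },
}

def stack_is_compatible(k8s_version, components):
    """Return the sorted list of components whose versions are unsupported."""
    supported = SUPPORT_MATRIX.get(k8s_version)
    if supported is None:
        # Unknown Kubernetes release: everything needs re-validation.
        return sorted(components)
    return sorted(
        name for name, ver in components.items()
        if ver not in supported.get(name, set())
    )

# A stack that lagged on Calico after a Kubernetes upgrade:
print(stack_is_compatible(
    "kubernetes-1.13",
    {"containerd": "1.2", "prometheus": "2.5", "fluentd": "1.3", "calico": "3.1"},
))  # ['calico']
```

The value a certified distribution adds is, in effect, maintaining and testing that matrix so the customer doesn’t have to.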

Assessing the maturity of commercial Kubernetes-based cloud-native computing platforms

Kubernetes’ incorporation into complex enterprise cloud computing platforms demands an industrywide focus on certifying these stacks as being ready for robust, scalable enterprise production applications. It also requires that Kubernetes-based environments be certified as suitable not just for greenfield cloud-native applications but also as “lift-and-shift” targets for migrating legacy enterprise workloads. According to IBM’s Daniel Berg:

“When you really start hitting real scale, meaning 500, 1,000 or a couple thousand nodes, and we’re dealing with tens of thousands of clusters, you start hitting different pressure points inside of Kubernetes, things that most customers are not going to hit, and they’re gnarly problems. They’re really complicated problems. One of the most recent ones we hit is scaling problems with Custom Resource Definitions. [When] it starts to hit another pressure point, you then have to start working through scaling of Kubernetes … dealing with scheduling problems. Once you start getting into these larger numbers, that’s when you start hitting these pressure points, and, yes, we are making changes and then contributing those back to the upstream.

“For us, the key value here is first of all providing a certified Kubernetes platform. Kubernetes has matured. It has gotten better. It’s very mature. You can run production workloads on it, no doubt. We’ve got many examples of it, so providing a certified managed solution around that, where customers can focus on their application and not so much the platform, is highly valuable. Because it’s certified, they can code to Kubernetes. We always push our teams, both internal and external, to focus on Kubernetes, on building a Kubernetes-native experience, because that’s going to give you the best portability, whether you’re using IBM cloud or another cloud provider. It’s a fully certified platform for that.

“We’ve seen almost every type of workload now because a lot of people were asking what kind of workloads can you containerize? Can you move to Kubernetes? Based on what we’ve seen, pretty much all of them can move, and we do see a lot of the whole ‘lift-and-shift and just put it on Kubernetes.’”

Securing the attack surface that Kubernetes-based systems expose

As it begins to pervade every niche of the cloud-native universe, Kubernetes has the potential to become a huge security vulnerability. This stems from many factors, including the heterogeneous implementations of the open-source distribution on the market, the myriad application programming interfaces that Kubernetes-based solutions present, and the immaturity of security solutions that address diverse Kubernetes deployment and configuration scenarios. The unfamiliarity of Kubernetes technology among many security professionals complicates efforts to protect this platform in many enterprises, as does the need to apply patches in a timely, consistent fashion across both newer Kubernetes-based platforms as well as to older, legacy cloud application environments with which they interoperate. According to Red Hat Inc.’s Ashesh Badani:

“[Let’s] not forget the things that enterprises care about. Last week we had our first big security issue released on Kubernetes: the privilege escalation flaw. Obviously, we participate in the community. We had a bunch of folks, along with others, addressing that, and then we rolled out our patches. Our patch rollout went back all the way to version 3.2, which shipped in early 2016.

“Now, on the one hand you say, hey, everyone has DevOps, so why do you need to have a patch for something that’s from 2016? That’s because customers still aren’t moving as quickly as we’d like. There’s an enthusiasm with regard to, everyone’s quick, everything’s lightning fast. At the same time, we often find … some enterprises will just take a little bit longer.

“What we want to do is ensure … the platform. So we talked about the security lifecycle … supporting these cloud-native next-generation stateless applications, but also established legacy stateful applications all on the same platform. The work we’re doing is to ensure we leave no application behind.”

Implementing serverless with Kubernetes in complex cloud-native environments

Serverless is becoming a key component in cloud-native computing, but its interactions with Kubernetes-orchestrated containers are still very much a work in progress. What serverless abstractions such as Knative do is layer event-driven interactions over Kubernetes. According to Google Cloud’s Kelsey Hightower:

“If you’re a Kubernetes user, if you really think about the very broad definition of serverless, [it means] I’m not managing the database, I’m using a managed database, serverless database. For storage, I’m using S3 or Google Cloud storage, serverless. Your load balancer, also serverless. So most people in the Kubernetes ecosystem, networking, storage [and] database [are] serverless.

“The only thing that you can say isn’t serverless is this compute component, everything else is. Now people are looking at serverless as this spectrum. How serverless are you? If you’re on-prem and you buy a server and you rack it and install Kubernetes, you’re less serverless, you’re probably not serverless at all, no matter what you do.

“Now, if you put a lot of work in, you can probably put a serverless interface on top. This is what Knative is designed to do for people. Maybe you have an organization that supports multiple businesses inside of your org. They may not know anything about Kubernetes. You just tell them hey, put your code here, it will run, oh, that feels serverless. You can provide a serverless experience.

“The delta then becomes what we can do between a container and a function. What does it mean to take a container and put it into Lambda? What do you have to change? We’re not talking about throwing away Kubernetes and starting our entire architecture over. We’re swapping out the compute layer. One is a subset of the other. Lambda is about events and functions; Kubernetes is about containers, run them however you want. [If] you want to run one when an event comes in, that’s Knative. When we break it down, you’re just talking about compute.”
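Hightower’s “run it when an event comes in” model can be sketched in a few lines. This is a toy illustration of the event-driven dispatch pattern that systems like Knative layer over Kubernetes, not the Knative API itself; all names here are invented.

```python
# Minimal sketch of event-driven compute: a registry maps event types to
# handlers (standing in for containers), and compute runs only when an
# event actually arrives -- the "serverless" property Hightower describes.
class EventRouter:
    def __init__(self):
        self._handlers = {}

    def subscribe(self, event_type, handler):
        """Register a handler to be invoked for a given event type."""
        self._handlers.setdefault(event_type, []).append(handler)

    def publish(self, event_type, payload):
        """Deliver an event; with no subscribers, no compute runs at all."""
        return [h(payload) for h in self._handlers.get(event_type, [])]

router = EventRouter()
router.subscribe("image.uploaded", lambda e: f"thumbnail for {e['name']}")

print(router.publish("image.uploaded", {"name": "cat.png"}))
# ['thumbnail for cat.png']
print(router.publish("image.deleted", {"name": "cat.png"}))
# [] -- no subscriber, nothing ran
```

The point of the sketch is the asymmetry Hightower draws: the handler code never manages the machinery that invokes it, which is what makes the compute layer feel serverless even when Kubernetes sits underneath.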

Identifying suitable use cases for Kubernetes service meshes

Orchestrated service meshes, such as Istio, will become essential for more complex, distributed and multiplatform Kubernetes environments. According to Brian “Redbeard” Harrington of Red Hat:

“The beautiful thing is [that] using a service mesh is not anything new at all. I mean, it was really built on top of the Netflix OSS ideas. Those have been around for seven, eight years now. It’s really just decomposing what were a bunch of individual libraries that you had to implement into infrastructure services, so that, regardless of the language, environment, etc., you’ve always got a certain base [mesh] platform ready to go.

“[Whether or not meshes become the ‘new normal’], I think to some extent… depends on the scale that you’re at. If you are at the scale of Yelp… and using Envoy, you already have a good idea of what that mesh is going to look like, so you’re building that control plane in the way that you need it.

“Where Istio and Linkerd and some of the other ones come in is when you are a smaller scale and you need to figure out what your control plane is going to look like. That’s where [mesh] really shines, because it gives you something that you can just start using and has some training wheels on it to make sure that you’ve got a stable platform to use from day one.”
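The library-to-infrastructure shift Harrington describes can be sketched with one of the behaviors meshes typically absorb: retrying failed calls. In a real mesh such as Istio or Linkerd this lives in a sidecar proxy, outside the application, regardless of what language the service is written in; the toy version below is illustration only, and every name in it is invented.

```python
# Sketch of resilience logic pulled out of application code: a generic
# retry wrapper that works for any callable, the way a mesh sidecar adds
# retries for any service regardless of its implementation language.
def with_retries(call, attempts=3):
    """Invoke `call`, retrying on transient failure up to `attempts` times."""
    last_error = None
    for _ in range(attempts):
        try:
            return call()
        except ConnectionError as err:
            last_error = err  # transient failure: try again
    raise last_error

# A stand-in service that fails twice, then recovers.
failures = iter([ConnectionError("reset"), ConnectionError("reset")])

def flaky_service():
    try:
        raise next(failures)
    except StopIteration:
        return "200 OK"

print(with_retries(flaky_service))  # 200 OK
```

The design point is that `flaky_service` contains no retry logic of its own; moving that concern into shared infrastructure is exactly the decomposition of Netflix-style per-language libraries that the quote describes.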

Integrating Kubernetes into network routing and management

Kubernetes is a potential platform for extending containerized orchestration more deeply into the network and systems layers, but that piece of the cloud-native puzzle is still being worked out by the community. According to Juniper’s Scott Sneddon:

“It’s kind of two worlds colliding and working together: a systems kind of view, almost like operating systems [and] the network systems, all kinds of systems thinking. And then just apps.

“You’ve still got to make things dynamic, you’ve still got latency, and on-premises [is] not going away. You’ve got [the ‘internet of things’], so networking plays a really big [role] as software starts figuring things out [through] Kubernetes.

“Apps [are] going to have policy. The network has always been the foundation of technology, at least for the last 20-plus years. And as cloud has been adopted, we’ve seen network scale driven in different ways. We’ve been enabling the megascalers that have built this infrastructure for quite a while, and we’ve been working with those customers as well. We’ve been developing a lot of simplified architecture just for the physical plumbing to connect these things together.

“But what we’ve seen, and it’s more and more important, is that it’s all about the app; the app is the thing that’s going to consume these things. And the app developer doesn’t necessarily want to worry about IP addresses and port numbers and firewall rules and things like that, so how could we more simply abstract that? So, you know, we’ve been developing automation aimed at the network for quite a while, but I think more and more it’s becoming important that the application can just consume that without having to direct the automation at the app.

“And [we’re working with] groups like CNCF [on] Kubernetes [and] network policy. Let’s use cloud-native primitives and then we can translate into the network primitives that we need to deploy to move packets [and manage] IP addresses and subnets.”
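The translation step Sneddon describes, from app-level primitives such as labels down to network-level allow/deny decisions, can be sketched as a tiny policy evaluator. The structure loosely mimics a Kubernetes NetworkPolicy pod selector with ingress rules, but the data shapes and function names below are invented for illustration and omit real-world details like namespaces and default-allow behavior.

```python
# Toy evaluator: labels (the cloud-native primitive developers see) are
# matched by selectors, and the result drives the network-level decision
# the infrastructure would actually enforce.
def selector_matches(selector, labels):
    """True if every key/value pair in the selector appears in the labels."""
    return all(labels.get(k) == v for k, v in selector.items())

def is_allowed(policy, source_labels, dest_labels):
    """Allow traffic when the policy selects the destination and the
    source matches one of its ingress rules (simplified for the sketch)."""
    if not selector_matches(policy["pod_selector"], dest_labels):
        return False  # destination out of scope for this toy policy
    return any(selector_matches(rule, source_labels)
               for rule in policy["allow_from"])

# "Only the api tier may talk to the database tier."
policy = {
    "pod_selector": {"app": "db"},
    "allow_from": [{"app": "api"}],
}

print(is_allowed(policy, {"app": "api"}, {"app": "db"}))  # True
print(is_allowed(policy, {"app": "web"}, {"app": "db"}))  # False
```

Nothing in the policy mentions an IP address or subnet; producing those, per node and per pod, is precisely the cloud-native-to-network-primitive translation the quote points at.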

To see these and other full interviews from the event, visit theCUBE’s dedicated website, the dedicated Ustream channel or SiliconANGLE’s dedicated YouTube channel.
