Kubernetes: 6 secrets of successful teams

Kubernetes will manage container and application complexity for you, but do you know how to play your part? Here are six significant things that high-performing teams do when successfully running Kubernetes in production

Kubernetes has developed a reputation as a powerful but potentially complex platform for managing modern applications and infrastructure, and not without reason. It’s a serious tool, and there’s a learning curve.

“Kubernetes is complicated, especially if you expect zero downtime deployments,” says Matthew Dresden, director of DevOps at Nexient.

Complicated can be good, in the sense that Kubernetes isn’t necessarily designed for your plain-vanilla workloads that don’t need much in the way of operational overhead. Rather, Kubernetes is a means of effectively managing the complexity that comes with cloud-native software and the infrastructure it runs on. If you want to tame complexity, the tools you use to do so might be complex themselves.

Furthermore, a production Kubernetes deployment is about much more than just Kubernetes, according to Red Hat technology evangelist Gordon Haff. The open-source Kubernetes project brings together a variety of cloud-native tooling and technologies that work in concert to create a container platform. However, says Haff, “In practice, few enterprises will want to assemble all the continuous delivery, monitoring, tracing, service mesh, and other components that pair with Kubernetes to make up a production platform for writing and deploying cloud-native applications.”

For organizations that want to rely on someone who has already integrated and tested the platform for them, that’s where commercial platforms come in.

[ Read also: OpenShift and Kubernetes: What's the difference? ]

Kubernetes success is very much achievable, and it can be approached in several ways. So we asked Dresden and other experts to spill the secrets of successful Kubernetes work. Here are six significant things that high-performing teams do when it comes to successfully running Kubernetes in production.

1. They start small with Kubernetes and plan big

For most organizations and teams, it does not make sense to go gung-ho into a full-blown production deployment with a high-visibility application. This isn’t a simple case of “we used to use that tool, and now we use this tool.” You need to budget appropriately for experimentation and learning or risk getting burned in a baptism by fire.

“[High-performing teams] start small with [their] Kubernetes deployment and try it with non-critical applications first,” says Nilesh Deo, director of product marketing at CloudBolt Software. “Once they understand how things work, they can scale up and drive towards more automation.”

There are plenty of practical ways to start learning and tinkering; check out our recent article on three ways to get started with Kubernetes for some actionable ideas. These include Minikube, an open source tool that enables individuals and teams to quickly begin running a cluster on a local machine.
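For example, standing up a local Minikube sandbox takes only a couple of commands. Treat this as a sketch rather than an installation guide; exact flags and prerequisites (a hypervisor or container runtime) vary by version and operating system:

```shell
# Start a single-node cluster on the local machine
minikube start

# Confirm kubectl is pointed at the cluster and the node is ready
kubectl get nodes

# Tear the sandbox down when you're done experimenting
minikube delete
```

Because the cluster is disposable, teams can break things freely while they learn how Kubernetes behaves.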

[ For more on Minikube, check out: Minikube: 5 ways IT teams can use it. ]

“Do not jump in with both feet [right away],” Deo advises. “Spend time understanding Kubernetes better and how it will change the architecture, strategy, and culture.”

If you have big long-term plans for Kubernetes, plan accordingly. It’s software, not sorcery.

“You need to approach Kubernetes as you would any well-planned software development project – it’s not a ‘set it and forget it’ operation,” Dresden says. “That means good software design and practices around orchestration of the cluster, its hosted applications, and mandatory automated testing as part of the change process. Otherwise, K8s and its hosted services can break in surprisingly painful ways [from which they don’t] magically recover.”
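To make Dresden’s point concrete, one common pattern behind the zero-downtime deployments mentioned earlier is a rolling-update strategy paired with a readiness probe, so old Pods are replaced only after new ones report ready. A minimal sketch, with all names, images, and values purely illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                 # hypothetical application name
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1           # at most one extra Pod during a rollout
      maxUnavailable: 0     # never drop below the desired replica count
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: example.com/web:1.0   # hypothetical image
        readinessProbe:              # traffic shifts only once this passes
          httpGet:
            path: /healthz
            port: 8080
```

Settings like these are exactly the kind of design decision that should be made deliberately and covered by automated tests, rather than left at defaults and discovered during an outage.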

2. They think deeply about day two and beyond

It may once again be tempting to jump into production with abandon once you’re comfortable with the basics, so let’s double down on planning for a minute.

“Don’t rush to your first production Kubernetes deployment just because it’s easy,” Dresden says. “Put in the work upfront to architect and automate, test, and measure everything, and ensure your change processes are common knowledge, not tribal knowledge.”

Dresden and others point out that the temptation to run (or sprint) before you’ve learned to walk can be particularly strong with Kubernetes, in part because multiple providers make it relatively easy to get up and running in a managed environment. But that’s Day 1. You’re still on the hook to ensure you’re able to effectively manage your cluster and nodes on Day 2, Day 3, and … well, you get the idea.

6 questions to ask to avoid fires in production with Kubernetes

“What comes after that first deploy – that is the tricky part, especially if your apps are complex and stateful or require daemonsets,” Dresden says. He lists a half-dozen questions as examples of how successful teams think about their Kubernetes environment before they’re putting out fires in production:

  • How do we scale the cluster vertically and horizontally?
  • How do we upgrade the K8s control plane and its managed nodes?
  • How do we increase storage or make subnets bigger?
  • How do we deploy updates to the applications?
  • How do we track and respond to latency?
  • How do we test everything I just mentioned?
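As one hedged illustration of how the first question might be answered, a HorizontalPodAutoscaler can grow and shrink a Deployment’s replica count automatically based on observed load. The resource names and thresholds below are hypothetical:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa             # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web               # the workload being scaled
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # add Pods when average CPU exceeds 70%
```

Horizontal Pod autoscaling answers only part of the question, of course; scaling the cluster’s nodes themselves, and testing that scaling behavior, still need their own answers.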

[ Kubernetes terminology, demystified: Get our Kubernetes glossary cheat sheet for IT and business leaders. ]

3. They simplify, standardize, and automate wherever possible

You can use the best and brightest tools available – Kubernetes among them – throughout your software pipeline, but that alone won’t ensure success. That’s especially true if increased speed and frequency of deployments are among the measures of your success, not to mention such “minor” details as ensuring the code you’re deploying is reliable and secure.

“The complexity of integrating and deploying cloud-native software outstrips the ability of developers to manage [on their own] when producing updates in days or hours,” says Gadi Naor, founder and CTO at Alcide. “Successful organizations have learned to augment their CI/CD practices by installing automation wherever possible.”

Automation can help many areas of your software pipeline: Witness the growing use of Kubernetes Operators.

Naor notes that automation can be applied to many areas of your pipeline and points to examples such as automated systems to scan code for bug-free integration, automated systems to deploy code into production, and tools to automatically scan the software development pipeline for security drifts or suspicious behavior.

“Automation, applied judiciously to the CI/CD pipeline, can greatly simplify the CI/CD process for developers, ensuring a smoother development process and higher velocity code production,” Naor says.

Another way to think of automation here is as a means of simplifying and standardizing, which are also key to Kubernetes success, according to Yossi Jana, DevOps team leader at AllCloud.

“A strong team simplifies the whole deployment lifecycle,” Jana says, offering a particular perspective on what this can look like: “This is done by using customized infrastructure-as-code templates for the process of building and deploying on top of a cluster with GitOps, and having the process of getting the desired state to the current state without human intervention.”

Kubernetes success, especially at any kind of significant scale in a production environment, depends in part on making the most of its declarative capabilities – i.e., desired state – and automation possibilities. Growing interest in Kubernetes Operators is a good indicator of this, especially in terms of automation.
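The declarative model Jana describes is worth internalizing: a controller repeatedly compares desired state with current state and takes whatever actions close the gap, with no human in the loop. A toy sketch of that idea, far simpler than any real Kubernetes controller and using made-up resource names:

```python
def reconcile(desired: dict, current: dict) -> list:
    """Return the actions needed to move `current` toward `desired`.

    Both arguments map a workload name to its replica count; the
    names and counts here are illustrative, not a real API.
    """
    actions = []
    # Create or scale up anything that falls short of the desired state.
    for name, want in desired.items():
        have = current.get(name, 0)
        if have < want:
            actions.append(("create", name, want - have))
        elif have > want:
            actions.append(("delete", name, have - want))
    # Remove anything running that the desired state no longer mentions.
    for name, have in current.items():
        if name not in desired:
            actions.append(("delete", name, have))
    return actions


desired = {"web": 3, "worker": 2}
current = {"web": 1, "batch": 1}
print(reconcile(desired, current))
# → [('create', 'web', 2), ('create', 'worker', 2), ('delete', 'batch', 1)]
```

A real controller runs this comparison continuously, which is why a well-run cluster converges back to its declared state after failures instead of waiting for an operator to notice.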

“The goal of the organization is to standardize the way they operate around containers, Kubernetes, microservices, and CI/CD,” Jana says.

But resilience matters a lot too: