Kubernetes: 5 tips we wish we knew sooner

Kubernetes turns seven this month: that's long enough to have accumulated plenty of lessons learned. Check out this advice from Kubernetes pros on how to save time and avoid headaches.

Kubernetes is known for many things, most of all as a powerful container orchestration tool. It’s also known for its learning curve when you’re just starting out.

The platform has been around long enough now, however, that individuals and teams who are new to the platform don’t have to figure everything out on their own. There’s plenty of help out there.

[ Related read: Kubernetes: 6 open source tools to put your cluster to the test ]

5 Kubernetes lessons learned

With that in mind, we tapped several experts to help speed up your own learning. We asked an essential question: What do experienced Kubernetes pros wish they’d known sooner? They shared their top tips, lessons learned, and other advice for Kubernetes success – and of course we’re sharing them with you. Let’s look at five things worth learning about K8s as early as possible to save time and potential headaches later.

1. Successful automation requires diligent auditing

One of Kubernetes’ major promises is that it can help automate what would otherwise be unsustainable operational overhead when it comes to running containers at scale. That automation doesn’t necessarily happen, uh, automatically, however. If you’re building a platform from scratch based on the underlying open source distribution, you’ll definitely want to prioritize this issue.

“Raw Kubernetes with its Helm charts and YAML files requires significant scripting and manual effort to deploy,” says Bruno Andrade, CEO at Shipa. “In my own experience with Kubernetes, I believe that if I had to put a dollar in a jar every time I made a mistake writing YAML files, I’d have run out of money long ago! Automation that abstracts away the nuts and bolts of Kubernetes that developers don’t care about is better to jump on sooner than later.”

Look for repetitive tasks as a starting point, such as automatically scanning container images for known security vulnerabilities.

This is not an area where automation removes humans from the loop. On the contrary, productive automation requires regular monitoring and intervention.

“One thing that’s better to learn earlier than later with Kubernetes is that automation and audits have an interesting relationship: automation reduces human errors, while audits allow humans to address errors made by automation,” Andrade notes. You don’t want to automate a flawed process.

It’s often wise to take a layered approach to container security, including automation. Examples include automating security policies governing the use of container images stored in your private registry, as well as performing automated security testing as part of your build or continuous integration process. Check out a more detailed explanation of this approach in 10 layers of container security.
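To make that concrete, here is a minimal sketch of what an automated image scan in a CI pipeline might look like. It assumes GitHub Actions and the open source Trivy scanner, and the image name is a placeholder:

```yaml
# Hypothetical CI job: block the pipeline when an image has known high or critical CVEs.
# Assumes GitHub Actions and the open source Trivy scanner; the image name is a placeholder.
name: image-scan
on: [push]
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build the candidate image
        run: docker build -t myapp:${{ github.sha }} .
      - name: Scan for known vulnerabilities
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: myapp:${{ github.sha }}
          severity: HIGH,CRITICAL
          exit-code: '1'  # a non-zero exit fails the job, forcing a human to review the findings
```

The failing exit code is the point: when the scan finds something, the pipeline stops and a person has to look at it, which is exactly the kind of audit checkpoint Andrade describes.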

Kubernetes operators are another tool for automating security needs. “The really cool thing is that you can use Kubernetes operators to manage Kubernetes itself – making it easier to deliver and automate secured deployments,” as Red Hat security strategist Kirsten Newcomer explained to us. “For example, operators can manage drift, using the declarative nature of Kubernetes to reset any unsupported configuration changes.” (For more on strategies and open source tools to automate security, read How to automate compliance and security with Kubernetes: 3 ways.)

[ Kubernetes terminology, demystified: Read How to explain Kubernetes in plain English and get our Kubernetes glossary cheat sheet for IT and business leaders. ]

2. Ignore Kubernetes pod labeling at your budget's peril

Kubernetes enables labeling of objects such as pods as a means of applying an organizational scheme to your system. Labels are key-value pairs. As the Kubernetes documentation notes, labels “specify identifying attributes of objects that are meaningful and relevant to users, but do not directly imply semantics to the core system.”

This might seem at first like an “extra” feature that you don’t need to pay attention to, but you might regret that thought process later on, especially if you’re trying to optimize your spending.

"A word of wisdom that will save you long-term headaches: pick and enforce a pod labeling scheme early on!” says Ajay Tripathy, chief technology officer at Kubecost. “If you don’t, you’ll quickly lose the ability to extract costs at meaningful aggregations.”

There are various strategies for developing your own labeling scheme; use something that makes sense for your particular team or organization.

“Possible routes could be creating pod labeling conventions by application, environment, team/owner and/or customer, as well as relying on namespaces,” Tripathy says. “Doing this – and doing it early on – will help ensure you create the visibility you’ll need across teams, departments, products, customers, etc. Develop your pod labeling scheme and stick with it.”
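As a hypothetical illustration of that advice, here is what such a convention might look like in a Deployment manifest; the application, team, and customer values are placeholders:

```yaml
# A minimal pod-template excerpt showing one possible labeling convention
# (application, environment, team/owner, customer); all values are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: checkout
  labels:
    app: checkout
spec:
  selector:
    matchLabels:
      app: checkout
  template:
    metadata:
      labels:               # labels on the pod template are what cost tools aggregate on
        app: checkout
        environment: production
        team: payments
        customer: internal
    spec:
      containers:
        - name: checkout
          image: registry.example.com/checkout:1.4.2
```

With labels like these applied consistently to pod templates, cost and monitoring tools can aggregate spending by team, environment, or customer, and you can slice things yourself with a selector such as kubectl get pods -l team=payments,environment=production.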

The hands-on Kubernetes by Example site offers a good introduction to labels, too.

3. Understand your applications' resource needs

It may be tempting for developers to treat Kubernetes like any other environment, especially if the team is trying to move fast. This could be OK if you’re tinkering locally or just managing a single application. But if you’re planning to run multiple containerized applications in a scalable production environment? Make sure you understand your resource requirements first, according to Raghu Kishore Vempati, director of technology at Capgemini Engineering.

“Many developers simply create their applications and push them on a Kubernetes cluster. It is important to understand the resources that the applications use,” Vempati says. “For normal developer environments, this is not an issue. But for production-grade environments where many applications are co-hosted, this will likely lead to issues in the environment. It is important to properly set the constraints or thresholds for the resources that the applications can use.”
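Here is a minimal sketch of what those constraints look like in practice, with placeholder numbers you would tune to your application's observed usage:

```yaml
# A sketch of per-container requests and limits; the values are placeholders
# derived from watching what the application actually consumes.
apiVersion: v1
kind: Pod
metadata:
  name: checkout
  labels:
    app: checkout
spec:
  containers:
    - name: checkout
      image: registry.example.com/checkout:1.4.2
      resources:
        requests:         # what the scheduler reserves for the container
          cpu: 250m
          memory: 256Mi
        limits:           # the ceiling; exceeding the memory limit gets the container killed
          cpu: 500m
          memory: 512Mi
```

In shared, production-grade environments, namespace-level guardrails such as LimitRange and ResourceQuota objects can also enforce sensible defaults and caps for teams that forget to set these values.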

[ Related read: Managing Kubernetes resources: 5 things to remember ]

4. Don't play around with etcd

Vempati shares another lesson that you probably don’t want to learn the hard way: Resist the urge to tinker with etcd, which is a key-value store for distributed systems that functions as a backing store for your Kubernetes cluster information.

“If we create a cluster and configure it in a particular way, that information would go into etcd,” Vempati says. “Developers do backup and restore [of] K8s clusters with the support of etcd. In the process, if they play around with any of the key-value pairs in the etcd store belonging to the cluster, without understanding the consequences of [those changes], the cluster when restored can become dysfunctional.”

Vempati points to the official documentation for a deeper dive on the nuts and bolts of etcd cluster operations. But the thing to keep in mind early is that this is probably not an area where you want to indulge your natural inclination to tinker and experiment.

“It is the backbone for the cluster,” Vempati says. “[If] we toy around with it, the K8s cluster will be useless in no time.”

5. You don't need to go it alone

The burgeoning ecosystem and community around Kubernetes seems to offer something for everyone these days, and that’s generally a good thing.

“Surveying all the projects that complement Kubernetes can make you feel a bit like a kid in a candy store,” says Gordon Haff, technology evangelist at Red Hat. “There’s monitoring, security scanning, service meshes, CI/CD tools, registries, and more. It can be tempting to jump right in, download some software, and start building a container platform. Sure, it all looks a bit complex but how hard can it be?”

Spoiler alert: “Pretty hard, it turns out,” Haff says, adding that this is a major contributor to Kubernetes’ reputation for being complicated. “Many who go the DIY route also come to the realization that their organization is not in the business of building custom container platforms.”

[ Read also: OpenShift and Kubernetes: What’s the difference? ]

The good news: You don’t have to be, because there are robust commercial distributions built by teams that actually are in the business of building container platforms, like Red Hat's OpenShift. That lets your team focus on developing its applications and services rather than the platform they run on. And going this route doesn’t mean you need to forgo control and flexibility, either.

“An enterprise Kubernetes distribution is a good way to maintain some choice – for example, where you physically run your Kubernetes cluster or clusters – while having someone whose job it is to build a container platform make prescriptive choices, perform integration testing, and choose sensible defaults,” Haff says. “You still have the option to customize as needed, but get a documented product that takes a lot of the friction out of getting started developing your cloud-native applications.”

[ Want to learn more? Get the free eBook: O'Reilly: Kubernetes Operators: Automating the Container Orchestration Platform. ]

Kevin Casey writes about technology and business for a variety of publications. He won an Azbee Award, given by the American Society of Business Publication Editors, for his InformationWeek.com story, "Are You Too Old For IT?" He's a former community choice honoree in the Small Business Influencer Awards.