7 pieces of contrarian Kubernetes advice
IT leaders and Kubernetes users share some advice that goes against the grain – because conventional wisdom doesn’t always work
4. It will take time to get automation right
A basic selling point of Kubernetes – and of orchestration in general – is that it tames the operational burden of running production containers at scale, largely through automation. It's worth remembering, though, that "automation" is not a synonym for "easy," especially while you're setting it up. Expect some real effort to get it right; the return on that investment comes over time.
“Kubernetes is a wonderful platform for building highly scalable and elastic solutions,” Vempati says. “One of the key [selling points] of this platform is that it very effectively supports continuous delivery of microservices hosted in containers for cloud scale.”
This sounds great to any IT team working in multi-cloud or hybrid cloud environments, especially as their cloud-native development increases. Just be ready to do some legwork to reap the benefits.
“To support automated continuous delivery for any Kubernetes-based solution is not [as] simple as it may [first] appear,” Vempati says. “It involves a lot of preparation, simulation of multiple scenarios based on the solution, and several iterations to achieve the [desired results.]”
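As a rough illustration of the preparation involved, even a basic Deployment that can be safely delivered continuously needs things Kubernetes won't infer for you – a rollout strategy and a readiness probe, for instance. A minimal sketch (all names, images, and endpoints below are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-service          # illustrative name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example-service
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1          # keep most replicas serving during a rollout
      maxSurge: 1                # allow one extra pod while updating
  template:
    metadata:
      labels:
        app: example-service
    spec:
      containers:
      - name: app
        image: registry.example.com/example-service:1.0.0  # illustrative image
        readinessProbe:          # gates traffic until the pod is actually ready
          httpGet:
            path: /healthz      # assumes the app exposes a health endpoint
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 10
```

Getting values like these right for each service – and verifying them across the scenarios Vempati mentions – is where the iterations come in.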
5. Be judicious in your use of persistent volumes
The original conventional wisdom that containers should be stateless has changed, especially as it has become easier to manage stateful applications such as databases.
[ Read also How to explain Kubernetes Operators in plain English. ]
Persistent volumes are the Kubernetes abstraction that enables storage for stateful applications running in containers. In short, that’s a good thing. But Vempati urges careful attention to avoid longer-term issues, especially if you’re in the earlier phases of adoption.
“The use of persistent volumes in Kubernetes for data-centric apps is a common scenario,” Vempati says. “Understand the storage primitives available so that using persistent volumes doesn’t spike costs.”
Factors such as the type of storage you’re using can lead to cost surprises, especially when PVs are created dynamically in Kubernetes. Vempati offers an example scenario:
“Persistent volumes and their claims can be dynamically created as part of a deployment for a pod. Verify the storage class to make sure that the right kind of storage is used,” Vempati advises. “SSDs on public cloud will have a higher cost associated with them, compared to standard storage.”
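One way to heed that advice is to name the storage class explicitly in the PersistentVolumeClaim rather than relying on the cluster default, which on public clouds often maps to pricier SSD-backed storage. A minimal sketch – the claim name and class name below are illustrative and vary by provider:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data                 # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard     # illustrative; run `kubectl get storageclass` to see your cluster's options
  resources:
    requests:
      storage: 20Gi              # request only what the workload needs
```

Omitting storageClassName falls back to the cluster's default class, which is exactly where dynamic provisioning tends to produce the cost surprises described above.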
6. Don’t evangelize Kubernetes by shouting “Kubernetes!”
If you’re trying to make the case for Kubernetes in your organization, it may be tempting to simply ride the surging wave of excitement around it.
“The common wisdom out there is that simply mentioning Kubernetes is enough to gain someone’s interest in how you are using it to solve a particular problem,” says Glenn Sullivan, co-founder at SnapRoute, which uses Kubernetes as part of its cloud-native networking software’s underlying infrastructure. “My advice would be to spend less time pointing to Kubernetes as the answer and piggybacking on the buzz that surrounds the platform and focus more on the results of using Kubernetes. When you present Kubernetes as the solution for solving a problem, you will immediately elate some [people] and alienate others.”
One reason they might resist it or tune out is that they simply don’t understand what Kubernetes is. You can explain it to them in plain terms, but Sullivan says the lightbulb moment – and subsequent buy-in – is more likely to occur when you show them the results.
“We find it more advantageous to promote the [benefits] gained from using Kubernetes instead of presenting the integration into Kubernetes itself as the value-add,” Sullivan says.
7. Kubernetes is not an island
Kubernetes is an important piece of the cloud-native puzzle, but it’s only one piece. As Red Hat technology evangelist Gordon Haff notes, “The power of the open source cloud-native ecosystem comes only in part from individual projects such as Kubernetes. It derives, perhaps even more, from the breadth of complementary projects that come together to create a true cloud-native platform.”
This includes service meshes like Istio, monitoring tools like Prometheus, command-line tools like Podman, distributed tracing from the likes of Jaeger and Kiali, enterprise registries like Quay, and inspection utilities like Skopeo, says Haff. And, of course, Linux, which is the foundation for the containers orchestrated by Kubernetes.
[ Kubernetes terminology, demystified: Get our Kubernetes glossary cheat sheet for IT and business leaders. ]