Kubernetes in production vs. Kubernetes in development: 4 myths
IT teams tinkering with containers and Kubernetes can discover a steep learning curve when their local changes deploy to production. Here's what to know ahead of time.
Myth #3: Orchestration makes scaling a cinch
Reality: While software pros commonly see an orchestration engine like Kubernetes as fundamentally necessary to container scalability, it’s a mistake to assume that orchestration makes scaling easy the moment you first move to a production environment.
“Running at scale changes everything,” says Frank Reno, senior technical product manager at Sumo Logic. “The volume of data produced is much more massive and your monitoring needs to scale with your data. The interfaces of all the Kubernetes components cannot be truly realized until you run in production. Ensuring that Kubernetes is healthy and running and that the API server and other control plane components scale to your needs is not automatic.”
Again, dev or test environments can make things seem a little too straightforward; there’s real work involved in ensuring those and other needs are met and maintained over time.
“In dev/test it can be easy to skip some basics, like ensuring you specify the right resource and request limits,” Reno says. “Failure to do that in production can definitely ruin your day.”
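The requests and limits Reno mentions are set per container in the pod spec. A minimal sketch, with illustrative names and values (the image and numbers here are placeholders, not recommendations):

```yaml
# Hypothetical pod spec showing explicit resource requests and limits.
# Without these, the container runs as BestEffort and is among the first
# candidates for eviction when a node comes under memory pressure.
apiVersion: v1
kind: Pod
metadata:
  name: api-server                 # illustrative name
spec:
  containers:
  - name: api
    image: example.com/api:1.4     # placeholder image
    resources:
      requests:                    # what the scheduler reserves for the container
        cpu: "250m"
        memory: "256Mi"
      limits:                      # hard caps; exceeding the memory limit
        cpu: "500m"                # gets the container OOM-killed
        memory: "512Mi"
```

In dev/test, omitting this stanza often goes unnoticed; in production, it determines scheduling, eviction order, and quality-of-service class.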
Scaling a cluster up or down is a prime example of something that might seem straightforward when you’re experimenting locally, but becomes considerably more challenging in a production environment.
“Production clusters, as opposed to development or staging clusters, have more pains around scaling,” says Rami Sass, CEO of WhiteSource. “Although horizontal scaling of applications is pretty straightforward in Kubernetes, there are a few considerations that DevOps [teams] need to consider, specifically around keeping the services up while scaling the infrastructure. It’s important to make sure that primary services, and mainly alerting systems, such as the ones for vulnerabilities and security, are spread across the cluster’s nodes and work with stateful volumes so no data is lost when scaling down.”
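One standard way to act on Sass’s point about keeping critical services up during scale-down is a PodDisruptionBudget, which limits how many replicas voluntary disruptions (such as node drains) can remove at once. A sketch, assuming a hypothetical alerting service labeled `app: alerting`:

```yaml
# Illustrative PodDisruptionBudget: during voluntary disruptions like the
# node drains that accompany cluster scale-down, Kubernetes will refuse to
# evict pods if doing so would leave fewer than two replicas available.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: alerting-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: alerting        # hypothetical label on the alerting service's pods
```

Spreading those replicas across nodes (via topology spread constraints or anti-affinity) and backing them with persistent volumes, as Sass suggests, covers the involuntary cases a disruption budget cannot.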
As with other challenges, it’s a matter of proper planning and resources.
“Understand your scaling needs, plan for them, and more importantly: Test!” advises Bhagirathan at Coda Global. “Your production environment will need to withstand much higher loads.”
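Planning for scale in Kubernetes typically means declaring autoscaling policy rather than scaling by hand. A minimal sketch of a HorizontalPodAutoscaler, with a hypothetical Deployment name and illustrative thresholds:

```yaml
# Illustrative HorizontalPodAutoscaler: scales a Deployment between
# 3 and 20 replicas to hold average CPU utilization near 70%.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: api-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api              # hypothetical deployment name
  minReplicas: 3
  maxReplicas: 20
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```

This is exactly the kind of configuration Bhagirathan’s advice applies to: the thresholds only mean something once you’ve load-tested against production-like traffic.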
Myth #4: Kubernetes runs the same everywhere
Reality: Just as there are differences between running Kubernetes on, say, a developer’s laptop versus running it on a production server, there can be differences across environments more generally.
“A common misunderstanding is [assuming that] if Kubernetes is functioning locally, it will work in production anywhere,” says Daniel Lopez, CEO and co-founder of Bitnami, who points to differences among some cloud platforms as examples of how this assumption might fail. “While Kubernetes is effective in providing consistent environments, there are still significant differences between vendors.”
Lopez also notes that deploying in production requires components that are not typically present locally, such as monitoring, logging, certificate management, and credentials. You’ll need to account for those; this is one of the key issues that widens that gap between dev/test and production environments.
This is also an arena where, again, it’s not just a Kubernetes consideration, but rather containers (and microservices) more broadly, especially in hybrid cloud and multi-cloud environments.
“Public-private deployments are trickier than they might seem on paper, since many of the necessary services, like load balancers and firewalls, are proprietary. A container that runs just fine in the lab might not run at all – or at least not safely – in a cloud environment with a different set of tools,” says Rajagopalan of Avi Networks. “That’s why service mesh technologies like Istio are getting so much attention. They make the same application services available wherever your container is running, so you don’t have to think about infrastructure – which is the whole point of containers in the first place.”
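To make Rajagopalan’s point concrete, a service mesh like Istio expresses routing as portable configuration rather than cloud-specific load balancer settings. A sketch of an Istio VirtualService splitting traffic between two versions of a hypothetical `reviews` service (the names and weights are illustrative; the `v1`/`v2` subsets would be defined in a separate DestinationRule, not shown):

```yaml
# Illustrative Istio VirtualService: sends 90% of traffic to v1 and 10%
# to v2 of an in-mesh service, using the same manifest on any cluster
# or cloud where the mesh is installed.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews-route      # hypothetical name
spec:
  hosts:
  - reviews                # hypothetical in-mesh service
  http:
  - route:
    - destination:
        host: reviews
        subset: v1         # subset defined in a DestinationRule (not shown)
      weight: 90
    - destination:
        host: reviews
        subset: v2
      weight: 10
```

Because this policy lives in the mesh rather than in a vendor’s load balancer, the same traffic split applies wherever the containers run, which is the portability argument Rajagopalan is making.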
[ Kubernetes terminology, demystified: Get our Kubernetes glossary cheat sheet for IT and business leaders. ]