Kubernetes: 3 ways to get started
How can you get started with Kubernetes? Maybe you’re ready to try an experiment – or to move from sandbox to production? Use these three stepping stones for individuals and teams.
3. Get a handle on application fit for Kubernetes
Like most tools, Kubernetes isn’t a fit for everything. So a useful and necessary Kubernetes path for teams to travel often begins with the question: What are we going to use this for? One guiding principle: Just because you can use Kubernetes to run a particular application doesn’t always mean you should.
“Essentially, Kubernetes offers an orchestration platform to any application capable of running inside containers,” Middela says. “But all of them might not support features such as scaling up/down [or be] tolerant to application restarts. [The application] should be quick enough to respond to a [liveness] probe and should be stateless in approach.”
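To make that concrete, a liveness probe is declared directly in the Pod spec; if the probe fails repeatedly, the kubelet restarts the container. A minimal sketch (the image name and health endpoint are illustrative, not from a real app):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-app                        # illustrative name
spec:
  containers:
  - name: web
    image: example.com/web-app:1.0     # hypothetical image
    ports:
    - containerPort: 8080
    livenessProbe:
      httpGet:
        path: /healthz                 # assumed health-check endpoint
        port: 8080
      initialDelaySeconds: 5           # give the app time to start
      periodSeconds: 10                # probe every 10s; failures trigger a restart
```

An app that can’t answer this probe quickly, or that loses critical state on restart, is exactly the kind of poor fit Middela is describing.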
Vempati recommends using Kubernetes’ key capabilities as criteria for evaluating application fit. “To identify good application candidates that can operate in [a] Kubernetes environment, it is important to understand the strengths of Kubernetes,” he says. He lists some key features to consider as example criteria:
- Auto-scaling of containers
- Scale-out capabilities; e.g., dynamic addition of nodes at runtime
- Automated container deployment
- Automated provisioning/instantiation of containers
- Scheduling containers to run at/before/after a particular time
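Several of these capabilities map directly onto built-in Kubernetes objects: auto-scaling is handled by a HorizontalPodAutoscaler, rolling deployments by a Deployment, and time-based runs by a CronJob. As a sketch of the last item, here is a CronJob that runs a container on a fixed schedule (the job name and image are invented for illustration):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-report                      # hypothetical job name
spec:
  schedule: "0 2 * * *"                     # standard cron syntax: 02:00 daily
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure          # retry the container if it fails
          containers:
          - name: report
            image: example.com/report-gen:1.0   # hypothetical image
```

If none of your applications would ever use objects like these, that is itself a useful signal about fit.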
If a particular application won’t really benefit from those kinds of capabilities, it might not be worth the effort. But plenty of apps might. Vempati shares several examples of application characteristics (or requirements) that indicate a potential fit.
- Container-hosted web applications that require automatic scaling based on a varying user base. Just how much variance? “Say, a swing between 10 concurrent users to 100 concurrent users,” Vempati says, for an example rule of thumb.
- Multiple container-hosted web services (microservices) that require different scaling levels. “Scalability” tends to be used as a blanket term, but in reality, different services can have considerably different, specific scaling needs, and Kubernetes can help keep these differences manageable. Vempati unpacks a basic scenario here: “Service A needs to cater to 1000 concurrent requests, but Service B only caters to less than 100 concurrent requests. Service A and Service B scale on the same cluster, but differently.”
- Applications in the DevOps domain: Apps built and supported by DevOps teams tend to overlap other characteristics (such as microservices) mentioned here, and Kubernetes is often noted as a compelling option for continuous integration/continuous deployment (CI/CD) environments.
- Applications (e.g., Web apps, microservices) that can be distributed across multiple locations connected on the same network. The “orchestra” metaphor attached to Kubernetes and other container orchestration tools isn’t just a conceit. Microservices and distributed environments (such as hybrid cloud and multi-cloud settings) require the proper tools to keep things operationally manageable.
- Applications that require High Availability (HA): Not all apps are created equal. Apps that require a very high level of availability and resilience, Vempati says, are better suited to Kubernetes than apps for which availability is less of a concern.
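Vempati’s Service A/Service B scenario translates naturally into per-service autoscalers on the same cluster: each service gets its own scaling policy while sharing infrastructure. A hedged sketch, with all names and replica limits invented for illustration:

```yaml
# Service A: sized for ~1000 concurrent requests
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: service-a-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: service-a            # hypothetical Deployment
  minReplicas: 4
  maxReplicas: 20              # generous ceiling for the busy service
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 60 # add replicas when average CPU exceeds 60%
---
# Service B: sized for <100 concurrent requests
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: service-b-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: service-b            # hypothetical Deployment
  minReplicas: 1
  maxReplicas: 3               # much smaller ceiling for the quiet service
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 60
```

The two services scale independently under the same control plane, which is the manageability Vempati is pointing at.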
You’ll see a pattern in Vempati’s examples: The plain-vanilla monolith is probably not the ideal candidate for Kubernetes. If you can spot the cases in your own portfolio where you’d be forcing incompatible puzzle pieces together, you’re moving in the right direction.
[ Want to learn more about building cloud-native apps and containers? Get the whitepaper: Principles of container-based application design. ]