Migrating applications to containers and Kubernetes: 5 best practices
Let’s examine key considerations for migrating existing applications to containers and Kubernetes, according to experts
We recently shared 7 best practices on building applications specifically for containers and Kubernetes. But what if you’re considering containerizing an existing app? This comes with a different set of considerations and recommendations.
Among other advice we’ll get to momentarily: Just because you can do it doesn’t always mean you should, and that evaluation is a must-do for just about any team. If you do determine there’s value in moving an existing application to a containers-and-orchestration model, then the rest of the work begins.
Whether you are building apps from scratch or migrating existing ones, remember that this is hard, as Chris Short, Red Hat OpenShift principal technical marketing manager, told us. “None of the abstractions that exist in Kubernetes today make the underlying systems any easier to understand. They only make them easier to use,” Short says. Your teams should expect to learn from mistakes, he adds.
[ Kubernetes 101: An introduction to containers, Kubernetes and OpenShift: Watch the on-demand Kubernetes 101 webinar.]
We asked multiple experts for their advice on how to best navigate the migration process. Let’s dig into five tips to boost your odds of a smooth transition.
5 best practices for migrating apps to containers and Kubernetes
1. Don’t do it just because you can
First, make sure your decision to containerize an existing monolithic application and operate it with Kubernetes is a sound one. You can repackage many monoliths in a container, but that doesn’t mean you should. Make sure the technical decision aligns with your broader strategy.
“When moving to any new platform, a team’s goals should drive how the architecture looks,” says Ravi Lachhman, DevOps advocate at Harness. “Moving to containers and Kubernetes just for the sake of doing it can actually introduce a lot more scope and technical challenges, depending on the applications and the teams running them. Migrating existing applications into a new platform with a long-term vision in mind helps teams focus on innovation and doing what’s right for the applications.”
Some teams opt to containerize a monolithic app as a means of doing a “lift-and-shift” from an on-premises environment to a public cloud. Again, though, be sure that’s actually the right choice.
“Let’s say you have a large monolithic application in your own data center but you want to move to a public cloud,” says Ken Mugrage, principal technologist in the office of the CTO at ThoughtWorks. “While you most likely can package that application in a single container and then run it in Kubernetes, it’s rarely the best way to get it to the cloud.”
Miles Ward, CTO at SADA, similarly notes that many existing workloads can be retrofitted for containers without degrading the application, but there’s some work involved (more on this in a moment). Before you do that work, ensure the fit is right.
“The workloads that give us pause are those that require a disk to which they record important long-lived data,” Ward says. A hosted service or other option may make sense in some of these cases, he notes.
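To make Ward’s concern concrete: a workload that records long-lived data can’t rely on a container’s ephemeral filesystem. In Kubernetes it would need a PersistentVolumeClaim backed by storage your cluster provides. The sketch below is illustrative only; all names, the image, and the mount path are hypothetical:

```yaml
# Hypothetical claim for a workload that writes long-lived data to disk.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: orders-data
spec:
  accessModes:
    - ReadWriteOnce        # single-node read/write, typical for block storage
  resources:
    requests:
      storage: 10Gi
---
# The pod template mounts the claim instead of writing to the container layer.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders
spec:
  replicas: 1              # stateful; scaling past 1 needs more design work
  selector:
    matchLabels:
      app: orders
  template:
    metadata:
      labels:
        app: orders
    spec:
      containers:
        - name: orders
          image: registry.example.com/orders:1.0   # hypothetical image
          volumeMounts:
            - name: data
              mountPath: /var/lib/orders
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: orders-data
```

If even this arrangement fits poorly, that is exactly the case where a hosted service, as Ward suggests, may make more sense than containerizing.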
[ Kubernetes terminology, demystified: Get our Kubernetes glossary cheat sheet for IT and business leaders. ]
2. Seriously consider refactoring your application
If you’re going to migrate a monolith to containers and run it with Kubernetes, do your due diligence on refactoring the application to maximize the benefits of such a move.
“Containers can be used for ‘lifting and shifting’ existing applications, but in order to enjoy their benefits, you should consider refactoring them,” says Rani Osnat, VP of strategy at Aqua Security. “Start with simpler, smaller applications. Taking a 2GB monolithic application and containerizing it as is will create issues in deployment and runtime since the container stack wasn’t made for this use case. Try breaking up your application into smaller logical services, so that you’ll benefit from better speed of deployment, resilience, and updatability.”
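As a sketch of where Osnat’s advice leads: once a slice of the monolith becomes its own small service, it ships as a small image with its own rollout cadence, and the speed, resilience, and updatability benefits show up directly in the manifest. Everything below is hypothetical (service name, image, ports):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: catalog            # hypothetical service carved out of the monolith
spec:
  replicas: 3
  selector:
    matchLabels:
      app: catalog
  strategy:
    type: RollingUpdate    # small services update quickly and independently
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
  template:
    metadata:
      labels:
        app: catalog
    spec:
      containers:
        - name: catalog
          image: registry.example.com/catalog:2.3.1   # hypothetical image
          ports:
            - containerPort: 8080
          readinessProbe:   # resilience: receive traffic only when healthy
            httpGet:
              path: /healthz
              port: 8080
```

A 2GB monolith packed into one container gets none of this: one giant image, one slow rollout, and a restart that takes the whole application down.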
Similarly, Mugrage advises seeing this less as a migration and more as a breakup. That typically entails breaking up the larger application into smaller components, aka microservices (or at least a microservice-like approach).
“Don’t think of it as moving to containers and Kubernetes; think of it as decomposing the application,” Mugrage says. (Mugrage points to this article by his colleague Zhamak Dehghani for some additional advice on doing this.) “The real gains to speed of delivery, allowing evolution of your architecture, and all of the other reasons for moving to containers and Kubernetes are realized not from that movement, but the architecture which enabled it.”
[ Why does Kubernetes matter to IT leaders? Learn more about Red Hat's point of view. ]
3. Consider whether to start with an older application
“Legacy” is sometimes a dirty word in IT, but that connotation isn’t always fair. (For starters, so-called legacy apps remain critical to the operations of many businesses, and they will for many years to come.) Here’s a specific way that misconception can steer you wrong: you might be better off picking an older, but still valuable, application to migrate to containers and Kubernetes.

“You may have newer applications which are easier to maintain and update than some of your older applications,” Mugrage says. “It’s natural to decide to move these newer ones first because it may be easier. While that may be true, the benefits aren’t as great.”
Indeed, there may be a greater payoff in moving an older app that is causing operational pain, assuming you’re willing and able to follow the advice above in #2.
“Consider starting by decomposing parts of your older applications where you need to be able to react faster than you’re currently able,” Mugrage advises. You can deploy these new services as containers on Kubernetes, he says, without revamping the entire application.
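Mugrage’s incremental approach resembles what is often called the strangler fig pattern: the newly extracted service runs on Kubernetes and receives only the traffic for the paths it now owns, while everything else still reaches the legacy application. One way to express that routing is a Kubernetes Ingress; this is a hedged sketch, with hostnames and service names hypothetical:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: storefront
spec:
  rules:
    - host: shop.example.com        # hypothetical host
      http:
        paths:
          - path: /reports          # the decomposed piece, now on Kubernetes
            pathType: Prefix
            backend:
              service:
                name: reports-svc
                port:
                  number: 8080
          - path: /                 # everything else still hits the legacy app
            pathType: Prefix
            backend:
              service:
                name: legacy-proxy  # e.g. a Service proxying to the old system
                port:
                  number: 80
```

As more parts of the old application are decomposed, their paths move to the top of the list, and the legacy backend shrinks without a big-bang rewrite.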
What about preparing for troubleshooting? Let’s delve into that: