Containers and orchestration make up a growing part of IT’s present and future. The majority of IT leaders surveyed in Red Hat’s 2021 State of Enterprise Open Source report said they anticipated increasing container usage in their organizations during the next 12 months: 30 percent expected a significant increase, and 42 percent expected a slight increase. Kubernetes adoption is rising alongside that trend.
Containerization and orchestration also overlap two other key trends right now: application migration and application modernization. Migration typically refers to moving workloads from one environment (today, usually a traditional datacenter) to another (usually a cloud platform). Modernization, often used as an umbrella term, refers to the various methods of adapting applications for a cloud environment. These run the gamut from leaving the code largely as-is to a significant (or total) overhaul in order to optimize a workload for cloud-native technologies.
Red Hat technology evangelist Gordon Haff notes that this spectrum bears out in Red Hat’s enterprise open source research: The 2020 report found a healthy mix of strategies for managing legacy applications, including “leave as-is” (31 percent), “update or modernize” (17 percent), “re-architect as cloud-enabled” (16 percent), and “re-architect as cloud-native” (14 percent).
“Leave as-is” is self-explanatory. “To whatever large degree containers are the future, you don’t need to move everything there just because,” Haff says.
But as Red Hat’s research and other industry data reflect, many organizations are indeed containerizing some workloads. Containers can play a role in any of the other three approaches to legacy applications.
Migrating Java workloads to containers: 3 key questions to ask
What happens when you add another variable to big-picture discussions of containerization and app modernization, such as a particular programming language? Take Java: It qualifies as a “legacy” language by some definitions, yet it remains widely used in Fortune 500 organizations’ enterprise software plans. When IT teams evaluate their existing application portfolios for cloud and containerization fit, Java workloads are often part of that mix.
On one level, containerizing Java workloads is not much different than other migration or modernization projects: Some applications are better fits than others. Still, there are some Java-specific issues to consider as well. Let’s look at three things to keep in mind when migrating Java workloads to containers.
1. Is your Java workload a fit for containerization?
As Haff notes, “just because” is not a productive goal for containerization. If something’s not broken, why are you fixing it?
That said, there are plenty of reasons to containerize, and plenty of workloads that qualify as “fit” for containerization. Vladimir Sinkevich, head of Java development at ScienceSoft, says that many server-side Java applications are good candidates for containerization.
“Some features are more difficult to containerize [such as] integration with specific hardware like GPU or serial port devices, but containerization [can] still [be] worth it,” Sinkevich says.
Your evaluation process also should include determining the best migration path forward, as represented by that spectrum of strategies covered in the aforementioned Red Hat report. While there is a lot of blanket advice around application migration and modernization, the reality is this decision must be made in the context of your specific applications and teams. There are some philosophical differences of opinion in the industry about the “right” strategy: for example, migrating workloads largely as-is (often referred to as “lift and shift”) and modernizing later, versus modernizing sooner.
Rohan Kumar, project lead for Eclipse JKube, an open source tool that helps move existing Java applications onto Kubernetes and Red Hat OpenShift, points to the growing popularity of microservices architecture in cloud-native development.
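As a sketch of what adopting JKube looks like in practice, a Maven project can pull in its Kubernetes plugin with a small pom.xml addition (the version number here is only an example; check the JKube project for the current release):

```xml
<!-- Illustrative JKube setup for a Maven-built Java application.
     The version is an example; use the latest release. -->
<plugin>
  <groupId>org.eclipse.jkube</groupId>
  <artifactId>kubernetes-maven-plugin</artifactId>
  <version>1.16.2</version>
</plugin>
```

With the plugin in place, `mvn k8s:build k8s:resource k8s:apply` builds a container image, generates Kubernetes manifests, and applies them to a cluster, without requiring a hand-written Dockerfile.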
“Evaluating whether our workload is a good fit for containerization is pretty much the same as evaluating whether our workload is implementing microservices architecture principles,” Kumar says.
App migration and modernization experts commonly note that speed is one of the determining factors when deciding on a migration path, though it’s certainly not the only one. A team operating under a “get to the cloud ASAP” executive mandate, for example, might be more inclined to opt for the “as is” approach. A team that has time to (re)design and (re)build their application in cloud-enabled or cloud-native fashion might unlock more value by going with one of those routes.
“Microservices architecture provides valuable principles and practices for designing [and] changing distributed applications,” Kumar says. “Even if you migrate monolith workloads to containers, they won’t be able to take full advantage of container-native capabilities like horizontal scaling and high availability.”
2. Do you understand the versioning, CPU, and memory implications?
All three of these implications come into play when containerizing Java workloads, according to Daniel Hinojosa, Java instructor at DevelopIntelligence – and often at the same time.
“We have to ensure that the right versions are being used to develop the application when it is containerized,” Hinojosa says. “One of the issues when it comes to Java is that older JDK versions aren’t ‘container aware,’ since some containers have memory and CPU caps. Application developers should be aware of the limitations of where they are deploying those containers, and should understand some of the JDK switches that can be utilized when used by [a] container runtime.”
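To make the “container aware” point concrete, here is a minimal Java snippet that prints what the JVM thinks its resources are. On a container-aware JDK (8u191+ or 10+), these values reflect the container’s cgroup CPU and memory caps; on older JDKs they report the host machine’s resources, which can lead to over-allocation and out-of-memory kills:

```java
// Minimal check of what the JVM "sees" inside a container.
// Run it inside and outside a container to compare the reported limits.
public class ContainerLimits {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        System.out.println("Available processors: " + rt.availableProcessors());
        System.out.println("Max heap (MiB): " + rt.maxMemory() / (1024 * 1024));
    }
}
```

Among the JDK switches Hinojosa alludes to, `-XX:MaxRAMPercentage` (available since JDK 8u191 and JDK 10) is a common one: for example, `java -XX:MaxRAMPercentage=75.0 ...` tells the JVM to size its heap as a fraction of the container’s memory limit rather than a fixed value.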
Just as a tool like JKube can help modernize Java workloads for cloud-native platforms, tools can help address Java-specific issues like CPU and memory. In general, you’ll need to ensure you’ve accounted for the various runtime dependencies of your Java workloads as you containerize them.
“When migrating Java-based workloads to containers, it’s really important to identify various runtime dependencies of your application, [such as] JVM, databases, et cetera,” Kumar from JKube says. “These dependencies would act as different layers when packaging Java applications into a container image. You would need to plan out [everything from] how to start the application to network management.”
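One way to see those dependencies as layers is a Dockerfile that copies dependencies and application code separately, so the rarely changing layers cache well. This is an illustrative sketch; the base image tag, file paths, and port are assumptions about a typical Maven build:

```dockerfile
# Base layer: OS plus JVM (tag is an example; pick a supported JRE)
FROM eclipse-temurin:17-jre
WORKDIR /app
# Dependency layer: third-party libraries change rarely, so they cache well
COPY target/libs/ ./libs/
# Application layer: your code changes on every build
COPY target/app.jar ./app.jar
# Network planning: document the port the application listens on
EXPOSE 8080
# How to start the application
ENTRYPOINT ["java", "-jar", "app.jar"]
```

Ordering the layers from least to most frequently changed keeps image rebuilds fast, since only the layers below a change need to be rebuilt.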
3. What benefits do you want to achieve?
Once more for the folks in the back: “Just because” isn’t a containerization strategy. Containerizing Java workloads – as with most other application types – should serve particular technical and/or business goals.
This also should inform your choice of migration path. If it’s not broken, don’t fix it. If it is broken, well, you should probably fix (or replace) it. Hinojosa notes that this consideration in particular applies to (re)architecture decisions.
“One piece of sage advice is: ‘If your product didn’t work in the monolith, it will not work as a microservice,’” Hinojosa says.
There are various benefits of containerization that become tangible drivers for a migration project. The important thing is that you and your team understand the “why” behind the effort.
“Containers help in many ways – both from the software engineering and DevOps perspective,” Sinkevich says. “They make it easier to deploy a new software version, roll back if something goes wrong, [and] scale and migrate across different environments. Containers also introduce an extra level of security, and it’s easier to comprehensively test a containerized application for vulnerabilities.”
5 benefits of migrating Java workloads to containers
Kumar describes five key benefits of migrating Java workloads to containers. Each is an example of value that can be translated into a strategic goal.
1. Consistency: “When you containerize your workloads, you take a binary snapshot of your operating system, the JVM, and other dependencies and package them into a container image,” Kumar says. “From developers’ perspective, all of this is guaranteed to be the same no matter where the workload gets deployed. This increases the productivity of the DevOps team as well, since [less] time [is spent] debugging differences in environments while resolving failures after upgrades.”
2. Resource utilization: “Containers require fewer system resources than traditional virtual machine environments because they virtualize CPU, memory, and network resources at the operating system level,” Kumar says.
3. Portability and flexibility: “Containers can be run virtually anywhere, hence allowing applications to be deployed easily to multiple different operating systems and hardware platforms,” Kumar says.
This is also why containers and orchestration are common strategies for managing hybrid cloud and multi-cloud environments.
4. Scalability and autoscaling: Kumar notes that a VM requires starting an entire OS before any work can be done. Since containers are abstracted away from the host OS, they can start and stop in seconds. (Need a refresher? Read also: Kubernetes autoscaling, explained.)
“This allows better scalability, making it possible to create more replicas of containers as per requirements,” Kumar says. “Container orchestration platforms like Kubernetes allow you to specify replication policies, then scale up and down as required.”
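A replication policy of the kind Kumar describes can be expressed as a Kubernetes HorizontalPodAutoscaler. The manifest below is a hedged sketch; the Deployment name, replica counts, and CPU target are illustrative values you would tune for your own workload:

```yaml
# Illustrative autoscaling policy for a containerized Java app.
# Names and thresholds are examples, not recommendations.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: java-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: java-app          # the Deployment running your Java containers
  minReplicas: 2            # keep a floor for availability
  maxReplicas: 10           # cap growth to control cost
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 75   # add replicas above 75% average CPU
```

Kubernetes then adds or removes replicas automatically as load crosses the target, which is the “scale up and down as required” behavior Kumar mentions.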
5. Development and deployment velocity: Containerization can be a key part of delivering new features and updates faster and more frequently. It also pairs well with CI/CD pipelines.
“Fine-grained containers enable you to achieve rapid cycle changes,” Kumar says. “Using containers as a unit of deployment makes CI/CD workflows uniform and frictionless.”