How to approach infrastructure automation

You can argue that, prior to containers, infrastructure automation was something of a band-aid. But today, IT can pursue this goal more realistically. Let's explore three fundamentals, plus advice on getting started.

Standardization and automation are certainly not new concepts in IT. So why is infrastructure automation such a hot topic right now?

The short answer: Containers, orchestration, and other modern technologies have enabled an evolution of infrastructure automation capabilities that, among other benefits, let standardization actually be enforced.

“One can argue that, prior to containers, the standardization of infrastructure and its automation was something of a band-aid,” Red Hat technology evangelist Gordon Haff explains. “Sure, we had standard operating environments (SOEs) and configuration management tooling that automated the provisioning of those SOEs and their ongoing monitoring. But there were still a lot of ‘snowflake’ servers needed for particular tasks, and deployed images could still drift over time even if configuration management software tried to keep them in compliance.”

That lingering drift took some wind out of the proverbial sails of the long-running pursuit of effective infrastructure automation. Containerization and orchestration have reinvigorated the journey.

“With containers and immutable infrastructure that changed – now deployed instances couldn’t drift,” Haff says. So, no: “Infrastructure automation isn’t new. But many of the ways we go about doing it today are.”

[ Read also: 5 open source Kubernetes projects to watch in 2021 and 8 automation trends to watch in 2021. ]

3 fundamentals of modern infrastructure automation

We’re here to cover three overlapping fundamentals that underpin modern approaches to infrastructure automation. Consider it a primer aimed primarily at IT leaders and teams still getting up to speed on the key concepts behind new ways of approaching infrastructure automation using cloud-native technologies – especially useful in any hybrid cloud or multi-cloud scenario – with some advice on getting started.

1. Key concept: Containers and orchestration

While the cloud-native ecosystem is already massive (and still growing), when we talk about infrastructure automation in this context, there are two major pieces to keep top of mind: containers and Kubernetes. (There are multiple container runtimes and multiple orchestration options, but for the latter, we’ll use Kubernetes as the default choice since it has become the clear-cut leader in container orchestration, which is a critical component of modern infrastructure automation.)

[ How can automation free up your staff's time for innovation? Get the free eBook: Managing IT with Automation. ]

These cloud-centric technologies paved the way for what Haff mentioned earlier: immutable infrastructure. This means infrastructure that, once deployed, is never changed in production – instead, it’s replaced with a new version as needed. And it can be spun up (and down) automatically using tools like Kubernetes, which allows administrators to declare desired states for their applications and infrastructure that the orchestration platform then manages in a highly automated fashion. It is the literal manifestation of infrastructure automation, from provisioning and resource allocation to deployments and more.
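
To make that declarative idea concrete, here is a minimal sketch using the official Kubernetes Python client: it declares a desired state of three replicas running a specific, pinned image tag and leaves it to the platform to reconcile reality toward that declaration. The deployment name, namespace, and image reference are illustrative assumptions, not anything prescribed here.

```python
# A minimal sketch, assuming the official `kubernetes` Python client
# (pip install kubernetes) and a working kubeconfig; the deployment name,
# namespace, and image reference below are hypothetical.
from kubernetes import client, config

config.load_kube_config()  # use the current kubeconfig context

desired_state = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "web"},
    "spec": {
        "replicas": 3,  # the declared state: keep three instances running
        "selector": {"matchLabels": {"app": "web"}},
        "template": {
            "metadata": {"labels": {"app": "web"}},
            "spec": {
                "containers": [{
                    "name": "web",
                    # Pinned image tag: a new release means a new tag and a
                    # new rollout, not an in-place change to running servers.
                    "image": "registry.example.com/web:1.4.2",
                }]
            },
        },
    },
}

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=desired_state)
```

Releasing a new version then means declaring a new image tag and letting the platform replace running instances rather than patching them in place – the essence of immutable infrastructure.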

[ Need to explain Kubernetes' benefits to non-techies? Read also: How to explain Kubernetes in plain English. ]

Microservices architecture

Another key concept here is microservices architecture, which essentially means breaking down an application into smaller, discrete components that work together as part of the larger system. Among other benefits, microservices allow teams to manage those smaller services independently, rather than having to go back into (and redeploy) the entire application every time a change is necessary. Microservices pair very well with containers, in that each service can be containerized independently. It should be noted that not every existing application makes a great fit for microservices architecture, and that’s OK.
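
To picture what one of those smaller, discrete components looks like, here is a minimal, hypothetical sketch of a single service using only Python's standard library. Each service like this gets its own codebase and its own container image, is reached over the network rather than imported as a library, and can be updated and redeployed without touching the rest of the system.

```python
# A hypothetical "inventory" microservice: one small, self-contained HTTP
# service that owns its own data and can be containerized, deployed, and
# scaled independently of the rest of the application.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

STOCK = {"sku-123": 42, "sku-456": 7}  # stand-in for this service's own datastore

class InventoryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        sku = self.path.strip("/")
        body = json.dumps({"sku": sku, "on_hand": STOCK.get(sku, 0)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Other services (checkout, catalog, a micro-frontend) call this endpoint
    # over HTTP instead of linking against its code.
    HTTPServer(("0.0.0.0", 8080), InventoryHandler).serve_forever()
```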

Some advice from folks who’ve been there: If you’re starting from a largely monolithic application portfolio, don’t approach infrastructure automation as something that will be a short-term project. Instead, think of it as a piece-by-piece process, especially if you’re breaking existing applications into microservices.

“The road to full immutable infrastructure can take time, especially for organizations that have applications that pre-date the proliferation and popularity of container-based applications,” says Michael Fisher, group product manager at OpsRamp. (Editorial voiceover: That means most organizations.) “However, this does not mean that architecture planning and development are at a standstill until the whole application has been configured to run on standalone micro-frontends and backends. Teams should prioritize and containerize services iteratively until the entire application is transitioned.”

There is no switch to flip that takes you from traditional infrastructure management to automation, so there’s no value in approaching infrastructure automation with that unrealistic goal in mind.

Rather, modern approaches to infrastructure automation depend on some corresponding shift toward cloud platforms and tools. But you don’t need to get there overnight.

"To understand what to containerize, you need to step back and understand the core services and building blocks of your application."

At the outset, Fisher sees this stage as a creative process as much as a technical one: “To understand what to containerize, you need to step back and understand the core services and building blocks of your application.” 

There are many perspectives on the right path to containerization, especially if you are also refactoring an application (or multiple applications) as microservices. Fisher’s a fan of starting on the front-end and working your way down the stack from there.

“One of the best ways to approach this is to understand where your end users most frequent[ly visit] in the UI/UX, and move down the stack,” Fisher says. “This approach is often referred to as ‘micro-frontends,’ or the front-end analog of back-end microservices. Once you have an understanding of what needs to be containerized, there are a plethora of tools to help with the horizontal scaling of the infrastructure that run the services – Kubernetes being the most popular.” (We’ll get back to tools in a moment.)
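
As a small illustration of that horizontal scaling, here is a hedged sketch using the official Kubernetes Python client to attach a HorizontalPodAutoscaler to an existing (hypothetical) "web" Deployment; Kubernetes then adds or removes replicas based on observed CPU load, with no manual intervention.

```python
# A minimal sketch, assuming the official `kubernetes` Python client, a
# cluster with a metrics server, and an existing Deployment named "web"
# (all hypothetical here).
from kubernetes import client, config

config.load_kube_config()

hpa = {
    "apiVersion": "autoscaling/v1",
    "kind": "HorizontalPodAutoscaler",
    "metadata": {"name": "web"},
    "spec": {
        "scaleTargetRef": {"apiVersion": "apps/v1", "kind": "Deployment", "name": "web"},
        "minReplicas": 2,
        "maxReplicas": 10,
        "targetCPUUtilizationPercentage": 70,  # scale out/in around this target
    },
}

client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa
)
```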

2. Key concept: CI/CD, build pipelines, and build artifacts

Of course, this doesn’t all happen by magic. Even once you’ve begun containerizing suitable workloads and learning Kubernetes or using a commercial Kubernetes platform, there’s still work to be done.

[ Read also: OpenShift and Kubernetes: What's the difference? ]

With immutable infrastructure, you need to stop thinking in traditional terms like servers, per se, even though they’re still technically relevant. Instead, start thinking in terms of build pipelines and what comes out the other side: build artifacts. The latter are what you’re automatically deploying, retiring, and/or replacing with immutable infrastructure.

CI/CD has become a critical practice and set of tools in this regard; the build phase is essentially the foundation of a robust CI/CD pipeline.

[ Related read: How to set up a CI/CD pipeline. ]

The pipeline concept, in general, is a useful way of thinking about infrastructure automation: Once it’s in place, your code and everything it will require to run properly should move through each phase of the pipeline – from build to test to security to deployment – in a highly automated manner, with people actively in the loop only at steps you’ve specified or when something is not up to standards (which you’ve also specified). 

Essentially, a CI/CD pipeline is how containerized applications travel from code to repository or production with as little human effort as possible. (Don’t kid yourself, though: Talented people are still required to make this work.)
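
To make the pipeline-with-gates idea concrete, here is a purely conceptual sketch in plain Python. The stage functions are placeholders for real build, test, scan, and deploy tooling; the point is that stages run automatically in order, and people enter the loop only at gates you have explicitly defined or when a stage fails.

```python
# A conceptual sketch only -- real pipelines are defined in CI/CD tooling, not
# hand-rolled scripts. The stages and their checks are hypothetical.
from typing import Callable, NamedTuple

class Stage(NamedTuple):
    name: str
    run: Callable[[], bool]       # returns True when the stage passes
    needs_approval: bool = False  # a human gate you explicitly opt into

def run_pipeline(stages: list[Stage]) -> bool:
    for stage in stages:
        if stage.needs_approval and input(f"Approve '{stage.name}'? [y/N] ").strip().lower() != "y":
            print(f"Stopped at human gate: {stage.name}")
            return False
        if not stage.run():
            print(f"Stage failed, people take over: {stage.name}")
            return False
        print(f"Passed: {stage.name}")
    return True

if __name__ == "__main__":
    run_pipeline([
        Stage("build image", lambda: True),
        Stage("unit tests", lambda: True),
        Stage("security scan", lambda: True),
        Stage("deploy to production", lambda: True, needs_approval=True),
    ])
```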

Jesse Stockall, chief architect at Snow Software, explains some important approaches to managing containerized applications and immutable infrastructure. These speak to how containers and orchestration prevent the kind of drift and snowflake deployments that were still possible (if not probable) even with SOEs, among other benefits.

“Container images should be built from trusted base containers using a repeatable, automated build pipeline that uses a private image repository for the build output,” Stockall says. “For added control, the base images can also be copied to the private registry and access to public registries blocked. The build system should also detect when newer versions of base images are available so that the changes can be vetted and the image configuration updated.”
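
As one rough illustration of that last point – detecting when a mirrored base image has fallen behind upstream – here is a hedged sketch that shells out to the skopeo CLI (assumed to be installed) and compares manifest digests. The registry locations are hypothetical.

```python
# A rough sketch, assuming the `skopeo` CLI is available on the build system;
# the upstream and private registry references below are hypothetical.
import json
import subprocess

def image_digest(image_ref: str) -> str:
    """Return the manifest digest reported by `skopeo inspect`."""
    out = subprocess.run(
        ["skopeo", "inspect", image_ref],
        check=True, capture_output=True, text=True,
    ).stdout
    return json.loads(out)["Digest"]

upstream = image_digest("docker://registry.access.redhat.com/ubi9/ubi:latest")
mirrored = image_digest("docker://registry.example.com/base/ubi9:latest")

if upstream != mirrored:
    print("Newer base image available upstream: vet it, re-mirror it, and rebuild.")
```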

Other key elements of a CI/CD pipeline include testing and validation/compliance. Done right, security is more integral to each of these stages (instead of being a last-minute check).

“Your container registry should perform scanning for known vulnerable software and block non-desirable images from being uploaded,” Stockall says. “A linter or static analysis of the image configuration and deployment manifest should be used to detect common misconfigurations and omissions such as a missing version from the base image or missing resource limits for a deployment.”
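
A full-featured linter covers far more ground than this, but a minimal sketch shows the flavor of those checks: load a (hypothetical) deployment manifest and flag containers whose image tag isn't pinned or that have no resource limits. Assumes the PyYAML package.

```python
# A minimal sketch of the kind of static checks described above, assuming
# PyYAML (pip install pyyaml); the manifest path is hypothetical, and real
# linters apply many more rules than these two.
import yaml

def lint_deployment(path: str) -> list[str]:
    problems = []
    with open(path) as f:
        manifest = yaml.safe_load(f)
    containers = (
        manifest.get("spec", {})
        .get("template", {})
        .get("spec", {})
        .get("containers", [])
    )
    for c in containers:
        image = c.get("image", "")
        if ":" not in image or image.endswith(":latest"):
            problems.append(f"{c.get('name')}: image '{image}' is not pinned to a version")
        if not c.get("resources", {}).get("limits"):
            problems.append(f"{c.get('name')}: no resource limits set")
    return problems

if __name__ == "__main__":
    for problem in lint_deployment("deployment.yaml"):
        print("WARN:", problem)
```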

In news that should surprise no one, there’s effort involved in getting this set up. Just as Fisher advised earlier in terms of containerization and microservices, don’t get bent out of shape trying to make this happen all at once.

3. Key concept: Cloud-native tools

We’ve touched on tools, but this is its own concept: While some principles remain the same, monolithic tools and processes won’t necessarily get you where you want to go in terms of infrastructure automation.

It’s kind of like security: If you’re just using the same perimeter firewall and endpoint antivirus software that you were running a decade ago – and haven’t updated the playbook at all – well, good luck with all that.

The same applies to infrastructure. We’ve evolved into a hybrid cloud world that encompasses cloud-native development as well as the many workloads better suited to private cloud and bare-metal infrastructure. And there’s an abundance of established and emerging tools to help manage it all.

“This sea change has led to a rethinking of infrastructure automation,” Haff says. “Kubernetes has become the standard for container orchestration. This has, in turn, led to a variety of automation tools that are specifically designed for a containerized world.”

The word “ecosystem” tends to be used loosely in the tech world. But it’s living up to its definition when it comes to infrastructure automation in the cloud age, from build to security to deploy. One project or platform feeds off another, especially when they are open source. This creates a snowball effect for infrastructure automation.

“Projects in the CI/CD space are rethinking build and deployment pipelines in the context of Kubernetes-native development patterns and processes,” Haff says. “This includes Tekton Pipelines, as well as newer projects specifically focused on deployment automation, like Argo CD and Keptn. We also see many new security tools, such as from Aqua and Snyk, that are optimized for this relatively new type of infrastructure.”

[ How can you automate more? Get the free eBooks: Getting Started with Kubernetes and O'Reilly: Kubernetes Operators: Automating the Container Orchestration Platform. ]

Kevin Casey writes about technology and business for a variety of publications. He won an Azbee Award, given by the American Society of Business Publication Editors, for his InformationWeek.com story, "Are You Too Old For IT?" He's a former community choice honoree in the Small Business Influencer Awards.