What Clarence Birdseye can teach us about container security

You trust your supermarket to use partners with reliable supply chains for frozen foods. Ask the same of your container sources.

What do frozen peas and software have in common? If you’re working with containers, it’s worth thinking about their associated supply chains with Clarence Birdseye’s lessons in mind.

Clarence Birdseye is generally considered to be the founder of the modern frozen food industry. In 1925, after a couple of false starts, he moved his General Seafood Corporation to Gloucester, Massachusetts. There, he used his newest invention, the double belt freezer, to freeze fish quickly using a pair of brine-cooled stainless steel belts. This and other Birdseye innovations centered on the idea that flash-freezing meant that only small ice crystals could form, and therefore cell membranes were not damaged. Over time, these techniques were applied to a wide range of food — including the ubiquitous frozen peas.

Frozen food also depends on a reliable supply chain between the original source of the food and the consumer, in order to maintain the right temperature for the package. Double belt freezers aren’t much use if the fish spoils before it can be frozen. Nor does the task of food preservation end when a truck leaves the factory loading dock. Packaging and supply chains need to reliably protect, secure, and preserve the overall integrity of contents until they’re in a consumer’s hands and even beyond.


In the software world, the packaging of applications and services can likewise protect and secure their contents throughout their life cycle.

Security gets distributed

Historically, security has often been approached as a centralized function. An organization might have established a single source of truth for user, machine, and service identities across an entire environment. These identities described which information users were authorized to access and which actions they were allowed to perform.

Today, the situation is often more complicated. It’s still important to have access control policies that govern user identities, delegating authority as appropriate and establishing trusted relationships with other identity stores as needed. However, components of distributed applications may be subject to multiple authorization systems and access control lists.

Real-time monitoring and enforcement of these policies make it easier to address performance and reliability issues before they become serious. They also help detect and mitigate potential compliance issues.
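
To make this concrete, here's a minimal Python sketch of rule-based policy evaluation across services. The ServiceStatus fields and the three example policies are illustrative assumptions, not any particular product's rules; in practice the status data would come from your monitoring and identity systems.

```python
# Minimal sketch of rule-based policy evaluation over per-service status.
# The fields and policies are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class ServiceStatus:
    name: str
    error_rate: float        # fraction of failed requests
    tls_enabled: bool
    last_audit_days: int

POLICIES = [
    ("error rate under 1%", lambda s: s.error_rate < 0.01),
    ("TLS required", lambda s: s.tls_enabled),
    ("audited within 90 days", lambda s: s.last_audit_days <= 90),
]

def evaluate(service: ServiceStatus) -> list[str]:
    """Return the names of policies this service violates."""
    return [name for name, check in POLICIES if not check(service)]

if __name__ == "__main__":
    svc = ServiceStatus("billing-api", error_rate=0.02,
                        tls_enabled=True, last_audit_days=120)
    for violation in evaluate(svc):
        print(f"{svc.name}: policy violated - {violation}")
```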

Securing for the whole life cycle

Automation reduces the amount of sysadmin work required. It's also a way to document processes and cut down on error-prone manual procedures. After all, human error is consistently cited as a major cause of security breaches and outages.

Operational monitoring and remediation need to continue throughout the life cycle of a system, starting with provisioning. As with other aspects of ongoing system management, maintaining complete reporting, auditing, and change history is a must.
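
As a rough sketch of what documented, auditable automation can look like, the following Python example performs an idempotent provisioning step and appends every action to a change-history log. The JSON-lines log file and the ensure_directory step are hypothetical placeholders for real provisioning tooling.

```python
# Minimal sketch of provisioning with a change-history trail.
# The audit log path and the provisioning step are illustrative assumptions.
import json
import getpass
from datetime import datetime, timezone
from pathlib import Path

AUDIT_LOG = Path("provisioning-audit.jsonl")

def record_change(action: str, target: str, result: str) -> None:
    """Append an auditable record of every provisioning action."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "operator": getpass.getuser(),
        "action": action,
        "target": target,
        "result": result,
    }
    with AUDIT_LOG.open("a") as f:
        f.write(json.dumps(entry) + "\n")

def ensure_directory(path: str) -> None:
    """Idempotent provisioning step: create a directory if missing."""
    p = Path(path)
    if p.exists():
        record_change("ensure_directory", path, "already present")
    else:
        p.mkdir(parents=True)
        record_change("ensure_directory", path, "created")

if __name__ == "__main__":
    ensure_directory("/tmp/app-data")
```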

The need for security policies and plans doesn’t end when an application is retired. Data associated with the application may need to be retained for a period, or personally identifiable information (PII) may need to be scrubbed, depending upon applicable regulations and policies.
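
As one hedged illustration, the sketch below replaces assumed PII fields in retained records with truncated one-way hashes. Which fields count as PII, and whether hashing is an acceptable scrubbing technique at all, depends entirely on the regulations and policies that apply to you.

```python
# Minimal sketch of scrubbing assumed-PII fields from retained records.
# The field list and the hashing approach are illustrative assumptions;
# real anonymization requirements come from your applicable regulations.
import hashlib

PII_FIELDS = {"name", "email", "phone", "address"}

def scrub(record: dict) -> dict:
    """Replace assumed-PII values with truncated one-way hashes; whether
    this is sufficient depends on the applicable rules."""
    cleaned = {}
    for key, value in record.items():
        if key in PII_FIELDS and value is not None:
            cleaned[key] = hashlib.sha256(str(value).encode()).hexdigest()[:12]
        else:
            cleaned[key] = value
    return cleaned

if __name__ == "__main__":
    print(scrub({"id": 42, "email": "pat@example.com", "last_order": "2015-06-01"}))
```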

Cloud-native environments: Special demands

With traditional long-lived application instances, maintaining a secure infrastructure also meant analyzing and automatically correcting configuration drift to enforce the desired host end-state. This can still be an important requirement.
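
A minimal sketch of that drift-correction idea: compare a host's observed settings against the desired end-state and report anything that has drifted. The settings shown are assumptions for illustration, not a recommended baseline.

```python
# Minimal sketch of configuration-drift detection against a desired
# host end-state. The settings and values below are illustrative.
DESIRED_STATE = {
    "sshd.PermitRootLogin": "no",
    "selinux.mode": "enforcing",
    "auditd.enabled": "yes",
}

def detect_drift(actual: dict) -> dict:
    """Return {setting: (desired, actual)} for every drifted value."""
    return {
        key: (want, actual.get(key))
        for key, want in DESIRED_STATE.items()
        if actual.get(key) != want
    }

if __name__ == "__main__":
    observed = {"sshd.PermitRootLogin": "yes", "selinux.mode": "enforcing"}
    for setting, (want, got) in detect_drift(observed).items():
        print(f"DRIFT {setting}: desired={want!r} actual={got!r}")
```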

However, given the large numbers of short-lived instances in cloud-native environments, it's equally important to build security in from the start. For example, you might establish and enforce rule-based policies around the services in each layer of a containerized software stack.
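
A simple admission check might look something like the following sketch, which evaluates an image's metadata against a few rules before it is allowed onto the platform. The metadata fields and the approved-registry list are illustrative assumptions.

```python
# Minimal sketch of rule-based policies applied to container image
# metadata at admission time. Field names and registries are assumptions.
APPROVED_REGISTRIES = {"registry.redhat.io", "registry.internal.example.com"}

def check_image(meta: dict) -> list[str]:
    """Return a list of policy violations for one image."""
    violations = []
    registry = meta.get("source_registry", "")
    if registry not in APPROVED_REGISTRIES:
        violations.append(f"base image pulled from unapproved registry {registry!r}")
    if meta.get("user") in (None, "root", "0"):
        violations.append("container runs as root")
    if not meta.get("maintainer_label"):
        violations.append("missing maintainer label")
    return violations

if __name__ == "__main__":
    image = {"source_registry": "docker.io", "user": "root"}
    for v in check_image(image):
        print("REJECT:", v)
```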

Taking a risk management approach to security goes beyond putting an effective set of technologies in place. It also requires considering the software supply chain and having a process in place to address issues quickly.

For example, it’s important to validate that software components come from a trusted source. Containers provide a case in point. Their very simplicity can turn into a headache if IT doesn’t ensure that all software running in a container comes from trusted sources and meets required standards of security and supportability.
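
One common way to do that validation is to pin images to known-good digests and refuse to deploy anything that doesn't match. The sketch below assumes the skopeo CLI is installed; the image name is a real public image, but the pinned digest is a placeholder you would replace with a digest you have actually verified.

```python
# Minimal sketch of validating an image against a pinned, known-good digest
# before deployment. Assumes skopeo is installed; the all-zero digest below
# is a placeholder, not a real value.
import json
import subprocess

PINNED = {
    "registry.access.redhat.com/ubi8/ubi:latest":
        "sha256:0000000000000000000000000000000000000000000000000000000000000000",
}

def current_digest(image: str) -> str:
    """Ask skopeo for the digest the registry currently serves."""
    out = subprocess.run(
        ["skopeo", "inspect", f"docker://{image}"],
        capture_output=True, text=True, check=True,
    )
    return json.loads(out.stdout)["Digest"]

def is_trusted(image: str) -> bool:
    return current_digest(image) == PINNED.get(image)

if __name__ == "__main__":
    for image in PINNED:
        print(image, "OK" if is_trusted(image) else "DIGEST MISMATCH - do not deploy")
```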

What’s in that container?

Managing what goes into software containers is much like running a large and busy port with thousands of shipping containers arriving each day. How does a port authority manage the risk of letting a malicious or illegal container into the port? By looking at which ship it arrived on and its manifest, by using sniffer dogs and other detection equipment, and even by physically opening and scanning the contents.

The verification of shipping container contents is a serious public policy concern because many inspection processes are largely manual and don’t scale well. Fortunately, verifying the contents of software containers and packages is more amenable to automation and other software-based approaches.
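
As a simple illustration of that automation, the sketch below compares the packages found in an image against a list of versions with known vulnerabilities. Both the package manifest and the vulnerable-version list are stand-ins; in practice they would come from an image scanner and a CVE feed.

```python
# Minimal sketch of automated image content inspection. The manifest and
# the known-vulnerable list are illustrative; real data would come from
# a scanner and a vulnerability feed.
KNOWN_VULNERABLE = {
    ("openssl", "1.0.1e"),
    ("bash", "4.2.45"),
}

def inspect_contents(manifest: list[tuple[str, str]]) -> list[str]:
    """Return human-readable findings for packages needing attention."""
    return [
        f"{name}-{version} has known vulnerabilities; rebuild the image"
        for name, version in manifest
        if (name, version) in KNOWN_VULNERABLE
    ]

if __name__ == "__main__":
    image_manifest = [("openssl", "1.0.1e"), ("glibc", "2.17")]
    for finding in inspect_contents(image_manifest):
        print(finding)
```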

Most of the vulnerable images in public repositories aren’t malicious; nobody put the vulnerable software there on purpose. Someone created the image at some point, and new security vulnerabilities were discovered after it was added to the registry. Unless someone is paying attention and able to update those images, the inevitable result is a registry containing a large number of vulnerable images.

If you just pull a container from one of these registries and place it into production, you may unwittingly be introducing insecure software into your environment. You trust your supermarket to use partners with reliable supply chains for their frozen foods. You should expect the same of your container sources.

Security meets DevOps workflow

Many software vendors help secure the supply chain by digitally signing all released packages and distributing them through secure channels. With respect to containers specifically, the Red Hat Container Registry lets you know that components come from a trusted source, platform packages have not been tampered with, the container image is free of known vulnerabilities in the platform components or layers, and the complete stack is commercially supported.

Incident response goes well beyond patching code. However, a software deployment platform and process with integrated testing is still an important part of quickly fixing problems (as well as reducing the amount of buggy code that gets pushed into production).

A CI/CD pipeline that is part of an iterative, automated DevOps software delivery process means that modular code elements can be systematically tested and released in a timely fashion. Explicitly folding security processes into the software deployment workflow makes security an ongoing part of software development — rather than just a gatekeeper blocking the path to production.
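
A hedged sketch of what such a gate might look like: a pipeline stage that exits non-zero, and therefore fails the build, when scan findings exceed an agreed severity threshold. The scan-result format here is an assumption, not any specific scanner's output.

```python
# Minimal sketch of a CI/CD security gate: fail the pipeline stage when
# findings exceed an agreed severity threshold. The finding format and
# the threshold are illustrative assumptions.
import sys

SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}
MAX_ALLOWED = "medium"   # anything above this blocks the release

def gate(findings: list[dict]) -> int:
    limit = SEVERITY_RANK[MAX_ALLOWED]
    blocking = [f for f in findings if SEVERITY_RANK[f["severity"]] > limit]
    for f in blocking:
        print(f"BLOCKED by {f['id']} ({f['severity']}): {f['title']}")
    return 1 if blocking else 0

if __name__ == "__main__":
    scan_results = [
        {"id": "CVE-0000-0000", "severity": "high", "title": "example finding"},
    ]
    sys.exit(gate(scan_results))
```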

After all, you need to keep those vegetables coming down the line.

This post is adapted from the recent book "From Pots and Vats to Programs and Apps: How Software Learned to Package Itself" by Red Hat’s Gordon Haff and William Henry. Want the book? You can download a free PDF.

Gordon Haff is Technology Evangelist at Red Hat where he works on product strategy, writes about trends and technologies, and is a frequent speaker at customer and industry events on topics including DevOps, IoT, cloud computing, containers, and next-generation application architectures.