Public cloud security: Follow the Goldilocks principle

Organizations worry too much about some aspects of public cloud security – and too little about others. Let's get practical

Security pervades just about every aspect of IT these days: data breaches, IoT devices, AI, containers, development pipelines and more. Ask me what’s at the top of the list of just about any IT leader’s challenges, and I’ll do my best Amazing Kreskin impression: “Security!” – and I’ll almost certainly be right.

Indeed, security is important. But I’ve been saying for a long time that “security” covers many concerns that are only loosely related. Making business and IT decisions, properly evaluating risk, and taking concrete steps requires us to tease concerns apart – so that we can better understand what we’re really worried about, what business goals we’re trying to accomplish, and what appropriate tradeoffs are. 

[ Do your hybrid cloud security plans pass the test? See our related story: Hybrid cloud security: 5 key strategies. ]

As an industry, we’re making progress. But it’s a slog. Let’s consider the state of how we think about public clouds circa 2018.

Public cloud security progress

Public cloud security discussions have certainly made progress. Sure, you can still find server huggers arguing that they’ll never use an “insecure” public cloud for anything. But they’re in the minority. Pick up just about any analyst report on multi-cloud or hybrid cloud and the message is clear: Organizations large and small are migrating at least some of their workloads from on-premises to public clouds.

But workloads still need to be evaluated individually. Organizations must weigh each workload, along with the applications and data it interacts with, against multiple criteria to determine whether it should migrate to a public cloud, stay where it is, or move to other on-premises or hosted infrastructure.

They also need to understand the issues that relate to security and apply an appropriate degree of scrutiny. Here, I find that organizations often worry too much about some things and not enough about others.

When you worry too much

Worrying too much is how we got started with the public cloud discussion. Many IT organizations put too much faith in their own ability to run a tight ship with highly skilled security experts. Surely they would never fall victim to poor patch processes, misconfigured network equipment, insider threats, or other design and operational errors. Well-publicized breaches have hopefully leavened such attitudes with at least a degree of humility.


In all fairness, some of these attitudes also just sprang from the newness of the public cloud model. Third-party auditors and regulators also weighed in with often-appropriate caution. But it's now generally accepted that public cloud infrastructure, properly matched to an organization's needs, can be sufficiently secure. For one thing, the nature of public clouds is that providers approach security using specialized staff, automated processes, and discipline (which is not to say that enterprises don't, but it's by no means a given).

When you worry too little

At the same time, skepticism about public cloud security sometimes seems to be giving way to an attitude that running workloads on public clouds makes all security someone else’s problem. Nothing could be further from the truth. Certain layers of security become the responsibility of the cloud vendor, an approach referred to as a “shared responsibility” model. But it’s critical to be crystal clear on the aspects of security where you’re still on the hook. [ Read also: What is different about cloud security. ] 

Here are some examples. I’ll limit this discussion to Infrastructure-as-a-Service, but the same basic principles apply to any type of cloud service. Even if you’re just using Software-as-a-Service, appropriate identity and authorization controls are still ultimately on you.

Many practices, especially at the operating system level and above, don't (or shouldn't) change in a public cloud. One such practice is obtaining software from known, trusted sources. Whether on-premises or in a public cloud, it's equally important to use certified software images and to maintain those images throughout their lifecycle. One effective approach is to choose public cloud providers whose back-end services deliver relevant software patches promptly and install them as needed.
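The "known, trusted sources" practice boils down to something simple: before installing anything, confirm that what you downloaded matches what the publisher actually released. Here's a minimal Python sketch of that check; the artifact bytes and the "published" digest are hypothetical stand-ins for a real package and the checksum a vendor would publish alongside it.

```python
import hashlib

def verify_checksum(data: bytes, expected_sha256: str) -> bool:
    """Compare an artifact's SHA-256 digest against the publisher's value."""
    return hashlib.sha256(data).hexdigest() == expected_sha256.lower()

# Hypothetical artifact; this digest stands in for one published by a trusted source.
artifact = b"example package contents"
published = hashlib.sha256(artifact).hexdigest()

assert verify_checksum(artifact, published)            # matches: safe to install
assert not verify_checksum(artifact + b"x", published)  # tampered: reject
```

In practice this is what package managers and signed image registries do for you automatically; the point is that the verification step has to happen somewhere, whether you run it or your cloud provider does.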

When you add in containers

Containers extend this model. There are many layers to container security, starting with the multiple levels of security built into Linux itself. Linux namespaces, Security-Enhanced Linux (SELinux), cgroups (control groups), capabilities, and secure computing mode (seccomp) are five of the security features available for securing containers. (For more background on the layers, and how orchestration tools such as Kubernetes fit in, see this whitepaper: Ten Layers of Container Security. )
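These kernel features aren't abstract: on a Linux system, two of them are visible directly in `/proc/<pid>/status`, where the `Seccomp` field reports the seccomp mode (2 means a filter is active) and `CapEff` reports the effective capability mask. A minimal sketch of extracting those fields, run here against a sample status text rather than a live process:

```python
def security_fields(status_text: str) -> dict:
    """Extract the seccomp mode and effective capability mask
    from the text of a /proc/<pid>/status file."""
    fields = {}
    for line in status_text.splitlines():
        key, _, value = line.partition(":")
        if key in ("Seccomp", "CapEff"):
            fields[key] = value.strip()
    return fields

# Sample status text resembling a containerized process:
# seccomp filter mode on, capabilities reduced to a small mask.
sample = "Name:\tnginx\nCapEff:\t0000000000000400\nSeccomp:\t2\n"
assert security_fields(sample) == {"CapEff": "0000000000000400", "Seccomp": "2"}
```

Container runtimes set these values for you based on the profile and capability set you configure; inspecting them is a quick way to verify that a workload is actually as locked down as you intended.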

As with the operating system and applications, you need to know where container images originally came from, who built them, and whether there’s any malicious (or simply out-of-date) code inside them. In many cases, enterprises use internal code repositories for maximum control.
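One concrete way to enforce that control is an admission policy that only accepts images pulled from an approved registry and pinned by content digest, rather than by a mutable tag that could silently change. A minimal sketch, where the internal registry hostname is a hypothetical example:

```python
# Hypothetical internal registry an enterprise controls.
ALLOWED_REGISTRIES = {"registry.internal.example.com"}

def is_trusted_ref(image_ref: str) -> bool:
    """Accept only digest-pinned images from an approved internal registry.
    A digest pin (@sha256:...) identifies exact content; a tag does not."""
    registry = image_ref.split("/", 1)[0]
    return registry in ALLOWED_REGISTRIES and "@sha256:" in image_ref

assert is_trusted_ref("registry.internal.example.com/team/app@sha256:" + "0" * 64)
assert not is_trusted_ref("docker.io/library/nginx:latest")  # external, mutable tag
```

Real deployments would layer image signing and vulnerability scanning on top of this, but digest pinning alone already answers the question "is what's running exactly what we built?"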

Containers should also be considered as part of the broader build system and the entire set of DevOps processes. Managing this build process is key to securing the software stack. By adhering to a “build once, deploy everywhere” philosophy, you ensure that the product of the build process is exactly what is deployed in production. It’s also important to maintain the immutability of your containers: In other words, do not patch running containers; rebuild and redeploy them instead. (See our related article: Why DevSecOps matters to IT leaders. )
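The "build once, deploy everywhere" and immutability ideas can be captured in a few lines: a build produces an artifact whose identity is its content digest, every environment deploys that same artifact, and a patch means a new build with a new identity, never an in-place change. A toy sketch of that model (the artifact naming is illustrative, not any particular tool's format):

```python
import hashlib

def build(source: bytes) -> str:
    """Build once: the artifact's identity is its content digest."""
    return "app@sha256:" + hashlib.sha256(source).hexdigest()

# Deploy everywhere: staging and production run the identical artifact.
artifact = build(b"v1 source")
staging, production = artifact, artifact
assert staging == production

# Immutability: a security fix means a rebuild and redeploy,
# producing a new artifact, not a patched running container.
patched = build(b"v1 source + security fix")
assert patched != artifact
```

The practical payoff is that what you tested is provably what you shipped, and rollback is just redeploying a previous digest.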

To sum this up, don’t think of public cloud security as something unique and special – something that lies outside overall IT security and best practices. Rather, understand which responsibilities you’re willing to pass off to a provider, which you still own, and then properly manage that combination of internal and external services.


Gordon Haff is Technology Evangelist at Red Hat where he works on product strategy, writes about trends and technologies, and is a frequent speaker at customer and industry events on topics including DevOps, IoT, cloud computing, containers, and next-generation application architectures.