If you’re asking questions about Kubernetes to learn more about the platform, security will be on your list. The good news: both the open source project and the commercial platforms that sit on top of it have plenty of strong security-related features baked in. Moreover, there’s a lively Kubernetes community with a shared interest in the ongoing security of the orchestration tool.
“The Kubernetes community has had security at the forefront of their minds from the start,” says Wei Lien Dang, VP of products at StackRox.
As with many technologies, though, the security risks tend to follow the adoption curve. So as the use of containers expands, expect Kubernetes to become an important focal point for security in containerized environments.
[ Want to help others understand Kubernetes? Check out our related article, How to explain Kubernetes in plain English. ]
Keeping security top of mind at the outset is a healthy starting point, especially before you move from a test or dev environment to running Kubernetes in production and scaling your container deployments.
It can help to simply start by asking questions: What do you need to be thinking about? What might be different from your other security strategies? And so forth. Of course, it also helps to get good answers. We asked a range of security and Kubernetes pros to share some of the common Kubernetes security questions they hear – and to share clear answers.
[ Read also: Kubernetes security: 4 areas to focus on. ]
1. How does the popularity of containers and Kubernetes change how enterprises should approach security?
You can’t necessarily just reuse the security playbook you ran for your monolithic apps and/or traditional on-premises infrastructure. A move to containers – and often a corresponding shift to hybrid and/or multi-cloud environments – necessitates revisiting and likely revising your approach. The same logic applies with Kubernetes, so it helps to start by asking: What changes?
Here’s NeuVector CTO Gary Duan’s perspective:
“The move to containers and Kubernetes has really redefined the perimeters that businesses must defend. Whereas standard firewalls could safeguard traditional environments simply by keeping out external threats, threats to container environments can arrive both from external connections and via escalating lateral movements within internal container traffic.
“At the same time, the highly dynamic nature of container environments – in which some containers may only exist for moments – makes it functionally impossible to manually administer complex security policies. Therefore, protecting these environments now means recognizing a tight micro-perimeter surrounding each container workload, and applying security measures around those micro-perimeters in an automated manner.”
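One concrete mechanism for the micro-perimeter Duan describes is a Kubernetes NetworkPolicy scoped to a single workload’s labels. The sketch below – with illustrative names like “prod,” “frontend,” and “backend” – allows only frontend pods to reach the backend workload, and only on one port:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-micro-perimeter
  namespace: prod
spec:
  podSelector:
    matchLabels:
      app: backend          # the workload this micro-perimeter surrounds
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend     # only frontend pods may connect...
    ports:
    - protocol: TCP
      port: 8080            # ...and only on this port
```

Because the policy selects pods by label rather than by IP address, it keeps applying as containers come and go – which is what makes automated enforcement feasible in the short-lived, dynamic environments Duan describes.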
Red Hat security strategist Kirsten Newcomer encourages people to think of container security as having ten layers – including both the container stack layers (such as the container host and registries) and container lifecycle issues (such as API management).
2. How does everything “going Kubernetes” change my security relationship with vendors?
Eric Han, VP of product management at Portworx, applies a similar question to vendor management: In an increasingly distributed and code-defined environment, what should IT leaders be asking of their vendors and partners?
Keep an eye on how their own infrastructure and delivery models are evolving, too.
“More and more, IT vendors are integrating their technologies with Kubernetes and leveraging its power,” Han says. “This means that IT building blocks like storage, networking, and compute no longer come from physical appliances but instead must come through integrated software stacks and containers themselves.
“Forward-looking customers are already asking, ‘How do we ensure security with this new delivery model, and how do I best partner with my chosen vendors to improve IT outcomes?’
“There’s no single answer that says ‘run this’ or ‘do that,’” Han notes. “Rather, vendors should work closely with their enterprise customers, be radically transparent, and ensure they provide actionable information. For example, a simple place to start is for vendors to provide security scan information and publish CVE feeds for their products.”
Amir Jerbi, CTO at Aqua Security, notes a subset of this question: If you’re using a managed Kubernetes service from a cloud vendor, are they handling security?
The answer is yes – but you still have work to do.
“The cloud providers employ a shared responsibility model, where (in very broad terms) they secure the infrastructure, and the customer secures the application. As with other cloud services, it’s the customer’s responsibility to ensure that administrative access to the account is properly authenticated. Specifically in the case of managed Kubernetes, the cloud providers ‘lock’ the master node, which makes the cluster as a whole more secure by default,” Jerbi says.
“However, the same rule applies to the workloads themselves – if you’re running vulnerable containers in your cluster, your application will be vulnerable, and that may lead to anything from data theft to crypto-currency mining to denial-of-service attacks, so don’t assume that just because the cluster itself is properly secured, so is the application running on it.”
3. What security features are baked into Kubernetes?
Both Dang from StackRox and Jerbi from Aqua Security note the importance of understanding Kubernetes’ native security features – again, there’s already strong functionality in the tool – as well as areas where you may need to complement them with other tools. Moreover, there will be a learning curve for some IT pros in properly implementing the native security features.
“Kubernetes has many security features, and it takes some expertise to use them correctly,” Jerbi says. “Certainly role-based access control (RBAC) and ensuring that the API server is properly secured are key, and it’s also recommended to use the Pod Security Policy to limit pod capabilities.”
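To make the RBAC point concrete: Kubernetes grants permissions through Role and RoleBinding objects, and anything not explicitly granted is denied. The hedged sketch below – the “staging” namespace and “dev-team” group are illustrative – gives a group read-only access to pods in one namespace:

```yaml
# Role granting read-only access to pods in the "staging" namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: staging
  name: pod-reader
rules:
- apiGroups: [""]          # "" is the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
# Bind the role to a group, so members can view pods but change nothing
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: staging
subjects:
- kind: Group
  name: dev-team
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Because RBAC is deny-by-default, any verb, resource, or namespace not listed in a binding stays off-limits to that group.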
Dang adds Kubernetes’ network policies for segmentation, secrets management, and the use of namespaces for isolation to the list of native features that reflect the community’s commitment to security. Again, just don’t mistake these features for complete out-of-the-box security (because, well, that’s never true).
“The challenge, of course, is understanding all these capabilities and knowing the best way to take advantage of them. In many cases, while the capabilities are native, the default configurations are not the secure settings,” Dang says. “For example, network policies are opt-in capabilities, and even when you opt into applying network policies, the default configuration is that all communication paths are open – a ‘default allow’ posture, in security-speak.”
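Dang’s “default allow” point is easy to see in practice: until a pod is selected by some NetworkPolicy, Kubernetes permits all traffic to and from it. A common first step is a namespace-wide default-deny policy, sketched below (the “staging” namespace name is illustrative):

```yaml
# Deny all ingress and egress for every pod in the "staging" namespace;
# traffic is then re-enabled selectively by additional, narrower policies.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: staging
spec:
  podSelector: {}        # an empty selector matches all pods in the namespace
  policyTypes:
  - Ingress
  - Egress
```

One caveat: NetworkPolicy objects only take effect if the cluster’s network plugin enforces them; on a network plugin without policy support, they are silently ignored.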