Kubernetes deployments: 6 security best practices

Experts share tips on improving the overall security of your software development and deployments, even as you increase speed and scale using Kubernetes

4. Don't: Manage your cluster manually

Automation is intrinsic to Kubernetes, and it’s similarly critical to securing your Kubernetes deployments. It may be tempting to make quick fixes and other changes by hand, but that’s rarely a good idea from a security and reliability standpoint.

[ Related read: Managing Kubernetes: 7 things you should understand. ]

“You definitely should not be managing the state of the cluster ad hoc or manually,” Allen says. “It is very easy to edit deployments and services to make quick changes, especially for small things like environment variables or container images/tags, but this process can quickly lead to technical debt and drift as there will be uncertainty as to what the state should be. If another person comes along and is not aware of the manual changes and makes additional changes, it will only cause further confusion and potential outages.”
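
That discipline starts with declarative management: rather than patching a live object with kubectl edit, keep manifests in version control and apply them, so the desired state is always recorded and reviewable. A minimal sketch, with hypothetical names and an illustrative image tag:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web-frontend              # hypothetical application
      namespace: prod                 # hypothetical namespace
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: web-frontend
      template:
        metadata:
          labels:
            app: web-frontend
        spec:
          containers:
          - name: web
            image: registry.example.com/web-frontend:1.4.2   # pinned tag; change it via a reviewed commit, not kubectl edit
            env:
            - name: LOG_LEVEL
              value: "info"

Applying this with kubectl apply -f (or through a GitOps tool such as Argo CD or Flux) keeps the cluster state reproducible, so there is never uncertainty about what the state should be.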

5. Do: Finely tune the security-related "knobs" within your environment

As we’ve covered previously, Kubernetes comes with plenty of rich features for security and reliability. The key is to ensure you’re properly configuring them and managing them over time. Just as with other kinds of platforms, some security threats arise out of misconfigurations or a failure to properly harden the environment. As we noted recently in this story, examples in Kubernetes include leaving sensitive ports (namely, 10250 and 10255) exposed or not setting up role-based access control (RBAC). The CIS Kubernetes Benchmark is one starting point for performing a checkup on your environment.
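
On the RBAC point, a useful habit is to grant the narrowest role that does the job. A minimal, namespace-scoped sketch (the namespace and group names are hypothetical):

    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: pod-reader
      namespace: team-a
    rules:
    - apiGroups: [""]
      resources: ["pods", "pods/log"]
      verbs: ["get", "list", "watch"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: pod-reader-binding
      namespace: team-a
    subjects:
    - kind: Group
      name: team-a-developers        # hypothetical group from your identity provider
      apiGroup: rbac.authorization.k8s.io
    roleRef:
      kind: Role
      name: pod-reader
      apiGroup: rbac.authorization.k8s.io

This grants read-only access to pods in a single namespace; anything broader (cluster-wide roles, write verbs) should be a deliberate, documented decision.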

“Administrators can secure the infrastructure by providing models to audit access to nodes and secure the network and persistent volumes,” Pabon from Portworx says. He shares a few example steps Kubernetes administrators can take:

  • Limit the use of privileged containers to cases where they are absolutely necessary: Running a container as privileged essentially gives it – and anyone with access to it – full privileges on the node, which increases your blast radius. “Kubernetes provides Pod Security Policy objects as a model to manage access to who can deploy containers with this elevated node access,” Pabon says. (See the sketch after this list.)
  • Ensure appropriate logging: “Accessing a Kubernetes node using ssh should go through an audit server to log all access to the nodes, and all commands executed on the server should be logged,” Pabon says.
  • Provide network isolation for tenants based on Kubernetes Network Policies and an appropriate CNI component: “This ensures services in each namespace are accessed according to the ingress and egress rules set in the Network Policies, further isolating and securing tenant services from each other,” Pabon explains.
  • Consider a service mesh technology (like Istio) that can enable teams to automate TLS and authentication/authorization services for their applications.
  • Finally: “Administrators should also integrate a storage system which provides Kubernetes with not only encrypted volumes but also authentication and authorization of persistent volume access,” Pabon says. “A secure storage system extends the security model beyond Kubernetes normal namespace isolation by ensuring only the authenticated tenant has the appropriate access to the volume with or without Kubernetes.”
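
To illustrate the first point, a minimal Pod Security Policy that disallows privileged containers might look like the sketch below (illustrative only; note that PodSecurityPolicy has since been deprecated in newer Kubernetes releases in favor of Pod Security admission):

    apiVersion: policy/v1beta1
    kind: PodSecurityPolicy
    metadata:
      name: restricted
    spec:
      privileged: false               # no privileged containers
      allowPrivilegeEscalation: false
      runAsUser:
        rule: MustRunAsNonRoot
      seLinux:
        rule: RunAsAny
      fsGroup:
        rule: RunAsAny
      supplementalGroups:
        rule: RunAsAny
      volumes:
      - configMap
      - secret
      - emptyDir
      - persistentVolumeClaim

Which workloads may use a given policy is itself controlled through RBAC (the "use" verb on the policy), which is how administrators gate who can deploy with elevated node access.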

6. Don't: Rely on default configurations

Those rich features notwithstanding, Kubernetes’ default configurations are not optimized for your security posture, at least not in the upstream open source project. (This is one area where commercial platforms and providers can help.)

“What companies cannot afford to do is to simply rely on default configurations for deployments, because they are optimized for operational success and not security,” Dang says.

“For example, by default, network segmentation policies are not applied on deployments and their pods in Kubernetes, so every asset can talk to every other asset,” he explains. “This default setting is great for getting a developer quick success in building an app, but leaving these default settings in place in production will mean the organization is more vulnerable because the blast radius is much bigger than necessary.”
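
A common first step away from that default is a deny-all NetworkPolicy per namespace, after which you explicitly allow only the traffic each service needs. A sketch (the namespace is hypothetical, and enforcement requires a CNI plugin that supports NetworkPolicy, such as Calico or Cilium):

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: default-deny-all
      namespace: prod                 # hypothetical namespace
    spec:
      podSelector: {}                 # selects every pod in the namespace
      policyTypes:
      - Ingress
      - Egress

With this in place, any pod-to-pod traffic you still want must be opened deliberately with additional allow rules, which keeps the blast radius as small as the application permits.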

[ Kubernetes 101: An introduction to containers, Kubernetes, and OpenShift: Watch the on-demand Kubernetes 101 webinar.]
