Last week’s KubeCon and CloudNativeCon conferences drew about 12,000 attendees to San Diego. That’s the largest turnout yet, and it speaks to continuing growth in interest in Kubernetes, in the many projects that augment and work with it, and in the products based on those upstream projects.
But all this activity comes with a price.
1. Kubernetes still feels complex to beginners
Well over half the attendees were conference first-timers. On the one hand, lots of new blood is a sign of a healthy community. On the other hand… well, I’ll let one such first-timer, consultant and industry analyst Keith Townsend, speak for himself: “I’m not shy in saying I don’t know what’s going on at this keynote. It’s not aimed at me or people like me for sure. To use a metaphor - it feels like I’ve been dropped in the middle of an industry conference like the American Medical Association. There are some words and concepts I understand, but overall I’m lost. And there are very few IT topics,” he noted on Twitter.
Enterprise distributions can help to abstract away some of this complexity by making opinionated choices about components and otherwise packaging the cloud-native ecosystem into a more consumable form.
[ Want more detail here? Read also: OpenShift and Kubernetes: What’s the difference? ]
That said, there’s a lot of rapid evolution and change going on in areas as diverse as service mesh, serverless, policy, monitoring, visualization, and more. Part of the reason is that container-centric cloud-native computing rethinks many traditional, server-centric computing models.
As a result, we’re seeing an ongoing replacement of the software that came from that server-centric world with new projects and approaches. With, of course, the expected missteps and trial-and-error learning along the way.
While all this is mostly a good thing for advancing the state of the art of computing infrastructure, the various communities involved need to think harder about gentler on-ramps for those people just getting up to speed.
[ Want a primer for yourself and others? Read also: How to explain Kubernetes in plain English and How to explain Kubernetes Operators in plain English. ]
2. It’s not (yet) about the apps
Complexity aside, though, I’d argue that there’s a relatively complete vision here for how computing infrastructure should function as we enter the 2020s. Details will change, of course, and over the coming years we’ll likely see major new concepts introduced. But the basic new infrastructure model is in place – if unevenly deployed in enterprises so far.
Less has happened on the application front. We have concepts like microservices. We have some new languages, like Go, that were born within a cloud computing context. (Go, an open source language, was developed at Google.) However, from an application developer perspective, we’re arguably still writing a vast amount of code using tools that don’t directly embed the sort of emerging patterns we see in cloud-native computing.
There is progress on some fronts. For example, service meshes deal with the problem of secure communication among services by removing those duties from the hands of the app developer. However, in general, it seems as if there’s been far less activity aimed at helping developers write cloud-native apps better and faster – with less need for advanced programming skills – than has gone into creating infrastructure.
And this mindset shows at KubeCon as a whole. One can argue whether KubeCon is wholly an infrastructure conference. But it certainly isn’t an event that’s really aimed at application developers.
3. Operating securely is an important user concern
There were more security sessions and more security vendors present at this KubeCon than at any prior one. There’s a lot of interest in security out there, noted a cloud-native security panel made up of Liz Rice (VP open source engineering, Aqua Security), Michael Ducy (director of community and evangelism, Sysdig), Sabree Blackmon (formerly a senior security engineer at Docker), Gareth Rushgrove (director of product management, Snyk), and Maya Kaczorowski (product manager, Google security team). However, Rushgrove, who is also in the Cloud Native Computing Foundation (CNCF) Security special interest group, also said that a lot of the interest he sees in that group is in fairly basic topics, such as image scanning and general automation.
A great deal of work has gone into security around Kubernetes, so I asked the panelists where work remained to be done. The group had no shortage of answers.
One area concerns defaults. Kubernetes, like many other projects, sometimes ships with default settings that make it easier to adopt or use but that aren’t best practice from a security perspective. (We’ve seen the same problem with containers more generally, with the use of root access and other escalated permissions.) Kaczorowski said we need to consider making defaults more locked down.
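For illustration only, here’s a minimal sketch of what opting into more locked-down settings can look like in a pod spec today. The name and image are placeholders, and the specific fields shown are my own example of hardening options, not anything the panel prescribed:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-app                       # hypothetical name, for illustration only
spec:
  securityContext:
    runAsNonRoot: true                    # refuse to start containers that would run as root
  containers:
  - name: app
    image: registry.example.com/app:1.0   # placeholder image
    securityContext:
      allowPrivilegeEscalation: false     # block setuid-style privilege gains
      readOnlyRootFilesystem: true        # mount the container filesystem read-only
      capabilities:
        drop: ["ALL"]                     # drop all Linux capabilities
```

The defaults point is that users currently have to opt into settings like these explicitly rather than getting them out of the box.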
[ Read also: Kubernetes security: 4 strategic tips. ]
Another security theme concerned policy management. The Open Policy Agent (OPA) is one tool that aims to simplify security policy, much as Enarx does for Trusted Execution Environments. As the OPA site states, the project “has been used to policy-enable software across several different domains across several layers of the stack: container management (Kubernetes), servers (Linux), public cloud infrastructure (Terraform), and microservice APIs (Istio, Linkerd, CloudFoundry).”
OPA’s policy language, Rego, “manages to work for all these different domains and layers of the stack without requiring any changes or extensions to the language.”
Rego lets you write logic to search and combine JSON/YAML data from different sources to ask questions like “Is this API allowed?”
What does that mean for developers? “The key idea is that while you as an author are thinking about servers, containers, or APIs, Rego just sees JSON/YAML data,” notes the OPA site.
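To make that concrete, here is a rough sketch of a Rego rule answering that sort of question. The package name, path, and role are invented for illustration, the shape of the `input` document is assumed, and syntax details may differ across OPA versions:

```rego
package httpapi.authz   # hypothetical package name

# Deny by default; a request is allowed only if some rule below matches.
default allow = false

# Anyone may GET the (assumed) public endpoint.
allow {
    input.method == "GET"
    input.path == "/public"
}

# Users whose role is "admin" may call any API.
allow {
    input.user.role == "admin"
}
```

OPA evaluates rules like these against whatever JSON document is passed in as `input` – which is what the “Rego just sees JSON/YAML data” comment above is getting at.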
The panel noted that while the tools for policy management are falling into place, a lot of the actual policy isn’t written yet. More advanced network policies and policies across distributed clusters were also highlighted as areas where more work was needed.
Finally, the panelists reiterated that people still get a lot of the security basics wrong. They don’t think about their software supply chains and they run images they download from untrusted sources. They run containers as root or otherwise use permissions that open them up to security problems. Many developers are quite new to security. And in general, “security people don’t understand Kubernetes.”
That leads back to the defaults discussion and the panel’s general conclusion that products in this space need to be more opinionated, as a way of leading possibly unsophisticated users to adopt good security practices.
[ Kubernetes terminology, demystified: Get our Kubernetes glossary cheat sheet for IT and business leaders. ]