When it comes to cloud-native open source projects, Kubernetes gets a lot of the press. And why wouldn’t it? This container orchestration and management platform takes containers from something an individual developer might use on their laptop to a platform that can run the largest and most demanding containerized workloads in production.
Beyond container orchestration itself, Kubernetes has also served as the primary center of gravity for the many other cloud-native projects that have come into its orbit. These projects have brought many additional capabilities to Kubernetes, such as performance monitoring, developer tools, serverless capabilities, and CI/CD workflows. This allows the Kubernetes project itself to stay focused on the core aspects of container orchestration. Just as Linux distributions require integrating many projects beyond the kernel, so too does a complete Kubernetes container platform require many open source projects beyond Kubernetes itself.
As you learn more about Kubernetes fundamentals, you’ll find a great many open source projects in the Kubernetes ecosystem. These five are widely used and provide capabilities in a number of key areas relevant to developers, operations teams, or both.
[ Get the free eBook: O'Reilly: Kubernetes Operators: Automating the Container Orchestration Platform. ]
Originally built at SoundCloud, Prometheus is an open source systems monitoring and alerting toolkit built on a time series data model; it joined the Cloud Native Computing Foundation (CNCF) as only its second project, after Kubernetes. In addition to monitoring individual systems, Prometheus can also monitor dynamic service-oriented architectures, such as those making use of microservices.
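To make this concrete, Prometheus works on a pull model: it periodically scrapes metrics endpoints listed in its configuration. A minimal sketch of a scrape configuration might look like the following (the job name and target address are placeholders, not from the original text):

```yaml
# prometheus.yml - minimal sketch of a scrape configuration
global:
  scrape_interval: 15s   # how often Prometheus pulls metrics from targets

scrape_configs:
  - job_name: "my-app"               # hypothetical job name
    static_configs:
      - targets: ["localhost:9090"]  # here, Prometheus scraping its own metrics endpoint
```

In a Kubernetes environment, static target lists like this are typically replaced by service discovery, so Prometheus finds pods and services automatically as they come and go.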
This last point highlights one of the reasons why there are so many new open source projects clustered around Kubernetes. Cloud-native development and operations lean towards a more distributed and network-centric application approach than has been the norm with enterprise IT in the past. The tools required to support these architectures often differ from those that have grown up around monolithic applications. (Developers need to learn new skills as well.)
While we’ve focused on Prometheus here, there is a wide range of other open source projects that address various aspects of performance in a Kubernetes environment. For example, Jaeger collects traces for monitoring and troubleshooting microservices-based distributed systems. A service mesh like Istio – which provides a uniform way to integrate microservices, manage traffic flow across microservices, enforce policies, and aggregate telemetry data – is another common technology used in large-scale Kubernetes environments. The modular nature of the Kubernetes ecosystem makes it practical to substitute or add different technology choices to optimize for different workloads or other requirements.
Kubernetes itself aims to provide a set of flexible abstractions covering basic application concepts such as scheduling, replication, and failover automation. But these general capabilities don’t extend to spinning up a complex application like a database cluster, in which components may need to start up in a particular order, or to managing the ongoing lifecycle of that application. Kubernetes therefore provides an extension mechanism for more advanced or application-specific operations.
That’s where Kubernetes Operators come in. Operators encode in software the skills of an expert administrator. Go back to our database cluster: an Operator knows the configuration details and can install a database cluster of a declared software version and number of members. An Operator continues to monitor its application as it runs, and can back up data, recover from failures, and upgrade the application over time, automatically.
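In practice, an Operator watches a custom resource in which you declare the desired state, and it works continuously to make reality match that declaration. A sketch of what such a resource might look like for the database cluster example (the API group, kind, and field names here are hypothetical, not taken from any specific Operator):

```yaml
# Hypothetical custom resource an Operator might watch
apiVersion: example.com/v1alpha1   # placeholder API group and version
kind: DatabaseCluster              # placeholder custom resource kind
metadata:
  name: my-db
spec:
  version: "13.4"           # declared software version to install and maintain
  members: 3                # declared number of cluster members
  backup:
    schedule: "0 2 * * *"   # the Operator could take nightly backups on this schedule
```

The administrator declares "three members running version 13.4, backed up nightly," and the Operator handles ordered startup, failure recovery, and upgrades to reach and hold that state.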
Whereas Kubernetes can make sure an application comes up and has the right storage and networking, Operators allow you to automate all the administrative tasks that might be required on day one (deployment) or day two (ongoing operations).
Operators are a great example of the long-time practice among system administrators – more recently codified and amplified by Site Reliability Engineering (SRE) practice – of automating as many tasks as possible so they don’t need to be performed manually the next time.
The simplest way to think of Knative is that it allows you to run serverless containers on Kubernetes. Knative takes care of the details of networking, autoscaling, and revision tracking, allowing developers to focus on core logic. Knative also provides a framework for developers to build modern apps by handling the event infrastructure needed by most serverless apps.
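For example, a minimal Knative Service definition looks roughly like this (the service name, image, and environment variable are placeholders); from this single resource, Knative manages routing, revisions, and scaling, including scale-to-zero when the service is idle:

```yaml
# Minimal sketch of a Knative Service
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello                                 # placeholder service name
spec:
  template:
    spec:
      containers:
        - image: example.registry/hello:latest  # placeholder container image
          env:
            - name: TARGET                      # placeholder app configuration
              value: "World"
```

Compared with wiring up a Deployment, Service, Ingress, and autoscaler by hand, the developer declares one resource and lets Knative handle the rest.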
[ Kubernetes terminology, demystified: Get our Kubernetes glossary cheat sheet for IT and business leaders. ]
Arguably, the most important aspect of serverless is increased developer productivity, because so many organizations today are focused on rapid business innovation. However, while Knative is indeed focused on running serverless workloads on top of Kubernetes, it’s designed to enable a variety of containerized serverless architectures. This includes using microservices and containers in ways that are more complex than the original serverless model, which was mostly limited to simple functions triggered by asynchronous events.
Furthermore, Knative enables serverless applications that use portable services (such as those deployed via Operators) that don’t require an application to be locked into a particular infrastructure or cloud provider. (Lock-in can be a consideration with native provider services.)
Let’s check out two more projects that make Kubernetes better: