Frequent K8s updates: is something wrong with that?

ServerBee Blog
May 30, 2023 · 4 min read


Image by vectorjuice on Freepik

Last year, 2022, Kubernetes received three significant releases, the result of large-scale, systematic work: about 40 enhancements per release, 123 over the year. Besides improving a large number of existing functions and adding some new ones, the main focus of the K8s developers was on optimizing the core. During this renovation, some deprecated features were removed, and for some projects that turned out to be really painful.

1. Dockershim was finally removed from kubelet (in K8s 1.24)

The deprecation of Docker as a container runtime in Kubernetes was announced well in advance, with warnings appearing in version 1.20. K8s, as a rule, relies on unified interfaces, such as CRI (Container Runtime Interface) for container runtimes. Docker does not support CRI, so it required dockershim, a component implementing CRI on top of Docker; because of frequent support issues with dockershim, it was removed. Notably, Kubernetes does not need dockershim to run Docker-built containers: it works directly with containerd, which is fully compatible with Docker images, so clusters can migrate seamlessly without the extra abstraction layer. Besides containerd, many also recommend the promising cri-o, which is completely native to K8s and ready for production. For our clients, we use containerd for EKS, GKE, and AKS clusters, and cri-o for on-premise installations.

Some large projects are still migrating from dockerd; it is probably the biggest major change we have dealt with this year. When changing the container engine, a lot of time goes into testing already existing deployments before they are rolled out to production clusters. However, the promise of better stability, speed, and security keeps us moving, and we hope to finally finish all the migrations this quarter.
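A quick way to verify what your nodes are actually running, assuming you have kubectl access to the cluster:

```
# The CONTAINER-RUNTIME column shows each node's runtime, e.g.
# containerd://1.6.x or cri-o://1.26.x once you are off dockershim.
kubectl get nodes -o wide
```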

2. The growing role of CSI (Container Storage Interface), core optimization, and cleanup

From version to version, Kubernetes keeps unlocking the potential of the Container Storage Interface (CSI) for attaching storage through the Persistent Volume subsystem, while gradually removing the legacy in-tree modules that used to serve this function. During the year, the following capabilities became stable:

a) CSI volume expansion — a set of improvements for expanding the size of CSI volumes (PV); see the sketch after this list;

b) vSphere in-tree to CSI driver migration — migration from the vSphere storage plugin built into the K8s code base to the CSI driver;

c) Azure File in-tree to CSI driver migration — migration from the Azure File storage plugin built into the K8s code base to the CSI driver;

d) Permission for Kubernetes to supply the Pod’s fsGroup to the CSI driver on mount — passes the Pod’s fsGroup to the CSI driver as an explicit field, so that volume ownership can be changed during mounting (also sketched after this list).
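To make items (a) and (d) more concrete, here are minimal sketches; the StorageClass, PVC, and driver names are hypothetical placeholders, not anything from a real cluster. Expanding a CSI volume requires a StorageClass that allows it, after which growing a PVC is a one-line patch:

```
# (a) The StorageClass must opt in to volume expansion...
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-csi                  # hypothetical StorageClass
provisioner: csi.example.com      # hypothetical CSI driver
allowVolumeExpansion: true        # required for PVC resizing
EOF

# ...then resizing is just a patch of the PVC's requested size.
kubectl patch pvc data-volume -p \
  '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'
```

Item (d) itself is mostly plumbing between kubelet and the driver, but the closest user-facing knob is the fsGroupPolicy field on the CSIDriver object, which controls how the Pod’s fsGroup is applied to that driver’s volumes:

```
# (d) Declare how fsGroup should be handled for this driver's volumes.
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: csi.example.com           # hypothetical CSI driver
spec:
  fsGroupPolicy: File             # always apply the Pod's fsGroup on mount
EOF
```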

As we can see, the migration to CSI continues. Since we had taken care of it earlier, it was not a blocker when updating AKS worker pools to newer K8s versions.

Another interesting update in this group: in the latest releases, many more CSI features became stable. For example, CSI ephemeral volumes let you use CSI drivers to provide local or remote ephemeral volumes without a PV or PVC, by defining the CSI volume directly in the Pod’s specification (see the sketch below).
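A minimal sketch of such an inline volume, assuming a hypothetical driver inline.csi.example.com that supports the Ephemeral volume lifecycle mode:

```
# The volume is declared directly in the Pod spec and lives only as long
# as the Pod; no PV or PVC objects are created.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: csi-inline-demo
spec:
  containers:
  - name: app
    image: busybox:1.36
    command: ["sleep", "3600"]
    volumeMounts:
    - name: scratch
      mountPath: /data
  volumes:
  - name: scratch
    csi:
      driver: inline.csi.example.com   # hypothetical ephemeral-capable driver
      volumeAttributes:                # driver-specific parameters
        size: 1Gi
EOF
```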

Ephemeral containers, which simplify debugging, also became stable. They are temporarily placed inside an existing Pod for troubleshooting: useful when you need to inspect a container but cannot use kubectl exec because the container has crashed or its image lacks debugging utilities.
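In practice it is a one-liner; the Pod and container names here are hypothetical:

```
# Inject a temporary debug container into a running Pod. Works even when
# the target container has crashed or its image has no shell or tools.
kubectl debug -it my-app-pod --image=busybox:1.36 --target=app
```

The --target flag shares the process namespace with the named container, so the debug shell can inspect its processes (runtime permitting).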

Isn’t that a good reason to update and move forward to version 1.26?

This quick overview of just a few improvements gives a sense of the huge amount of work done through the combined efforts of the Kubernetes team and community. You can find a detailed description and explanation of every release on the kubernetes.io blog. The main goals are qualitative improvement, rapid development, and the smoothest possible migration to the latest features. As DevOps engineers, we always study the changelogs and blog posts for each K8s release in detail, so that we know in advance, at least in theory, what to expect from the next Kubernetes versions. Each update requires an individual approach, but after updating the first few clusters in practice, we note all the details and nuances and then, as usual, share them with our entire engineering team.

Sometimes even a single change requires a significant step-by-step plan. Still, the advantages of newer versions inspire our engineers: they are always ready to put in the work to make an update as stable as possible and to lay the groundwork for smoother Kubernetes updates down the road. For our part, we recommend not being afraid of updating your K8s clusters and, even more importantly, not delaying it, because it is much easier to go release by release than to “jump” across several versions at once.


Written by ServerBee Blog

We specialize in scalable DevOps solutions, helping companies support critical software applications and infrastructure on AWS, GCP, Azure, and even bare metal.
