The challenges you could meet when implementing Kubernetes

ServerBee Blog
4 min read · Oct 18, 2023


Image by pch.vector on Freepik

Recently there has been much talk that the Kubernetes architecture is well-suited only for large IT companies (such as the FAANGs) and is overkill, or simply too complex, for everyone else.

In my opinion, the problem with Kubernetes adoption is greatly exaggerated, although not unfounded. Implementing K8s in a company's infrastructure can bring many benefits, but it can also become a real challenge for the technical team. If your company supports even a few e-commerce projects, or one or more startups with rapidly growing pipelines (IoT data processing or ML training, for example), then without Kubernetes you will quickly feel all the drawbacks of a self-made container orchestrator: managing and controlling it becomes more difficult every day.

A minimal Kubernetes distribution such as Minikube can be installed on almost any laptop, and you can run the first versions of your services there. Moving to real production, however, can confront the technical team with many challenges. Below we have collected several typical cases that require some brainstorming from software engineers, and where they can gain genuinely valuable experience and take their skills to a new level.

1) Automation of databases in K8s: integration into a new paradigm

The topic of hosting databases on Kubernetes and automating their management still generates a lot of debate in the community. Outside the business-critical segment, however, the opponents of such integration are fewer and fewer. There are several reasons: more and more applications are moving from SQL to NoSQL databases, which process narrow, niche queries faster and scale better; and the operation of any database, including even SQL databases, can be handled with persistent volumes that connect external storage.

Managing database scaling in Kubernetes is a real challenge even for experienced, qualified DevOps engineers, and for developers it can be even harder. To integrate a database into K8s, you need to rethink your strategy and approach: create backups that take replication and sharding schemes into account, think through cron jobs, create users with exactly the rights needed for each operation, attach persistent volumes for storing dumps, and so on.
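As one illustration of that shift in approach, a nightly backup can be expressed as a Kubernetes CronJob instead of a crontab entry on a host. The sketch below assumes a PostgreSQL instance reachable via a `postgres` service; the secret `postgres-credentials`, the database `mydb`, and the PVC `backup-storage` are hypothetical names for illustration.

```yaml
# Minimal sketch: a CronJob that dumps a Postgres database to a
# persistent volume every night. Names of the service, secret, and
# PVC are illustrative assumptions, not part of any standard setup.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: postgres-backup
spec:
  schedule: "0 2 * * *"          # every night at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: pg-dump
              image: postgres:16
              command: ["/bin/sh", "-c"]
              args:
                - pg_dump -h postgres -U app mydb > /backup/mydb-$(date +%F).sql
              env:
                - name: PGPASSWORD
                  valueFrom:
                    secretKeyRef:
                      name: postgres-credentials
                      key: password
              volumeMounts:
                - name: backup
                  mountPath: /backup
          volumes:
            - name: backup
              persistentVolumeClaim:
                claimName: backup-storage
```

Replication-aware backups are usually pointed at a read replica rather than the primary, and in larger setups this whole concern is often delegated to a database operator instead of hand-written jobs.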

2) Correct definition of resource limits

Most developers are used to dealing with virtually dedicated VPS hosting, so as they gradually add functionality to their programs they allocate resources with some margin. As with virtual machines, they do this to stay within the optimal hosting tariff while the program's functionality grows.

However, after the code is distributed across containers and orchestrated and scaled with Kubernetes, the habit of leaving a margin means that, under autoscaling, the infrastructure consumes far more resources than the application actually needs. Allocating resource limits to services based on consumption analysis, and distributing containers across nodes so that they are scheduled optimally, is a very important task; it often saves companies thousands of dollars per month.
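The mechanism for this tuning is the `resources` section of a container spec. A sketch, assuming a hypothetical `api` Deployment whose numbers would come from actual consumption analysis:

```yaml
# Sketch: explicit requests and limits on a container. "Requests" are
# what the scheduler reserves on a node and what autoscaling math uses;
# "limits" are the hard ceiling (CPU is throttled, memory is OOM-killed).
# The Deployment name, image, and values are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: example/api:1.0
          resources:
            requests:            # used for scheduling and bin-packing
              cpu: "250m"
              memory: "256Mi"
            limits:              # hard ceiling per container
              cpu: "500m"
              memory: "512Mi"
```

Oversized requests are what waste money here: the scheduler reserves the requested amount on a node whether or not the container ever uses it, so inflated requests translate directly into extra nodes.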

3) Correct setup of sticky sessions for load balancing

To reduce the load on the server, we usually use the Kubernetes load balancing that is available out of the box. Under high load, a request to the application can then be balanced to any of multiple replicas. But requests from some web applications, REST APIs, and databases are sensitive to a particular environment and break when the load balancer switches replicas within a single session. In this case, it is better to configure sticky sessions. With the sticky-sessions mechanism, requests from an application receive a unique identifier, and the balancer can associate them with the replica that started serving them and stop switching them to other replicas for a specified time.

Setting up a load balancer with sticky sessions can be a real challenge for a software engineer, especially in non-standard situations.
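With the NGINX Ingress Controller, for example, cookie-based affinity can be switched on with annotations; the cookie plays the role of the unique identifier described above. A sketch, with a hypothetical host and `web` service:

```yaml
# Sketch: cookie-based session affinity via NGINX Ingress Controller
# annotations. The balancer sets a "route" cookie on the first response
# and uses it to pin subsequent requests to the same pod for an hour.
# Host and service names are illustrative.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "3600"   # seconds
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 80
```

Note that affinity at the Ingress layer does not help traffic that bypasses it (e.g. pod-to-pod calls through a plain Service), which is one of the non-standard situations mentioned above.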

4) Deployment of Kubernetes on-premise

Frankly speaking, many engineers dislike setting up K8s on-premise and avoid it as much as possible. It means serious responsibility for the security, availability, and stable operation of services. Nothing works "out of the box" the way it does with cloud services: everything has to be designed, created from scratch, configured, implemented, and connected manually. You have to understand how the system works and control all the processes. But it is a good solution when your project requires complete control over the cluster's design and configuration, strict privacy and security, or compatibility with specific hardware or software.
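To give a flavor of the decisions involved: if the cluster is bootstrapped with kubeadm (one common on-premise approach, assumed here rather than prescribed), the control plane is described in a configuration like the sketch below, where every value that a cloud provider would normally pick for you must be chosen deliberately.

```yaml
# Sketch: a kubeadm ClusterConfiguration for an on-premise control
# plane. All values (endpoint, subnets, version) are illustrative and
# must be chosen to match your network, CNI plugin, and HA setup.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: "1.28.0"
controlPlaneEndpoint: "k8s-api.internal:6443"   # VIP or DNS name for HA
networking:
  podSubnet: "10.244.0.0/16"     # must match the CNI plugin's config
  serviceSubnet: "10.96.0.0/12"
etcd:
  local:
    dataDir: /var/lib/etcd       # plan disk, backups, and HA for etcd
```

Such a file is applied with `kubeadm init --config`, and even then the CNI plugin, storage classes, load balancing, and certificate rotation all remain yours to design and operate.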

I would like to note that this list of challenges in complex functional projects is by no means complete, and you are welcome to share your own cases in the comments. You can also always contact us to discuss the difficult points of your project; we are ready to help with support and advice.


Written by ServerBee Blog

We specialize in scalable DevOps solutions, helping companies support critical software applications and infrastructure on AWS, GCP, Azure, and even bare metal.
