We won’t recommend them to you: five DevOps tools with significant drawbacks or vulnerabilities
Today I want to talk about some bad choices and common mistakes in DevOps practice. I hope it helps save money and sanity for anyone planning to deploy container infrastructure in the near future. We have put together a selection of five DevOps tools with significant drawbacks that we won’t recommend to you. So let’s go.
Docker Swarm
Despite the simple installation and configuration of container clusters, Docker Swarm draws many complaints from DevOps engineers. Are they fair? You are welcome to share your opinion in the comments.
- Poor load balancing and the lack of autoscaling lead to unstable performance during rapid load growth. It is almost impossible to automate horizontal scaling with the built-in Swarm load balancer without patching, and vertical scaling significantly increases infrastructure costs. In short, the built-in load balancer is simply unstable and, in our experience, worked acceptably only on specific versions and distributions (see the example at the end of this section);
- A much smaller ecosystem than other orchestrators offer. With Docker Swarm, you constantly have to extend its functionality almost by hand, while other orchestrators either ship it out of the box or add it easily through extensions. With Docker Swarm, this is often a non-trivial task, so be prepared for a lot of manual fine-tuning;
- Today, Docker Swarm has turned into a vendor-locked solution. As long as you stay within the resources of its own cloud (Docker Cloud) and don’t need particularly broad functionality, your project may run without issues for a while, but as soon as you want to use the features or resources of another cloud with Docker Swarm, it can become a serious problem.
In addition, working with this orchestrator you will run into a large number of bugs, and it is not only our experience: many of our partners and customers have asked us for help with such issues. So we can’t recommend it for your projects, and we advise you to migrate to another orchestrator, for example Kubernetes, at the first opportunity.
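To illustrate the scaling gap, here is a minimal sketch (the service and deployment name `web` is hypothetical): in Swarm you change the replica count by hand or build your own tooling around it, while Kubernetes ships a Horizontal Pod Autoscaler that does it for you.

```bash
# Docker Swarm: scaling is a manual decision; there is no built-in autoscaler,
# so reacting to load spikes means running this yourself or scripting around it.
docker service scale web=5

# Kubernetes: the Horizontal Pod Autoscaler adjusts the replica count
# between the given bounds based on observed CPU usage.
kubectl autoscale deployment web --min=2 --max=10 --cpu-percent=80
```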
Nagios as a Kubernetes and container monitoring tool
Nagios is a favorite and familiar tool for many sysadmins to monitor static hardware and various kinds of machines. However, it is not designed to monitor container infrastructure and the dynamics of its processes. Of course, you can wire it into Kubernetes, but it won’t help you much, because it can’t discover and monitor containers automatically. You can develop your own ways of tracking services, but you will have to update static configurations manually after every automatic change in the dynamic container infrastructure. With constantly changing load and autoscaling in a Kubernetes cluster, it simply makes no sense to try to keep data collection through Nagios relevant and performant. So we recommend more effective and reliable tools, such as Prometheus/Grafana, InfluxDB, and Telegraf, that have long been used to monitor container-based infrastructures.
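To show the difference in approach, here is a minimal sketch of a Prometheus scrape configuration that uses Kubernetes service discovery (it assumes Prometheus runs inside the cluster with a service account allowed to list pods): targets are picked up and dropped automatically as pods come and go, with no static config to edit by hand.

```bash
# A minimal sketch of a Prometheus config with Kubernetes pod discovery
# (assumes in-cluster Prometheus with permissions to list pods).
cat > prometheus.yml <<'EOF'
scrape_configs:
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod   # scrape targets appear and disappear together with the pods
EOF
```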
SSH servers inside containers
Another bad habit that should become a thing of the past in container infrastructure is the widespread practice of SSH access into containers. When you use VPS hosting, a remote server, or a virtual machine, an SSH server gives you a command shell to access data, edit configurations, and install and manage services. But does it make sense in the cloud, where data, configurations, and instances live separately from each other? Do you really need SSH access to an instance shell where only one or two processes are running? If something goes wrong with them, the orchestrator will detect it and simply kill them, launching new copies. In addition, running an SSH server in every instance, of which there may be several dozen in the infrastructure, makes the infrastructure far more vulnerable to attacks and reduces the overall security level.
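If you occasionally do need a shell or logs from a container, the orchestrator already provides that on demand, without baking an SSH daemon into every image. A quick sketch with kubectl (the pod name `my-app-pod` is hypothetical):

```bash
# One-off shell access through the orchestrator instead of a permanent sshd:
kubectl exec -it my-app-pod -- /bin/sh

# Logs and resource usage are available without entering the container at all
# (kubectl top requires metrics-server in the cluster):
kubectl logs my-app-pod
kubectl top pod my-app-pod
```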
Bash as a tool for IaC
There are many ways, simple and not so simple, to describe container infrastructure as code, for example Terraform, Ansible, Chef, or Salt. Everyone picks whatever is most convenient: a declarative language or one of the imperative ones. But if someone decides to do it with a Bash script, we are ready to admit that you definitely can’t call such a person lazy :) Just imagine the amount of configuration code that will have to be run, edited, and debugged every single time. Is it really worth spending so much time on it? You would be better off spending that time learning HashiCorp’s declarative HCL for Terraform, which is quite simple and convenient. And if you come from the developer’s world, Pulumi will be very interesting for you.
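For comparison, here is a minimal Terraform sketch (the region and AMI ID are placeholders, not recommendations): you describe the desired state once, and Terraform works out what to create, change, or destroy on every run, instead of you re-running and re-debugging a Bash script.

```bash
# A minimal Terraform sketch in HCL; values below are placeholders only.
cat > main.tf <<'EOF'
provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "web" {
  ami           = "ami-12345678"   # placeholder AMI ID
  instance_type = "t2.micro"
}
EOF

terraform init && terraform plan && terraform apply
```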
Windows containers in Kubernetes
Recently, Microsoft has been trying to follow open-source technologies and offer its users containerization, for example for the development of .NET applications. It is working intensively on Windows container support in Kubernetes, but compared to the capabilities of open-source solutions, these containers are not as functional and flexible. Here is why:
- You must choose the license types and system versions carefully. Not all Windows versions and licenses allow you to work in the required direction; for example, with the compact and simple Windows Nano Server you will not be able to develop against the .NET Framework;
- Cloud costs will grow due to the size of Windows container images and the required software;
- There are also software licensing costs;
- The Kubernetes control plane runs on Linux, so you can’t use Windows for it, and Windows workloads have to be scheduled onto dedicated Windows worker nodes (as in the sketch below). Also, running Windows containers will always raise the system requirements for the infrastructure configuration.
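A minimal sketch of what that scheduling looks like in practice (the pod name is illustrative; the image is Microsoft’s public IIS Server Core image): the pod is pinned to Windows nodes via the standard `kubernetes.io/os` label, while the control plane itself stays on Linux.

```bash
# Pin a Windows workload to Windows worker nodes; the control plane stays Linux.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: iis-sample
spec:
  nodeSelector:
    kubernetes.io/os: windows   # standard label set on Windows nodes
  containers:
    - name: iis
      image: mcr.microsoft.com/windows/servercore/iis
EOF
```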
Still, for the reasons mentioned above, we can’t recommend Windows-based containers in cloud infrastructure.