Kubernetes is increasingly incorporating the functionalities of an operating system.
The term “OS” is multifaceted, and which functions count as essential depends on one’s perspective. For software engineers, the critical functions are those that manage hardware resources such as the CPU, RAM, and disk. If these are controlled through Kubernetes configurations, then Kubernetes can be considered an OS.
Established as a Server OS
As of 2024, widely used OSes include Windows, macOS, and Linux, with Linux having numerous derivatives such as Android.
These traditional OSes provide a wide range of functions to operate a single computer.
Kubernetes, however, offers the ability to manage not just a single machine but multiple ones.
In cloud services and data center setups, it’s common to have configurations that combine multiple servers. With local area networks (LANs) becoming extremely fast, distributing processing tasks across multiple servers has become a more practical way to expand processing power than merely upgrading a single machine.
At the same time, advancements in semiconductors have made even standalone machines, like personal computers, significantly more powerful. This means it is now feasible to run services, previously reserved for data centers, on a single PC.
Since Kubernetes can operate even on a single machine, a complete cloud service can be built on a development PC, and that same simple setup carries over directly to data center operations.
Decline of Docker
Docker remains an alternative at the same layer, but looking back from the future, it will likely be seen as “an implementation that existed in the early days.”
Driven by Google’s rapid development, Kubernetes reached critical mass and became the de facto standard. This is already a topic of the past, and there’s little more to say about it.
A few years ago, before Kubernetes distributions for development PCs were available, Docker was widely used in development environments.
However, as noted earlier, having the same OS for both the data center and the development machine is important. In this context, Docker is now simply an anomaly.
Evolving as a parasite on Linux
Kubernetes is implemented on Linux, and many of its features are essentially Linux itself.
Considering Linux’s history as a Unix variant that gradually replaced and unified its predecessors, Kubernetes can be seen as the evolved form of Unix in the long run.
While Linux replaced Unix’s init with systemd, Kubernetes provides its own method for launching programs, diverging into a separate OS.
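As a rough analogy, where systemd launches a program from a unit file, Kubernetes launches it from a manifest. The following is a minimal sketch of a Pod that keeps one program running; the image and command are placeholders for illustration:

```
# Minimal sketch: Kubernetes' counterpart to a systemd unit.
# The image and command are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: hello-daemon
spec:
  restartPolicy: Always        # keep the process running, similar to Restart=always
  containers:
    - name: main
      image: ubuntu:24.04
      command: ["bash", "-c", "while true; do echo alive; sleep 60; done"]
```

Applying this with kubectl starts and supervises the process much as systemd would, except that the same manifest works identically on a laptop and across a cluster.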
There was once a naming controversy over GNU/Linux, which made sense from an engineer’s perspective. In light of this, it may be more accurate to recognize the system as GNU/Linux/Kubernetes.
For instance, even now, the lingua franca of Kubernetes remains bash, continuing its lineage from Unix.
Compared to proprietary commercial software, open-source software can be copied and redistributed far more freely, supplying symbiotic components to the evolving OS, much like mitochondria in cells.
Incompatible with Windows and Mac
Kubernetes has already become the de facto OS in the realm of multi-machine computer networks.
Since Kubernetes deliberately incorporates Linux, its compatibility with Windows and Mac is limited.
That said, there are tools available for Windows and Mac, and they aim to ensure basic functionality.
However, because Linux must run inside a VM on these platforms, the result is a nested structure, much like a matryoshka doll. This architecture often leads to issues, particularly when attempting to use internal networks or the PC’s file system.
Problems encountered with Linux VMs on Windows or macOS may lack productive solutions.
In the 2010s, Macs became popular among engineers, and I too used a MacBook Air at the time.
Initially, it sufficed because Unix toolchains were reasonably well integrated, but as container technologies grew in importance and incomprehensible issues increased, I eventually abandoned the Mac completely.
Replacing Linux’s Functions
The previous discussion outlined Kubernetes’ progression as a server OS for clustered configurations, and it has already passed its initial milestones.
Applications running on Kubernetes are essentially Linux applications, so they aren’t limited to server-specific software.
A broader range of tools can be ported to and run on Kubernetes. Since containerization is essentially an isolation and security mechanism on top of the kernel, it doesn’t suffer from the overhead of intermediate-language runtimes such as Java’s.
Although there is some effort required to port applications, the rise of container technologies in the open-source community has led to the widespread availability of Dockerfile build scripts. Many of these scripts are even provided by the development projects for various tools.
Self-destructing Tower of Babel
The main benefit of building applications specifically for Kubernetes on a single machine is reproducibility. This approach offers a solution to the problem of Linux applications that no longer run smoothly on vanilla Linux distributions.
Software often introduces incompatible updates. It’s common to encounter situations where App A and App B both require the same underlying library, but A depends on an older version while B relies on a newer one.
When there are version conflicts, one of the applications will likely fail to run.
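Containerization sidesteps this by letting each application ship its own copy of the library. A sketch, with hypothetical image names:

```
# Hypothetical illustration: each application bundles its own library version
# inside its own image, so both can coexist on the same machine.
apiVersion: v1
kind: Pod
metadata:
  name: app-a
spec:
  containers:
    - name: app-a
      image: example/app-a:1.0   # built against the older library
---
apiVersion: v1
kind: Pod
metadata:
  name: app-b
spec:
  containers:
    - name: app-b
      image: example/app-b:2.0   # built against the newer library
```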
Package managers of distributions like Ubuntu, along with their development communities, handle these incompatibilities in a unified manner. However, there are limits due to the combinatorial explosion of dependencies.
For example, I encountered an issue when trying to set up Python tools.
While Python’s package manager, pip, has long allowed users to install countless libraries, the latest Ubuntu 24.04 now blocks pip from installing into the system Python environment, directing users to the system’s package manager, apt, instead.
When depending on apt, you lose the flexibility to select specific library versions.
If an application’s dependencies are complex, you may need to seek alternative methods to manage them.
It’s clear that a unified package management tool reduces effort compared to juggling multiple tools, yet even these tools are now starting to falter under the pressure of that combinatorial explosion.
Kubernetes apps are possible, but not yet simple
After encountering the limitations of Python package management, I decided to create packages on Kubernetes. Because the setup process was lengthy, I implemented it as a Tekton pipeline.
In this case, the pipeline was adapted for development from a version already known to work on the server, so it operated without any issues from the start.
The container images are isolated from the host OS, so in this setup I used a straightforward global installation with pip. Using the package managers provided by programming languages, such as pip, tends to minimize issues.
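As a minimal sketch of that approach (the image and package name are placeholders, not the actual pipeline), a Tekton Task step can install a tool globally with pip inside its own container:

```
# Sketch: a Tekton Task step that installs a Python tool globally.
# The image and package name are placeholders.
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: install-python-tool
spec:
  steps:
    - name: pip-install
      image: python:3.12
      script: |
        #!/usr/bin/env bash
        set -eux
        # A plain global install; it stays inside this step's container
        # and never touches the host's Python environment.
        pip install some-tool
        some-tool --help
```

In a real pipeline the result would typically be baked into an image so the installation only happens once, but even this ephemeral form leaves the host untouched.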
Moreover, as shown by the ability to reuse server-oriented images, container images continue to work seamlessly even when the hardware changes. Once you build an image, the traditional installation steps become completely unnecessary.
There’s a significant advantage in that you’re largely unaffected by changes to the host OS, such as during distribution upgrades.
The ability to reference directories on the host Linux is easily implemented with a simple hostPath volume, allowing the containers to be handled much like native Linux applications.
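A hedged sketch of that wiring, with placeholder paths:

```
# Sketch: exposing a directory on the host Linux to a container via hostPath.
# The host path and mount point are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-example
spec:
  restartPolicy: Never
  containers:
    - name: main
      image: ubuntu:24.04
      command: ["bash", "-c", "ls -l /work"]
      volumeMounts:
        - name: workdir
          mountPath: /work
  volumes:
    - name: workdir
      hostPath:
        path: /home/user/work   # directory on the host Linux
        type: Directory
```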
From the perspective of the completed package, it would not be incorrect to say that Kubernetes applications can already be run directly on Linux hosts.
However, as there are few tools to support the setup for using applications like desktop apps or CLIs, you will need to craft these setups yourself.
Additionally, attention is needed for scripts running inside the container.
Although they are simply bash scripts, they need to be packed within the YAML manifest.
This reflects a limitation of YAML, and it’s likely an issue that will be resolved through future extensions to the Kubernetes manifest format. However, at present, this area has not been a focus, and there has been no progress since its debut.
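With a plain Pod, for example, the script typically ends up embedded as a block scalar in the manifest, roughly like this (the script itself is a placeholder):

```
# Sketch: a bash script packed directly into a Pod manifest.
# Outside of tools like Tekton, this is usually done via command/args.
apiVersion: v1
kind: Pod
metadata:
  name: inline-script
spec:
  restartPolicy: Never
  containers:
    - name: main
      image: ubuntu:24.04
      command: ["bash", "-c"]
      args:
        - |
          set -eu
          echo "running a script embedded in the YAML manifest"
```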
When using applications similarly to Linux applications, there is often a need to specify command-line options. In this case, you will need to define variables within the scripts inside the container and find a way to inject them from the environment.
Tekton provides variable substitution, which can be specified from the CLI tool tkn. While this works, specifying lengthy parameters in tkn can be cumbersome, so I created a wrapper script, tkn-runner.
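The substitution itself looks roughly like the following sketch; the parameter name and the tool are placeholders, and the wrapper script itself is not reproduced here:

```
# Sketch: injecting command-line options into a container via a Tekton param.
# The parameter name and tool are placeholders.
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: run-tool
spec:
  params:
    - name: tool-args
      type: string
      default: "--help"
  steps:
    - name: run
      image: python:3.12
      script: |
        #!/usr/bin/env bash
        set -eux
        # $(params.tool-args) is substituted by Tekton before the step runs.
        some-tool $(params.tool-args)
```

A run can then be started with something like tkn task start run-tool -p tool-args="--verbose", which is exactly where long parameter lists start to become cumbersome.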
Moving forward, I hope to see the emergence of tools designed for directly interacting with containers from the host OS.
Since the container interface is unlikely to change significantly, the setups created now should remain usable in the future without much hassle.
The era of crafting silos
While containerization of applications has become the norm in data centers, the realm of desktop applications is still in development.
As evidenced by the introduction of the snap package manager in Ubuntu, there is a rational basis for containerizing client environments like PCs.
Each distribution has worked toward standardizing shared libraries, but in reality they have been unable to fully resolve inconsistencies between applications, making it necessary to use containers alongside them.
If customization is not required, it is still convenient to use the OS’s standard package manager.
However, if you need to craft your own environment, you can leverage the standardized processes of Kubernetes on a Linux desktop.