Kubernetes on Bare Metal vs. VMs: It's Not Just Performance

Too often, the debate about running Kubernetes on bare metal versus virtual machines is overly simplistic. There's more to it than a trade-off between the relative ease of management you get with VMs and the performance advantage of bare metal. (The latter, in fact, isn't huge nowadays, as I'll explain below.)

I'm going to walk through the considerations at play. As you will see, while I tend to believe that Kubernetes on bare metal is the way to go for most use cases, there's no simple answer.

Off the bat, let's address the performance vs. ease-of-use question.

Andy Holtzmann

Andy is a site reliability engineer at Equinix and has been running Kubernetes on bare metal since v1.9.3 (2018). He has run production environments with up to 55 bare-metal clusters, orchestrated Kubernetes installs on Ubuntu, CentOS and Flatcar Linux, and recently helped accelerate the bring-up of Equinix Metal's Kubernetes platform to under one hour per new greenfield facility. Andy joined Equinix after working in senior software engineer roles at Twilio and SendGrid.

Yes, VMs are easier to provision and manage, at least in some ways. You don't need to be concerned with the details of the underlying server hardware when you can set up nodes as VMs and orchestrate them using the VM vendor's orchestration tooling. You also get to leverage things like golden images to simplify VM provisioning.

On the other hand, if you take the hypervisor out of the picture, you don't spend hardware resources running virtualization software or guest operating systems. All of your physical CPU and memory can be allocated to business workloads.

But it's important not to overstate this performance advantage. Modern hypervisors are pretty efficient. VMware, for example, reports hypervisor overhead of just 2% compared to bare metal. You have to add the cost of running guest operating systems on top of that number, but still, the raw performance difference between VMs and bare metal can be negligible, at least when you're not trying to squeeze every last bit of compute power from your infrastructure. (There are cases where that 2% difference is meaningful.)

When all is said and done, virtualization is going to reduce the total resources available to your pods by roughly 10% to 20%.
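
To make that range concrete, here is a rough, back-of-the-envelope sketch. The server size, VM count and per-guest overhead figures below are illustrative assumptions, not measurements, and the real numbers will vary with your hypervisor and guest images.

```python
# Back-of-the-envelope sketch of where that 10% to 20% can go.
# Every number below is an illustrative assumption, not a benchmark.
physical_memory_gib = 128        # memory on the physical server
hypervisor_overhead = 0.02       # ~2% hypervisor overhead (the vendor-reported figure above)
vm_count = 8                     # worker VMs carved out of the server
guest_os_gib_per_vm = 2.0        # guest kernel, init system and agents per VM

left_for_pods = physical_memory_gib * (1 - hypervisor_overhead) - vm_count * guest_os_gib_per_vm
print(f"memory left for pods: {left_for_pods:.1f} GiB "
      f"({1 - left_for_pods / physical_memory_gib:.0%} lost to virtualization)")
```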

Now, let's get into all the other considerations for running Kubernetes on bare metal versus Kubernetes on VMs. First, the orchestration element. When you run your nodes as VMs, you need to orchestrate those VMs in addition to orchestrating your containers. As a result, a VM-based Kubernetes cluster has two independent orchestration layers to manage.

Obviously, each layer is orchestrating a different thing, so, in theory, this shouldn't cause problems. In practice, it often does. For example, imagine you have a failed node, and both the VM-level orchestrator and the Kubernetes orchestrator are trying to recover from the failure at the same time. This can lead to your orchestrators working at cross purposes, because the VM orchestrator is trying to stand up a server that crashed while Kubernetes is trying to move pods to different nodes.

Similarly, if Kubernetes reports that a node has failed but that node is a VM, you have to figure out whether the VM actually failed or the VM orchestrator simply removed it for some reason. This adds operational complexity, as you have more variables to work through.
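
Whichever layer turns out to be at fault, the triage usually starts from what the Kubernetes API itself reports about the node. A minimal sketch using the official Kubernetes Python client (assuming a working kubeconfig) might look like this:

```python
# Minimal sketch: print each node's Ready condition as a first triage step.
# Assumes a working kubeconfig and the official `kubernetes` Python client.
from kubernetes import client, config

config.load_kube_config()          # or config.load_incluster_config() when running in a pod
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    ready = next((c for c in node.status.conditions if c.type == "Ready"), None)
    status = ready.status if ready else "Unknown"
    reason = ready.reason if ready else ""
    print(f"{node.metadata.name}: Ready={status} {reason}")
```

On bare metal, a NotReady result points straight at the machine or its kubelet; on VMs, it is only the start of the investigation described above.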

You don't have these issues with Kubernetes on bare metal server nodes. Your nodes are either fully up or they're not, and there are no orchestrators competing for the node's attention.

Another key advantage of running Kubernetes on bare metal is that you always know exactly what you're getting in a node. You have full visibility into the physical state of the hardware. For example, you can use diagnostic tools like SMART to assess the health of hard disks.
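
A node-health script on bare metal can shell out to smartctl directly. The sketch below assumes the smartmontools package is installed and that /dev/sda is the disk you care about; adjust both for your environment.

```python
# Sketch: ask smartctl for an overall disk health verdict.
# Assumes smartmontools is installed and /dev/sda is the right device path.
import subprocess

result = subprocess.run(
    ["smartctl", "-H", "/dev/sda"],   # -H prints the overall SMART health assessment
    capture_output=True,
    text=True,
    check=False,                      # smartctl uses non-zero exit codes to flag problems
)
print(result.stdout)
```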

VMs don't give you much insight into the physical infrastructure upon which your Kubernetes clusters depend. You have no idea how old the disk drives are, or even how much physical memory or how many CPU cores the physical servers have. You're only aware of the VMs' virtual resources. This makes it harder to troubleshoot issues, contributing again to operational complexity.

For related reasons, bare metal takes the cake when it comes to capacity planning and rightsizing.

There are a fair number of nuances to consider on this front. Bare metal and virtualized infrastructure support capacity planning differently, and there are various tools and strategies for rightsizing everything.

But at the end of the day, it's easier to get things exactly right when planning bare metal capacity. The reason is simple enough: With bare metal, you can manage resource allocation at the pod level using cgroups in a hyper-efficient, hyper-reliable way. Using tools like the Kubernetes Vertical Pod Autoscaler, you can divvy up resources down to the millicore based on the total resources available on each physical server.
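
As a hedged sketch of what that looks like in practice, the snippet below uses the official Kubernetes Python client to read each node's allocatable CPU and memory, then expresses a container's requests at millicore and MiB granularity. The container name, image and request values are placeholders, not recommendations.

```python
# Sketch: inspect node allocatable resources, then size container requests
# in millicores/MiB. The kubelet enforces these values via cgroups on the node.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    alloc = node.status.allocatable
    print(f"{node.metadata.name}: cpu={alloc['cpu']} memory={alloc['memory']}")

# Illustrative container spec with millicore-level requests and limits.
container = client.V1Container(
    name="web",
    image="nginx:1.25",
    resources=client.V1ResourceRequirements(
        requests={"cpu": "250m", "memory": "256Mi"},
        limits={"cpu": "500m", "memory": "512Mi"},
    ),
)
```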

That's a luxury you don't get with VMs. Instead, you get a much cruder level of capacity planning, because the resources that can be allocated to pods are contingent on the resource allocations you make to the VMs. You can still use cgroups, of course, but you'll be doing it within a VM that doesn't know what resources exist on the underlying server. It only knows what it has been allocated.

You end up having to oversize your VMs to account for unpredictable changes in workload demand. As a result, your pods dont use resources as efficiently, and a fair amount of the resources on your physical server will likely end up sitting idle much of the time.

Another factor that should influence your decision to run Kubernetes on bare metal versus VMs is network performance. It's a complex topic, but essentially, bare metal means less abstraction of the network, which leads to better network performance.

To dig a level deeper, consider that with virtual nodes you have two separate kernel networking stacks per node: one in the guest VM and one on the physical host. There are various techniques for negotiating traffic between the two stacks (packet encapsulation, NAT and so on), and some are more efficient than others (hint: NAT is not efficient at all). But at the end of the day, each of them exacts some kind of performance hit. They also add a great deal of complexity to network management and observability.
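
To put a rough number on the encapsulation part of that hit, here is a small illustrative calculation. VXLAN is used only as one common example of an overlay, and the figures are assumptions rather than a statement about any particular setup; per-packet CPU cost comes on top of this.

```python
# Rough illustration of per-packet encapsulation overhead, using VXLAN as an example.
mtu = 1500            # typical Ethernet MTU, in bytes
vxlan_overhead = 50   # outer IP + UDP + VXLAN headers plus the inner Ethernet header

inner_payload = mtu - vxlan_overhead
print(f"payload per MTU-sized packet: {inner_payload} bytes "
      f"({vxlan_overhead / mtu:.1%} of every packet spent on encapsulation headers)")
```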

Running on bare metal, where you have just one networking stack to worry about, you don't waste resources moving packets between physical and virtual machines, and there are fewer variables to sort through when managing or optimizing the network.

Granted, managing the various networks that exist within Kubernetes (which partially depends on the container network interface, or CNI, you use) does add some overhead. But it's minor compared to the overhead that comes with full-on virtualization.

As I've already implied, the decision between Kubernetes on bare metal and Kubernetes on VMs affects the engineers who manage your clusters.

Put simply, bare metal makes operations, and hence your engineers' lives, simpler in most ways. Beyond the fact that there are fewer layers and moving parts to worry about, a bare-metal environment reduces the constraints under which your team works. They don't have to remember that VMs only support X, Y and Z configurations, or puzzle over whether a particular version of libvirt supports a feature they need.

Instead, they simply deploy the operating system and packages and get to work. It's easier to set up a cluster, and it's much easier to manage operations for it over the long term when you're dealing solely with bare metal.

Let me make clear that I do believe there are situations where running Kubernetes on VMs makes sense.

One scenario is when you're setting up small-scale staging environments, where performance optimization is not super important. Getting the most from every millicore is not usually a priority for this type of use case.

Another situation is when you work in an organization that is already heavily wedded to virtualized infrastructure or to particular virtualization vendors. In this case, running nodes as VMs simply poses less of a bureaucratic headache. Or maybe there are logistical challenges to acquiring and setting up bare metal servers. If you can self-service some VMs in a few minutes, versus waiting months for physical servers, just use the VMs if it suits your timeline better. Your organization may also be wedded to a managed Kubernetes platform offered by a cloud provider that only runs containers on VMs, though that constraint is loosening: Anthos, Google Cloud's managed hybrid multicloud Kubernetes offering, supports bare-metal deployments, and so does Red Hat's OpenShift. AWS's EKS Anywhere bare metal support is coming later this year.

In general, you should never let a dependency on VMs stop you from using Kubernetes. It's better to take advantage of cloud native technology than to be stuck in the past because you can't have the optimal infrastructure.

VMs clearly have a place in many Kubernetes clusters, and that will probably never change. But when it comes to questions like performance optimization, streamlining capacity management or reducing operational complexity, Kubernetes on bare metal comes out ahead.
