
Eric Wright

We are About to Misuse Containers, but it’s OK

The dawn of virtualization introduced traditional rack-and-stack systems administrators to the idea of finally being able to co-locate workloads on a common hardware platform with proper logical segmentation. It meant we could take over-provisioned hardware that was often wildly under-utilized and share its CPU, memory, storage, and network capacity in a pooled approach.

It seemed like a real shift away from the one-application-per-server style we were used to. So, how did it all go?

Using New Tools to Do Old Things

The first foray into virtualization for many people was to create large virtual machines, with much of the shared architecture running at only a 2:1 or 3:1 level of over-provisioning. This was the safe step, except that we were not really leveraging virtualization in a way that delivered its real value.

We often thick-provisioned storage. We used virtual machine designs that looked almost identical to their physical server counterparts, but on larger servers that could be carved into smaller, yet still large, virtual instances. This was the training-wheels approach to virtualization as we built more trust in the virtualization platform.

Now we were getting confident. We used P2V (Physical to Virtual) tools to lift existing physical application environments and deploy them as multi-CPU, large-RAM, large-storage virtual machines. This is what we call the lift-and-shift approach.

Between lift-and-shift and the oversized virtual machines, we were misusing the virtualization platform. But it was OK, because it built the confidence that let us rethink how we create and consume virtual architecture as we learned in practice.

Enter the Containers

Containers are designed to be simple, thin, lightweight alternatives to virtual machines. A container allows for more rapid delivery of application-oriented infrastructure: faster boot times, more portability, and, as a result, more agility in delivering IT.

This is where the misuse begins. Just as P2V and oversized virtual machines became our way to “change the way we deliver IT”, the same terrible habits will now carry over to containers. I’m not saying that everyone will do this, but a large number of traditional virtualization shops are finding themselves still over-provisioning for “safety”. What exactly is safe about that approach?!

Small VMs are Really Big Containers

Average VMs are large. Average containers are tiny in comparison. This is by design. Workloads that require lots of supporting applications, runtimes, libraries, data, and more are really meant to live in VMs as more monolithic deployments. That can even be a VM that is 5 GB in size.

What we are seeing in production implementations is that VMs are most often well over 10 GB in size and have a minimum of 1 GB of RAM. In comparison, many of the production container implementations that I have witnessed use less than 768 MB of RAM and have ephemeral volumes ranging from 100 MB to 1 GB.

You can see how different the approach to sizing is when the container consumer/creator is thinking in terms of horizontally scalable, microservices architectures: a smaller, thinner footprint, and more of them.
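As a rough illustration of that mindset, here is a minimal sketch using the Docker SDK for Python; the image name, replica count, and memory cap are illustrative assumptions, not recommendations:

```python
import docker

# Connect to the local Docker daemon.
client = docker.from_env()

# Instead of one large, thickly provisioned instance, run several small,
# memory-capped replicas of the same stateless service.
replicas = []
for i in range(3):
    container = client.containers.run(
        "nginx:alpine",      # assumed thin base image
        name=f"web-{i}",
        detach=True,
        mem_limit="256m",    # small, explicit footprint per replica
    )
    replicas.append(container)

print([c.name for c in replicas])
```

Scaling out, or tearing the whole set down, is then just another loop, which is exactly what the thinner footprint is for.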

Live Long and Prosper: Vulcans Love VMs

Another challenge is that the current architectures that serve up containerized infrastructure are geared towards supporting application workloads that can withstand partial outages. In other words, containers are meant to have shorter life spans, and they can (and should) be able to drop without impacting overall application availability.

Containers are designed with horizontally scalable applications in mind. Long-running applications are not really suited to running in containers. Why not? Let’s look at a few of the benefits that containers were meant to give us:

  • Thinner base system
  • Rapid delivery
  • Rapid reboots and recovery
  • Programmability for orchestration and deployment

Of those four points, none of them provide resiliency at the infrastructure layer. They really assume resiliency at the application layer because they are all designed to be able to ramp up or down quickly, and to do so programmatically.
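To make that last point concrete, here is a hedged sketch of the drop-and-replace pattern using the Docker SDK for Python; the container and image names are illustrative assumptions. The idea is that a failed container is simply replaced, quickly and programmatically, rather than the instance itself being made highly available:

```python
import docker

client = docker.from_env()

def replace_container(name, image):
    """Tear down a container by name (if present) and start a fresh one.

    This leans on resiliency at the application layer: recovery is a fast,
    scripted replacement rather than infrastructure-level failover.
    """
    try:
        old = client.containers.get(name)
        old.stop()
        old.remove()
    except docker.errors.NotFound:
        pass  # nothing to replace; just start a new instance

    return client.containers.run(image, name=name, detach=True)

# Example: recreate an assumed stateless worker in seconds.
replace_container("worker-1", "nginx:alpine")
```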

In other words, long-running applications may be better suited to VM infrastructure running alongside container infrastructure, ensuring that each application is given the resources and architecture that its individual needs require.

Containers are Coming

One way or another, container infrastructure will be a part of your IT portfolio. It may already be part of your environment in a small way.

Will they be production tomorrow? Yes and no. Will they be used for entire application lifecycle management? Yes and no. Will they replace VM infrastructure? No.

Will we bring bad VM-oriented habits forward into container infrastructure design and utilization? Yes. But that’s OK. We will learn and evolve, just like we did with virtualization.

Image sources: https://memegenerator.net/instance/69096326