
Eric Wright

Containers in Virtual Machines - Training Wheels for Next Generation Applications?

Containers have clearly become a real and popular option for application deployment in both current and next-generation data centers, including the cloud. One of the interesting debates is whether they should be deployed directly onto bare metal (or perhaps, more appropriately, onto a bare hypervisor). This spurs the debate around whether deploying containers on a VM is a proper way to embrace the technology.

Running Containers on Bare-Metal is Like...

If you've ever heard Joshua McKenty speak at an event, you may have heard him say that running a container on bare metal is "like having unprotected sex with the internet". In other words, it is a very unsafe practice under current conditions. It has been noted that Google and others want the additional isolation that can be provided at the VM level.

There is always work underway to address security challenges in the container ecosystem, and if anyone wanted to be the next unicorn startup, container security may be the right target to get you there. This is just one of the challenges facing the container ecosystem at the moment. Beyond the security challenge, VM-hosted containers offer a real advantage for a couple of good reasons.

Moving Target - A Container-in-a-VM Use Case

Imagine deploying your application inside a container as a moderate shift from running traditional applications directly on top of VMs. While this may seem counter to the reason for containers in the first place, it offers a way to bring a standalone, non-scalable application into a container deployment. Providing a containerized version of what was once a VM-centric application lets developers test the waters with the features and functionality of containers.
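As a sketch of what that first step might look like, a legacy VM-centric app could be wrapped in a minimal Dockerfile. The base image, paths, port, and start command below are illustrative assumptions, not a specific recipe:

```dockerfile
# Hypothetical example: containerizing a legacy VM-centric app as-is.
# Base image, paths, port, and command are illustrative assumptions.
FROM ubuntu:14.04

# Install the same runtime the VM image carried
RUN apt-get update && apt-get install -y python

# Copy the application exactly as it lived on the VM
COPY ./legacy-app /opt/legacy-app

EXPOSE 8080
CMD ["python", "/opt/legacy-app/server.py"]
```

The point is not an ideal microservice layout; the app stays monolithic, but it now has an image that can be built, versioned, and deployed through container tooling.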

Why containers over VMs in a single-container model? How about:

  • API accessible
  • Platform agnostic (at the underlying infrastructure layers)
  • Stepping stone to multi-container deployment / microservices architecture
  • Application lifecycle management

This is why I referred to it as training wheels. Each of the four things I've highlighted is the beginning of the move towards better application architecture and development practices, all while preserving the existing architecture during the initial stage of adoption.

In the same way that we have looked at the forward thinking in these four items, we also give the more rigid application model the ability to continue during the transition. Many argue that this is a step backwards in the adoption of agile practices and that it should be an all-or-nothing approach, but that stance can leave organizations stuck in analysis paralysis and fearful of making change. Using a container-in-a-VM approach lets those first steps be gentler and more iterative.

Containers on Bare-Metal are Probably Under-provisioned

Imagine having a fresh new host and running containers directly on it, but with limited understanding of the performance of the system itself. That was the case with many early virtualization implementations. We saw very limited adoption as we got going, but it was considered better than the grossly underutilized hardware of the traditional rack-and-stack model, where one application ran on one server, and that was all.

As containers offer the ability to pack the hardware layer more densely, newcomers to the container ecosystem may also underutilize their hardware out of fear of putting too many proverbial eggs in one basket. Running a hypervisor on the hardware allows workloads such as virtual machines to be co-located. By enabling a shared infrastructure, you gain the isolation benefits of containers on VMs, as well as the flexibility to run hybrid workloads.

Remember that the VMs can be extremely small for container implementations. You can run RancherOS, CoreOS, or a similar tiny OS to give you the containerization option, while the security and networking teams maintain the more stable practices that have typically wrapped around virtual machine infrastructure.
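As a rough illustration of how small that footprint can be, a CoreOS host of that era could be brought up as a container-ready VM with a few lines of cloud-config. The hostname below is an illustrative assumption, and newer distributions have since moved to other provisioning formats:

```yaml
#cloud-config
# Hypothetical minimal CoreOS cloud-config for a container host VM.
# The hostname is illustrative; later CoreOS derivatives use Ignition instead.
hostname: container-host-01
coreos:
  units:
    # Start the Docker daemon on boot so containers can run immediately
    - name: docker.service
      command: start
```

Everything else an application needs ships inside the container images, so the VM itself stays close to disposable.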

My $0.02

Containers in a VM are pretty cool, both for nerd-level enjoyment and for operational practices that allow container adoption to begin and grow. If being able to run containers in a VM is what lets you take that initial step of adopting them, that is the best reason in my mind.

Using VMs as the underlying layer does sacrifice some of the speed of booting and provisioning infrastructure, but if the choice is between taking the first steps and taking no steps at all, VMs running containers is a mighty good start, IMHO.

Bring on the containers, folks. Let the application freedom begin!
