Containers provide a lightweight alternative to virtual machines, isolating the application from the environment it runs in. Developers can install only what the application needs and nothing more, so everyone works with identical development environments and stacks. They can also develop directly in a container, which gives them a separate network stack and storage without the overhead of building and running a virtual machine.
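The "install only what you need" point is easiest to see in an image definition. As a sketch, a minimal Dockerfile for a hypothetical Python service (the file names and base image here are illustrative assumptions, not a prescription):

```dockerfile
# Hypothetical minimal image: only the language runtime and the
# app's declared dependencies are installed, nothing else.
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "app.py"]
```

Every developer building from this file gets the same stack, regardless of what is installed on their workstation.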
Additionally, container adoption is accelerating. In the 2019 State of Multicloud Report, over 60% of IT professionals said they are on their journey to containers and cloud native. Containers facilitate continuous integration and delivery processes and encourage stateless designs. Containers run their own init processes, filesystems, and network stacks, virtualized on top of a VM or bare-metal host OS. Here are some things to consider when deciding whether to run containers on virtualized infrastructure or on bare metal.
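The per-container init process, filesystem, and network stack are built on Linux namespaces, which container runtimes create for each container. On any Linux host you can see the namespace handles a process holds (this is a quick illustration of the primitive, not a container runtime command):

```shell
# Each entry is a namespace this process belongs to; a container
# runtime gives each container its own net, mnt, pid, etc.
ls /proc/self/ns
```

A container's listing points at different namespace objects than the host's, which is what makes its network and filesystem views independent.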
Running containers inside virtual machines on top of a hypervisor can add up to 10% overhead, depending on the virtualization technology used, though this matters only for some applications. For example, applications that are sensitive to network latency or that require high disk I/O will not perform well on virtualized infrastructure. (For more details, see the article "Kubernetes on Bare Metal: When Low Network Latency is Key.")
Most enterprises already have well-established operational expertise, processes, automation, and orchestration tools for managing virtualization, which makes it easier to adopt containers running on virtual machines. These enterprises may find it beneficial to initially deploy container orchestration on top of their virtual infrastructure, especially while it caters only to stateless or non-production applications.
Smaller-scope or departmental container orchestration clusters are also likely to be easier to deploy on existing virtual infrastructure than on dedicated bare-metal servers that would have to be purchased and operated for each initiative.
As adoption of containerized applications broadens, enterprises may find it beneficial to engineer dedicated persistent storage and networking services for container orchestration deployed on bare-metal infrastructure, and to develop automation tools and operational procedures specifically for bare-metal servers. This enables them to extend support to stateful applications as well as network- or I/O-sensitive ones. It may also be an opportunity to offer enterprise-wide multitenant container orchestration systems, justifying the investment in dedicated bare-metal servers.
IT architects and leaders need to consider the speed at which IT wants to offer container orchestration to various lines of business, the types of applications they want to support, and the pace at which they want to expand both the scale and the diversity of supported containerized applications across the enterprise. These considerations will lead an enterprise IT organization to offer container orchestration on top of existing virtual infrastructure, on existing public cloud infrastructure, on bare metal, or on some combination of these.
From the application developers' perspective, they get the same abstraction in any of these scenarios, which allows the IT organization to start with the easiest choice and later adjust or shift the underlying infrastructure without changing the developer experience. It is important to choose container platforms and management systems that can uniformly support on-premises virtualized infrastructure, public cloud, and bare-metal servers, preferably a combination of all of them.
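In Kubernetes, for instance, a mixed cluster can preserve that single abstraction while still steering latency- or I/O-sensitive workloads onto bare-metal nodes. A sketch, assuming the operator has labeled bare-metal nodes with a hypothetical `node-type=baremetal` label (label names are cluster-specific):

```yaml
# Hypothetical Pod spec: the nodeSelector pins this workload to
# bare-metal nodes; other Pods schedule onto VM-backed nodes as usual.
apiVersion: v1
kind: Pod
metadata:
  name: low-latency-service
spec:
  nodeSelector:
    node-type: baremetal
  containers:
  - name: app
    image: example.com/low-latency-service:1.0
```

Developers still write an ordinary Pod spec; where it lands is an infrastructure decision expressed in labels, not a change to the application.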
We expect that most enterprises will end up with hybrid and multi-cloud container orchestration systems, with some clusters running on virtual machines on-premises or in the public cloud and others running on bare-metal servers, especially in enterprises with a container-first application deployment strategy.