Kubernetes “won” and has become the most widely used container orchestration platform for deploying, scaling, and managing containers. The offspring of Borg has grown up and is in production. Part of the appeal of Kubernetes is that it can be deployed on many different platforms, on any infrastructure that provides compute, storage, and network resources: on-prem or public cloud; build your own or choose a commercial offering like OpenShift, or the growing Kubernetes-as-a-Service market that includes GKE, EKS, and AKS (more on that later); on VMs or on bare metal. The choice is yours.
If bare metal is the way you decided to go, are there differences in the way you should manage the workload in production? Do the trade-offs between assuring performance, running efficiently, and maintaining compliance with business and technical policies change when the infrastructure is bare metal? Can you still leverage automation to manage fluctuating demand?
Organizations investing in Kubernetes on bare metal still need to continuously assure the performance of their containerized applications, and they can still leverage automation. The same trade-offs that exist in virtualized environments exist for container platforms on bare metal: workloads still contend for what they need to perform from a shared pool of resources. And while bare metal can deliver real performance benefits, the higher density of pods per node means even more points to manage. The good news is that Turbonomic’s analytics and automation deliver the same value for bare metal implementations: a self-managing Kubernetes container platform.
Why bare metal?
It depends. For some it’s about better performance. Advances in technology have made it possible to containerize even extremely low-latency applications without performance degradation, and data-intensive workloads like big data and machine learning require levels of computing power easily achieved with bare metal. Cloud providers like IBM Cloud have announced managed Kubernetes services on bare metal. Bare metal implementations can also offer great scale and node density: we’ve heard customers report running up to 500 pods per physical node, where the limits they hit are usually the physics of the network. For other organizations, it’s about cutting costs. They’ve determined that the labor costs of managing container systems on bare metal are the same or less… and they can avoid the additional licensing that comes with hypervisor-based compute.
Whatever the reason, these applications need to be dynamically managed to truly leverage the elasticity and scalability that Kubernetes offers.
It’s still difficult…
Navigating the trade-offs that exist in these systems is beyond human capacity to manage. That’s why Turbonomic exists, and the core capabilities it offers for Kubernetes work on bare metal as well as on any other platform (VMs, managed services, etc.). To effectively manage your Kubernetes deployment, there are a number of decisions that operators have to continuously make, and they are common even to bare metal:
- How do I configure my containers and pods? How much CPU? How much memory?
- When do I scale out? By how much?
- When should I scale back? By how much?
- How do I assure that newly deployed pods will have the resources they need as demand changes?
- When do I need more nodes? Do I have too many nodes?
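In Kubernetes terms, most of these knobs live in the pod spec and, for scale-out, in a HorizontalPodAutoscaler. A minimal sketch of where each decision gets encoded, using hypothetical names and values:

```yaml
# Hypothetical Deployment: the CPU/memory answers become requests and
# limits, and the scale-out answer becomes a replica count or an
# autoscaler target.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                    # hypothetical name
spec:
  replicas: 3                  # "by how much" when scaling out
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: example/web:1.0 # hypothetical image
        resources:
          requests:            # what the scheduler reserves per pod
            cpu: 500m
            memory: 256Mi
          limits:              # the ceiling before throttling / OOM-kill
            cpu: "1"
            memory: 512Mi
---
# Hypothetical autoscaler: answers "when do I scale out or back?"
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 3
  maxReplicas: 20
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```

Every number above is a guess an operator has to make up front, and each one interacts with the others, which is exactly why making these calls continuously is so hard.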
Turbonomic handles these decisions for you, making real-time complex decisions that people can’t—and, to be honest, would rather not—do. All this support, regardless of how you deployed Kubernetes!
Rescheduling: Kubernetes places a newly deployed pod on whatever node has the resources it needs, but once placed, the pod stays put. Under fluctuating demand, you need to avoid the congestion that leads to resource starvation and the fragmentation that strands capacity, without waiting for a pod to die. Fragmentation can also block new pod deployments because no single node has enough room, and spinning up a new bare metal node takes time. Turbonomic identifies which pods should be rescheduled to which nodes, and can execute those moves, to prevent congestion, distribute workload to avoid fragmentation, and better utilize the bare metal nodes.
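Kubernetes itself offers only placement-time mitigation here. For example, a topology spread constraint (shown below as a hypothetical fragment of a pod template) asks the scheduler to keep replicas evenly distributed, but it does not move pods that are already running:

```yaml
# Hypothetical fragment of a pod template (spec.template.spec in a
# Deployment). The scheduler tries to keep the replica count per node
# within maxSkew at placement time only; running pods are never moved.
topologySpreadConstraints:
- maxSkew: 1
  topologyKey: kubernetes.io/hostname
  whenUnsatisfiable: ScheduleAnyway
  labelSelector:
    matchLabels:
      app: web               # hypothetical label
```

Because placement is decided once and demand keeps changing, continuous rescheduling is what keeps the cluster balanced after that initial decision.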
Cluster Management: How do you know when you need more nodes? How do you know if you’ve overprovisioned? Turbonomic understands application demand and the supply of resources available in Kubernetes, and recommends when to horizontally scale nodes. Turbonomic also allows you to simulate changes, so you can plan cluster expansion to accommodate more workload.
Optimizing Container Configurations: When you dedicate a certain amount of CPU and memory to a pod, that decision is replicated every time you scale out. What if you got it wrong? Most folks have to guess and retry to home in on the right configuration. Turbonomic does it for you.
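To make the multiplication concrete, here is a hypothetical over-provisioned template. Whatever you request per pod is reserved once per replica, whether or not the application ever uses it:

```yaml
# Hypothetical Deployment fragment: each replica reserves 2 CPUs and 4Gi.
# At replicas: 50, the cluster must set aside 100 CPUs and 200Gi of
# memory for this one workload, so a wrong guess is multiplied 50 times.
spec:
  replicas: 50
  template:
    spec:
      containers:
      - name: api                # hypothetical container
        image: example/api:1.0   # hypothetical image
        resources:
          requests:
            cpu: "2"
            memory: 4Gi
```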
Lastly, visibility. I mention this last because we’re all about automating things that people shouldn’t have to do. But a picture paints a thousand words. Our customers love the simple fact that they can see everything in their environment and how it’s all related. This supply chain view isn’t just about visibility, though: it’s how Turbonomic understands the interdependencies between applications, pods, and nodes, makes the right decisions, and automates the actions that follow.
The Bare Metal Bottom-Line
As bare metal environments scale, the same trade-offs, between assuring performance, running efficiently, and maintaining compliance with business and technical policies, become more complex. It takes software that understands application demand and the interdependencies in the Kubernetes infrastructure, and that makes decisions in real time. That’s Turbonomic.
Want to know more about Kubernetes?