
Turbonomic Blog

Intelligent Workload Automation for Kubernetes with Turbonomic 6.3

Posted by Eric Wright on Feb 28, 2019 11:00:00 AM

Containerized workloads are rapidly gaining popularity as a way to deliver and manage infrastructure for cloud-native applications and microservices architectures, which are themselves increasingly common approaches to building applications.

With our latest software release (v6.3), Turbonomic has enhanced our Kubernetes integration to enable customers to realize the performance and consolidation benefits of intelligent workload automation in their Kubernetes deployments.

Turbonomic’s management of performance, compliance, and cost tradeoffs is critical to achieving production-ready Kubernetes. These new platform enhancements identify opportunities to consolidate Kubernetes workloads while continuing to assure performance and maintain compliance, and they apply both to real-time optimization and to plan simulations.


Extending the Shared Value of Kubernetes and Turbonomic

Your needs are continuously changing along with your infrastructure choices. Adopting Kubernetes powered by Turbonomic means getting the same outcomes: increased performance, more efficient use of your infrastructure, and policy compliance for your workloads. The value goes far beyond the constraints that Kubernetes can understand, because Turbonomic works across the full stack, above and below the Kubernetes infrastructure layers.

Having application-aware analytics that extend down into the physical, virtual, or cloud-based node infrastructure means Turbonomic handles important use cases that are not part of Kubernetes' own capabilities, such as:

  • When do you scale the Kubernetes node infrastructure? 
  • What happens to active pods when node health and performance changes?
  • When do you scale containers up or out?
  • What are the right cloud instances to run for Kubernetes and your workloads?
  • Is my managed Kubernetes service scaled correctly and responsibly for performance and cost?

Turbonomic automatically answers these questions right out of the box. We are developing these capabilities both internally and through upstream community contributions to Kubernetes, helping grow adoption and unlocking the true elasticity of Kubernetes in a responsible, application-performance-focused way.
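To make the first question above concrete, deciding when to scale node infrastructure ultimately comes down to comparing aggregate pod resource requests against the cluster's allocatable capacity. The Python sketch below illustrates the basic shape of such a check; the function name, resource keys, and 20% headroom threshold are illustrative assumptions for this post, not Turbonomic's actual analytics.

```python
# Hypothetical sketch: decide whether Kubernetes node infrastructure
# should scale out by comparing total pod resource requests against
# allocatable capacity. All names and thresholds are illustrative.

def needs_node_scale_out(node_allocatable, pod_requests, headroom=0.2):
    """Return True if total pod requests leave less than `headroom`
    (as a fraction of capacity) free on any resource dimension."""
    for resource, capacity in node_allocatable.items():
        requested = sum(p.get(resource, 0) for p in pod_requests)
        if requested > capacity * (1 - headroom):
            return True
    return False

# Example: a cluster with 8 CPU cores and 32 GiB allocatable,
# running pods that together request 7 cores and 20 GiB.
cluster = {"cpu": 8.0, "memory_gib": 32.0}
pods = [
    {"cpu": 3.0, "memory_gib": 8.0},
    {"cpu": 2.5, "memory_gib": 6.0},
    {"cpu": 1.5, "memory_gib": 6.0},
]
print(needs_node_scale_out(cluster, pods))  # CPU requests exceed 80% of capacity -> True
```

A real analytics engine weighs far more than static requests (actual utilization, placement and availability policies, cost of the underlying nodes), which is the full-stack view described above.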

Hybrid Infrastructure with Reduced Risk 

Choosing where to run Kubernetes, and where to run your workloads atop your Kubernetes environment, is a decision Turbonomic is already making for customers today. By delivering better workload performance on less infrastructure while maintaining your placement and availability policies, Turbonomic unlocks a new elastic Kubernetes infrastructure opportunity.

Whether you're running on bare metal, on virtual machines on-premises, on public cloud, or in any hybrid implementation, we have you covered. Most importantly, you get this with reduced risk of performance loss, runaway costs, or policy violations for your applications.

We’ll be digging into these capabilities and more during the Accelerate and Optimize Your Hybrid and Public Cloud Adoption Strategy webinar on March 13, 2019. Join us to see these new capabilities in action, with live demos and an interactive forum with Turbonomic experts.
