
Turbonomic Blog

#CLUS is done, but Turbonomic & Tetration work continues

Posted by Enlin Xu on Jul 6, 2017 8:37:43 AM

Last week’s Cisco Live show in Las Vegas was a great success. At the Turbonomic booth, we had one demo station continuously showing our latest Turbonomic & Cisco Tetration integration. In the demo, I showed how Turbonomic leverages Tetration to obtain flow details for endpoints and localizes network traffic by moving VMs from different UCS blades onto the same blade.

You can check out my demo below:

 

We had several great conversations, including some with Cisco, Travelport, SAP, and more. People were excited to see how Turbonomic uses Tetration telemetry and analytics to make real-time placement decisions that reduce network latency while also accounting for a workload’s compute and storage needs. As the modern datacenter transforms, with micro-service architectures at the top of the stack and hybrid cloud infrastructure at the bottom, an autonomic system with network-aware placement is essential.

How Does It Work?

Tetration provides real-time monitoring of the network; Turbonomic takes that information and makes real-time VM placement decisions that account for the east-west communication between VMs. Leveraging Tetration, Turbonomic collects real-time endpoint flow information and maps the flow matrix to the corresponding VMs discovered through the hypervisor.

The basic problem is that “chatty” VMs placed far away from each other will experience latency. With the metrics pulled from Tetration, Turbonomic recognizes which VMs communicate a lot. But what does it do about it?

If you’re familiar with Turbonomic you know about our economic abstraction of the data center and cloud environments. So how does this play out in the case of the network?

First, it’s important to also understand the relationships between the network, physical resources, and virtual resources. To do that, Turbonomic extends its supply chain with VPods and DPods.

  • VPod: a group of VMs that frequently communicate with one another, for example, the tiers of a multi-tier application
  • DPod: a group of physical resources that are closely located, for example, blades under the same top-of-rack (TOR) switch or in the same UCS domain
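To make the VPod idea concrete, grouping can be thought of as finding connected components in the flow graph that Tetration reports. The sketch below is purely illustrative: the function, threshold, and VM names are mine, not Turbonomic's implementation.

```python
from collections import defaultdict

def group_vpods(flow_matrix, threshold=1.0):
    """Group VMs into VPods: connected components over the flow graph,
    keeping only edges whose observed flow exceeds the threshold."""
    # Build an adjacency list of "chatty" VM pairs.
    adj = defaultdict(set)
    for (a, b), flow in flow_matrix.items():
        if flow >= threshold:
            adj[a].add(b)
            adj[b].add(a)
    # Walk connected components with a simple DFS.
    seen, vpods = set(), []
    for vm in adj:
        if vm in seen:
            continue
        stack, component = [vm], set()
        while stack:
            node = stack.pop()
            if node in component:
                continue
            component.add(node)
            stack.extend(adj[node] - component)
        seen |= component
        vpods.append(sorted(component))
    return sorted(vpods)

# Hypothetical flows between VM pairs, as Tetration might report them.
flows = {
    ("web1", "app1"): 40.0,    # chatty multi-tier pair
    ("app1", "db1"): 25.0,
    ("web1", "backup1"): 0.2,  # occasional traffic: below threshold
}
print(group_vpods(flows))  # → [['app1', 'db1', 'web1']]
```

The three application tiers end up in one VPod, while the rarely used backup link does not pull `backup1` into the group.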

 

Below, VPods and DPods appear in the Turbonomic supply chain.

VPods and DPods in the Turbonomic supply chain

 

It’s also important to account for the resources being used. Our economic abstraction gives a virtual price to resources based on their utilization levels: the higher the utilization, the greater the demand, and the greater the price. The network is no different; flow is simply another commodity in the data center market. But the distance between communicating VMs has a direct impact on network latency, so we must account for that as well. Level 0 communication within a host has less latency than Level 1 communication across blades within the same top-of-rack switch. Likewise, Level 1 communication has less latency than Level 2 communication across switches or UCS domains.

Turbonomic captures these differences in latency using price.

  • Level 0—Intra-host: Flow between chatty VMs on the same host has a basic price, enabling workloads to communicate (buy flow) as needed, at a price determined by utilization.
  • Level 1—Intra-DPod: Flow between chatty VMs in the same DPod but on different blades is more expensive. If two VMs communicate frequently across blades, it’s preferable for them to move to the same blade and pay less for flow. The caveat is that the prices for other resources (storage, CPU, memory) are also factors and are simultaneously taken into account.
  • Level 2—Cross-DPod: Flow between chatty VMs in different DPods is the most expensive. The same concepts described above apply.

Pricing flow in this way allows VMs to buy the network throughput they require while accounting for the latency that distance creates. Remember, workloads are always looking to get the best price for the resources they buy—so if two VMs are talking a lot, they don’t want to pay the most expensive price for flow.
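The tiered pricing above can be sketched in a few lines. The formula and multipliers below are hypothetical illustrations of the idea (price rising steeply with utilization, each hop level costing more), not Turbonomic's actual pricing engine.

```python
def flow_price(utilization, level):
    """Toy flow-pricing sketch (hypothetical constants): price rises
    with link utilization, and each network "hop" level multiplies
    the base price."""
    LEVEL_MULTIPLIER = {0: 1.0,   # Level 0: intra-host
                        1: 4.0,   # Level 1: intra-DPod, across blades
                        2: 16.0}  # Level 2: cross-DPod
    # Cost grows steeply as the link approaches saturation.
    base = 1.0 / (1.0 - utilization) ** 2
    return base * LEVEL_MULTIPLIER[level]

# Two chatty VMs on a half-utilized link: landing on the same host
# (Level 0) is far cheaper than talking across DPods (Level 2).
print(flow_price(0.5, 0))  # intra-host
print(flow_price(0.5, 2))  # cross-DPod, 16x more expensive
```

Because the price climbs sharply toward full utilization, congested paths become expensive well before they saturate, which is what nudges chatty VMs closer together.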

What does it do?

This abstraction has powerful implications for our customers’ workload management: real-time, automatable, network-aware placement decisions that keep up with today’s dynamic environments. Software makes the decisions so you don’t have to. Turbonomic can:

  • Move a VM to improve network latency
  • Move a VPod across DPods
  • Provision a new rack when contention or capacity is the issue

Below you can see two VMs that make up a VPod. tomcat2-ubuntu is seeing some latency.

vpod tomcat latency

Clicking into tomcat2-ubuntu (below), we see that flow utilization is 100% in the DPod. This VM is communicating with another VM on another host, and the network is congested.

100% utilization tomcat dpod

What does Turbonomic do about it? To eliminate congestion, it recommends moving tomcat2-ubuntu from host 192.168.136.22 to host 192.168.136.21. You can execute this action just by selecting it and clicking “Apply.” As customers become more comfortable with Turbonomic, these kinds of actions can be automated in real time by the platform.

tomcat2-ubuntu move vm to host
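Conceptually, the move decision weighs the VM's total bill on each candidate host across all the commodities it buys. A toy sketch follows; the demands and price quotes are made up for illustration (only the host addresses come from the example above).

```python
def cheapest_host(demands, price_quotes):
    """Pick the host with the lowest total bill for a VM's demands.
    demands: {commodity: amount the VM consumes}
    price_quotes: {host: {commodity: unit price}} (hypothetical values)."""
    def bill(host):
        return sum(demands[c] * price_quotes[host][c] for c in demands)
    return min(price_quotes, key=bill)

# Hypothetical demands for tomcat2-ubuntu and quotes from two hosts.
# Flow on .21 is cheap because the VM's chatty peer already lives there.
demands = {"cpu": 2.0, "mem": 4.0, "flow": 10.0}
quotes = {
    "192.168.136.22": {"cpu": 1.0, "mem": 0.5, "flow": 9.0},  # congested
    "192.168.136.21": {"cpu": 1.2, "mem": 0.6, "flow": 1.0},
}
print(cheapest_host(demands, quotes))  # → 192.168.136.21
```

Even though CPU and memory are slightly pricier on 192.168.136.21, the cheap flow dominates the bill, so the move wins overall—mirroring the caveat above that all resources are weighed simultaneously.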

What’s ahead?

What happened in Vegas is not going to stay in Vegas. We are going to work closely with the Cisco Tetration team and integrate with Tetration at a deeper level, including support for Tetration Application Dependency Mapping (ADM) as well as Tetration sensors for containers. Understanding the ADM from Tetration helps our platform make decisions for a group of dependent endpoints, and supporting Tetration sensors for containers can help us make network-aware placement decisions for micro-services running in containers. Stay tuned!
