

Why Mitigating Storage and Network Latency Should Be Like Self-driving Automobiles

Posted by Bosheng Song on Jan 29, 2016 7:00:06 AM

In today’s Internet age, we implement new technology and build new products to achieve one goal: to make applications run faster, more fluid, and less “laggy”. However, even when we do have the capacity to power our applications, IT admins and developers often battle the elusive frustration of storage and network latency [for some reason compute gets a pass :-) ].

Now, imagine solving the storage and network latency problem the way we mitigate traffic jams in a city. Just as a car navigates the road system to get from point A to point B, a virtual machine must travel storage and network paths to reach the capacity its applications demand.

Watching some of the latest developments at this year’s CES, I was amazed by the computing capacity now being put into automobiles to enable the hundreds of thousands of decisions required to power a self-driving car and avoid congestion.

So can we model the problem of reducing latency to assure application performance the same way NVIDIA and Volvo are modeling the problem of relieving heavy road traffic?

Be Infrastructure Agnostic

Just as there are multiple routes to get from point A to point B, there are multiple ways to get a VM the resources it needs.

As average users, we do not care where the computing capacity that powers our applications comes from. Since our experience doesn’t depend on how the on-demand content reaches us, we should be equally agnostic about how storage and network capacity is supplied to those applications. Should we restrict virtual machines to certain datastores just because it’s easier to look at in vCenter? Should we create a fancy network topology just so we can strictly separate different types of traffic? When we make design choices like those in our virtual infrastructure, we give up the opportunity to let the virtual machines choose what is best for them.

Price the Routes


From a driver’s perspective, tolls suck. But what if we dynamically assigned a higher toll to more congested data routes? If we use software as an abstraction platform and give each VM a budget to spend, we can define an invisible abstraction layer that prices storage and network capacity based on IOPS and latency utilization. We are essentially creating a dynamic market of tolls in which the virtual machines go out and purchase data access from the datastores and the network topology. Again, virtual machines don’t care how they ultimately reach the on-demand data on the physical disk arrays as long as application performance is assured. So, as toll-paying consumers, they shouldn’t mind taking a longer “detour” through the virtual infrastructure when it gets them a better bargain, mitigating heavy traffic, reducing overall latency, and delivering the on-demand data faster.
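To make the toll analogy concrete, here is a minimal sketch of how such a market could price routes. The Route class, the utilization-based pricing formula, and all of the numbers are my own illustrative assumptions, not VMTurbo’s actual engine: the toll climbs as a datastore or network path approaches saturation, and a VM with a budget simply shops for the cheapest route that can absorb its demand.

```python
from dataclasses import dataclass

# A toy "tolls for data routes" market. Everything here (class names, the
# pricing formula, the numbers) is an illustrative assumption, not VMTurbo's
# actual implementation.

@dataclass
class Route:
    name: str
    capacity_iops: float     # what the datastore or network path can sustain
    used_iops: float         # load currently on it
    base_latency_ms: float   # latency when the route is idle

    def utilization(self) -> float:
        return self.used_iops / self.capacity_iops

    def price(self) -> float:
        # The toll rises sharply as the route approaches saturation, so
        # congested routes price themselves out of the market.
        u = min(self.utilization(), 0.999)
        return self.base_latency_ms / (1.0 - u)

def cheapest_route(routes: list[Route], demand_iops: float, budget: float) -> Route | None:
    """Pick the lowest-priced route that can absorb the VM's demand within its budget."""
    candidates = [
        r for r in routes
        if r.used_iops + demand_iops <= r.capacity_iops and r.price() <= budget
    ]
    return min(candidates, key=lambda r: r.price(), default=None)

if __name__ == "__main__":
    routes = [
        Route("datastore-A (short hop, busy)",      10_000, 9_000, 1.0),
        Route("datastore-B (longer detour, quiet)",  8_000, 2_000, 2.0),
    ]
    choice = cheapest_route(routes, demand_iops=500, budget=20.0)
    # The "detour" wins: B's idle latency is higher, but A's congestion toll is far steeper.
    print(choice.name if choice else "no route within budget")
```

In this toy example the VM picks the quieter datastore even though its idle latency is higher, which is exactly the bargain-hunting “detour” behavior described above.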

Implement Self-driving Automobiles

Wouldn’t it be great if we could build self-driving cars and collectively map out the optimal routes for everybody in real time before we all hit the road? In reality, the paradox we face is that tools like GPS can only tell us that a certain route is congested before we reach it; it is not until we are actually part of the traffic that we feel its full impact. As one car among many, would adding my car to the “light traffic” cause congestion and turn it into “heavy traffic”? And as crowdsourced GPS apps emerge in the mobile world, will more frequent traffic alerts really help me avoid heavy traffic more proactively once I’m already on the road?

A truly proactive approach for virtual machines works like self-driving automobiles: it looks at the storage mapping and network topology from the bigger picture first. A self-driving system mitigates storage and network latency by matching workload demand to infrastructure supply in real time.
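Building on the toy Route sketch above (again, purely illustrative and not a real VMTurbo API), a “self-driving” control loop might look like this: each pass re-evaluates every VM’s placement against current prices and moves workloads before congestion turns into latency, instead of reacting after the fact.

```python
from dataclasses import dataclass

# Assumes the illustrative Route class and pricing formula from the previous
# sketch are in scope; VM, faced_price, and rebalance are hypothetical helpers.

@dataclass
class VM:
    name: str
    demand_iops: float
    budget: float
    route: Route          # where this VM currently buys its data access

def faced_price(route: Route, extra_iops: float) -> float:
    """Toll a VM would face on `route` if it placed `extra_iops` of demand there."""
    u = min((route.used_iops + extra_iops) / route.capacity_iops, 0.999)
    return route.base_latency_ms / (1.0 - u)

def rebalance(vms: list[VM], routes: list[Route]) -> None:
    """One proactive pass: each VM re-shops for the cheapest route it can afford."""
    for vm in vms:
        # Release the VM's own demand so every route is compared on equal footing.
        vm.route.used_iops -= vm.demand_iops
        affordable = [
            r for r in routes
            if r.used_iops + vm.demand_iops <= r.capacity_iops
            and faced_price(r, vm.demand_iops) <= vm.budget
        ]
        target = min(affordable, key=lambda r: faced_price(r, vm.demand_iops),
                     default=vm.route)
        if target is not vm.route:
            print(f"moving {vm.name}: {vm.route.name} -> {target.name}")
            vm.route = target
        vm.route.used_iops += vm.demand_iops
```

Run periodically, a loop like this approximates the “look at the bigger picture first” behavior: every VM keeps shopping the whole topology, so no route has to become the heavy traffic before some of its tenants have already taken a cheaper detour.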

If things work with little latency, we’re happy: when we can tap a mobile app and reach the UI without delay, when we click a Facebook link and the page loads instantly, or when we watch a YouTube video and it plays without buffering. When we model virtual machines as cars driving on the road, we get a clearer picture of how to help applications get their data with less lag.

With software making the decisions, we can achieve much more than mitigating network and storage latency. With VMTurbo controlling the storage and network traffic, we are not only proactively guiding the virtual machines to a healthier state in real time, but, more importantly, giving application users a more enjoyable experience. VMTurbo has helped thousands of users control their virtual infrastructure with its software-driven approach. Why not see for yourself?

Topics: Networking and SDN, Storage
