Turbonomic Blog

The 100 Million Dollar Millisecond: The Cost of Latency in the Data Center (Part 1)

Posted by Ryan Strehlke on Jan 13, 2016 3:31:11 PM


The old adage “time is money” has never been more relevant than in today’s data centers, especially those of financial institutions and brokerage firms. Since the mass adoption of virtualization, the lifecycle of an electronically placed trade has gone from minutes to seconds, and is now under a millisecond for “ultra-low latency” shops. According to a recent paper entitled The Cost of Latency in High-Frequency Trading, a 1-millisecond advantage in latency can be worth upwards of $100 million per year. So, if firm A trades with 2 ms of latency and firm B trades with 4 ms of latency, then all else being equal, firm A sees a $200 million per year benefit. It seems only logical to combat latency as much as possible, but how can humans detect, let alone reduce, latency in such imperceptible amounts? Where would you even begin?
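To make the arithmetic explicit, here is a quick back-of-the-envelope version of that claim (assuming, as the example does, that the value of a latency advantage scales linearly at $100 million per millisecond per year):

$$\text{Firm A's edge} = (4\,\text{ms} - 2\,\text{ms}) \times \frac{\$100\text{M/yr}}{\text{ms}} = \$200\text{M per year}$$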

Spending to Save is a Costly Approach

Well, in an attempt to reduce latency as much as possible, financial firms, along with companies across other verticals, have taken the “spend money to save money” approach. Significant capital is invested in reducing the distance that data has to travel, between both physical and virtual endpoints. Trading firms will co-locate their data centers in the same facility as the exchange they book trades with, hoping to eliminate the latency associated with physical distance. Likewise, converged and hyper-converged infrastructures are gaining popularity, used to keep even the virtual distance that data travels to a minimum. However, as we already know, throwing resources at workloads, rather than controlling the infrastructure based on application demand, cannot assure application performance. It’s also certainly not the most cost-efficient approach.

How Do Organizations Approach Latency?

For organizations with a limited IT budget, the spending approach may not even be feasible. That doesn’t mean that limiting, or even preventing, latency is any less critical a requirement. A recent VMTurbo-sponsored survey of technology professionals across different verticals found not only discrepancies in how latency is measured within organizations, but also that some don’t attempt to track latency at all.

VMTurbo 2015 Survey: Measuring Latency

For those that do measure delay, the approach to limiting latency differs as well. Most are leveraging some sort of monitoring tool, while a smaller subset have moved towards all-flash storage arrays or software-defined networking. Whether within storage, network, compute, or the VM itself, latency is a burden at every level of the IT stack. Trying to combat delay within an isolated IT domain (compute only, storage only, and so on) is a futile, and ultimately costly, approach.

VMTurbo 2015 Survey: Latency Mitigation Tactics
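For teams without a dedicated monitoring tool, even a rough first measurement beats none at all. Below is a minimal sketch of sampling round-trip latency to an endpoint; the host name is a placeholder, and this generic TCP connect timer is an illustration, not any particular vendor’s method:

```python
import socket
import time

def measure_rtt_ms(host: str, port: int, samples: int = 50) -> float:
    """Average TCP connect round-trip time to host:port, in milliseconds."""
    total = 0.0
    for _ in range(samples):
        start = time.perf_counter()
        # Opening (and immediately closing) a TCP connection approximates
        # one network round trip; real tools instrument many more layers.
        with socket.create_connection((host, port), timeout=1.0):
            pass
        total += time.perf_counter() - start
    return total / samples * 1000.0

if __name__ == "__main__":
    # "exchange.example.com" is a placeholder endpoint, not a real exchange.
    print(f"average RTT: {measure_rtt_ms('exchange.example.com', 443):.3f} ms")
```

Averaging over many samples smooths out transient spikes, but note what a sketch like this leaves out: a TCP connect only captures network delay, while storage, compute, and VM-level latency each need their own instrumentation, which is exactly why attacking a single domain in isolation falls short.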

Let’s think about a real-world scenario. A large hedge fund that trades across different global exchanges has just had its IT budget cut for 2016. Let’s call this firm Green Circle Financial, or GCF for short. GCF has historically tried to “throw hardware at problems” by over-buying resources and spinning up the biggest virtual machines it could. After a few years of overspending, performance actually degraded slightly. The C-level was not happy. How could GCF have spent so much money on compute, storage, and network, yet still see no increase in performance? After hundreds of thousands of dollars, and a few wasted years, latency remained unchanged. Now, with a slashed IT budget for 2016, what is GCF to do?


This year the challenge is to do more with less…

Topics: Virtualization
