Flash storage is undoubtedly one of the most expensive commodities we can buy for our data centers. As a result, most companies are forced into a hybrid model, running a combination of disk-based and flash storage. As enterprise adoption of flash increases, continuously prioritizing which workloads land on flash vs. disk becomes crucial to both performance and efficiency. In other words, workloads that demand a lot of IOPS should have access to flash storage, while more idle workloads can make do with disk. The challenge is ensuring the specific storage demands of these workloads are met continuously and in real time.
Implementing an enterprise flash solution begins as you migrate workloads onto the data stores mapped to the flash arrays. This is difficult because it's left to the operator to manage the tradeoff between two variables: which workloads to move and how many. Sure, you can move over all the workloads you want, but how far can you push space utilization across the flash environment? Or you can take the time to determine specific capacity requirements across the arrays, but then how do you know that all IO-intensive workloads have access to the flash arrays? Managing the tradeoff between capacity and performance in real time is nearly impossible and can lead to inefficiencies or sluggish performance.
Even if we managed to manually select and prioritize the perfect workload placement across storage tiers, the hardest part comes next: understanding how to grow into your flash system before you make another $100,000 purchase. The answer is not simply waiting until one of the metrics crosses some pre-defined threshold. Let me explain through an example.
Growing Into Your New Flash Storage System
For simplicity, let's say we have a single flash array that can sustain 200,000 IOPS and has 1 TB of space. We are currently running five VMs on it, each with 40 GB of space. As the environment grows, utilization increases on the flash pool and bottlenecks begin to arise.
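To make the example concrete, here is a quick back-of-the-envelope check using the hypothetical numbers above. All constants are from the example, and it illustrates why capacity utilization alone is a misleading signal:

```python
# Hypothetical figures from the example: a flash array rated at
# 200,000 IOPS with 1 TB (1,000 GB) of capacity, running five VMs
# of 40 GB each.
ARRAY_IOPS = 200_000
ARRAY_CAPACITY_GB = 1_000
VM_COUNT = 5
VM_SIZE_GB = 40

used_gb = VM_COUNT * VM_SIZE_GB              # 200 GB consumed
capacity_util = used_gb / ARRAY_CAPACITY_GB  # fraction of space in use

print(f"Space used: {used_gb} GB ({capacity_util:.0%} of the array)")
# Only 20% of capacity is consumed, so space looks fine -- but capacity
# says nothing about how close the VMs are to the 200,000 IOPS ceiling.
```

Watching one metric (space) while the other (IOPS) silently saturates is exactly how the bottlenecks below sneak up on an environment.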
During growth, we wait until our thresholds have been crossed, or for a notification telling us there is a problem, and then the administrator determines how big the next flash array purchase needs to be. This mode of operation is not financially scalable.
Leaving workloads on flash at all times, and simply growing the flash tier without leveraging disk during idle periods, creates opportunity for overspending and inefficiency. If we are too cautious and run only a small number of workloads on flash, we don't utilize the array to its potential. Finding that sweet spot is nearly impossible, especially since workload demands fluctuate continuously.
Finding the Disk vs. Flash Storage Sweet Spot
What if you could intelligently migrate workloads between disk and flash in real time, automatically, as VM storage demands rise and fall? VMTurbo's platform integrates with flash- and disk-based systems like Pure, NetApp, EMC, and others to develop an understanding of supply and demand at each layer in the stack. Each datastore now reflects the capacities of the pools it is mapped to, giving VMTurbo's system the intelligence to place workloads accordingly. A byproduct of placing workloads based on IO demand, in correlation to the IOPS capacities of the array, is greater headroom and density. In fact, VMTurbo can even recommend increasing the size of a flash pool if storage resources cannot satisfy workload demand. Operators can now grow into their disk-and-flash model without manually configuring anything or managing IO volatility at greater densities. Essentially, they turn the keys over to our decision engine to manage the tradeoff between capacity and performance, improving both OpEx and CapEx efficiency in the datacenter.
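To give a feel for demand-based placement, here is a minimal sketch. This is not VMTurbo's actual algorithm; it is a simple greedy heuristic under illustrative assumptions: rank workloads by IOPS demand and give the most IO-intensive ones first claim on the flash tier, until either the tier's IOPS or capacity budget runs out. All VM names and numbers are hypothetical.

```python
# Greedy tier placement sketch (illustrative only, not VMTurbo's engine):
# workloads are (name, iops_demand, size_gb) tuples; the flash tier has
# both an IOPS budget and a capacity budget, and a workload lands on
# flash only if it fits within both.

def place_workloads(workloads, flash_iops, flash_gb):
    on_flash, on_disk = [], []
    iops_left, gb_left = flash_iops, flash_gb
    # Most IO-intensive workloads get first claim on flash.
    for name, iops, gb in sorted(workloads, key=lambda w: -w[1]):
        if iops <= iops_left and gb <= gb_left:
            on_flash.append(name)
            iops_left -= iops
            gb_left -= gb
        else:
            on_disk.append(name)   # idle or bulk workloads stay on disk
    return on_flash, on_disk

# Hypothetical VMs against the 200,000 IOPS / 1 TB array from the example.
vms = [("db01", 90_000, 400), ("web01", 5_000, 100),
       ("analytics", 80_000, 300), ("backup", 1_000, 500)]
flash, disk = place_workloads(vms, flash_iops=200_000, flash_gb=1_000)
print("flash:", flash)
print("disk:", disk)
```

A one-shot greedy pass like this only captures a snapshot; the point of a continuous decision engine is re-running this kind of supply-and-demand matching as IOPS demand fluctuates, migrating workloads between tiers rather than placing them once.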