“I’m using NetApp as my SAN, how can I have VMTurbo see and manage my storage layer?” Because of the way VMTurbo is often introduced, many assume it is a platform exclusively for the compute and VM layers of the IT stack. Most likely this is because the industry has traditionally built products this way, focused either on the compute layer or on the storage layer. Or perhaps it’s because companies like NetApp, EMC, HP, Dell and Pure have invested heavily in software products to manage their storage environments when it comes to auto-tiering, replication and thin provisioning. Either way, we often lack an easy way to map the interdependencies of the entire IT stack, and to understand how to drive out risk in the relationship between the virtual environment and the SAN.
Operations Manager Capabilities
While it is true that Operations Manager by default pulls its information from the hypervisor level, this scope does not prevent VMTurbo from driving value at the storage layer. In fact, most of our customers find that the core platform alone drives better performance and efficiency within the storage layer, and that the Storage Control Module is not a necessity.
To use an example, I was recently working with a XenServer customer who was seeing latency issues in his environment, which he attributed to IOPS contention. After installation, VMTurbo found three separate pools containing datastores where VMs were consuming large amounts of IOPS, causing other VMs on those datastores to experience latency. Because VMTurbo inherently understands resource demand within the environment and matches it to the proper infrastructure supply, Operations Manager quickly located other datastores within each of these pools where IOPS consumption was low, leaving far more capacity available for the VMs experiencing latency. After verifying that VMTurbo’s decision analysis had indeed confirmed the moves would not cause new issues, and after executing a few of those actions, the customer quickly asked how he could automate these actions to prevent future bottlenecks.
Without even needing to see the true IOPS delivered by the underlying SAN, VMTurbo was able to make accurate, proactive recommendations that prevented performance degradation in these resource pools.
That being said, there are certain use cases where VMTurbo’s Storage Control Module adds strong value to a datacenter, which I will cover below, along with the areas where it provides the most value. The Storage Control Module currently manages NetApp, Dell Compellent, HP 3PAR, EMC VNX, XtremIO and VMAX, as well as Pure Storage.
By adding a storage target, VMTurbo’s economic scheduling engine gains access to a new layer of the IT stack. It pulls the added information on Pools, Arrays and Controllers into the supply/demand algorithm that drives all of VMTurbo’s decisions. Essentially, we are extending the virtual marketplace: Virtual Machines and Datastores can now purchase storage resources to fill the basket of goods created by application demand.
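To make the marketplace idea concrete, here is a minimal sketch of utilization-based pricing. This is my own illustration, not VMTurbo's actual engine: the assumption is simply that each provider (a datastore, pool, or array) prices a resource so that cost climbs steeply as utilization approaches capacity, and each consumer shops for the cheapest provider that can satisfy its demand.

```python
from dataclasses import dataclass

@dataclass
class Provider:
    """A seller in the marketplace, e.g. a storage pool selling IOPS."""
    name: str
    capacity: float   # total IOPS the pool can supply
    used: float       # IOPS already consumed

    def price(self, demand: float) -> float:
        """Price grows without bound as projected utilization nears 100%."""
        utilization = (self.used + demand) / self.capacity
        if utilization >= 1.0:
            return float("inf")  # cannot host this demand at all
        return 1.0 / (1.0 - utilization) ** 2

def cheapest_provider(providers, demand):
    """A consumer buys from whichever provider quotes the lowest price."""
    return min(providers, key=lambda p: p.price(demand))

pools = [
    Provider("pool-a", capacity=5000, used=4500),  # nearly saturated
    Provider("pool-b", capacity=5000, used=1000),  # mostly idle
]
best = cheapest_provider(pools, demand=300)
print(best.name)  # pool-b
```

Because price is a function of utilization, a near-saturated pool quotes an enormous price and demand naturally flows to pools with headroom, without any threshold ever being hard-coded per resource.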
So what does this mean for my environment? Quite a bit, actually. It means that VMTurbo can not only map a VM’s interdependencies throughout the IT stack, but also provide decisions around sizing, placement and capacity in the storage layer using the same real-time, market-based analysis.
Example Use Case
I recently installed the VNX control module in the environment of a customer who wanted to trial the added functionality. Once up and running, VMTurbo discovered the true IOPS of the LUNs underneath each of the datastores, and saw that auto-tiering had given one LUN better IOPS capacity than another. VMTurbo also noticed that, despite its higher capacity, IOPS was currently constrained on that faster LUN. VMTurbo then identified the datastore consuming the fewest IOPS of any datastore on the constrained LUN. It ran its decision analysis to determine whether that datastore could perform acceptably on the slower LUN, and whether the move would cause any ripple effect across any other layer. When it found none, it presented that decision to the customer and me. All of this happened automatically in a matter of seconds, with the goal of relieving the constrained IOPS on the faster LUN before it degraded performance, without creating new problems elsewhere. Had we decided to take the decision right then, a simple API call back to the VNX controller would have executed it for us. Had we instead waited until the next day, and the situation changed so that the decision was no longer relevant or could no longer be taken without a ripple effect, VMTurbo would simply have stopped presenting it.
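The decision analysis above can be sketched in a few lines. To be clear, this is a hypothetical illustration of the reasoning pattern, not VMTurbo's algorithm: the 80% utilization threshold, the data layout, and the choice of the lightest consumer are all my assumptions for the example.

```python
def plan_move(luns, threshold=0.8):
    """luns: {name: {"capacity": total IOPS, "datastores": {name: IOPS}}}.
    Returns (datastore, source LUN, destination LUN) or None."""
    for src, info in luns.items():
        load = sum(info["datastores"].values())
        if load / info["capacity"] <= threshold:
            continue  # this LUN is not constrained
        # Candidate to evict: the lightest IOPS consumer on the constrained LUN.
        ds, ds_iops = min(info["datastores"].items(), key=lambda kv: kv[1])
        # Ripple-effect check: the destination must stay healthy after the move.
        for dst, dinfo in luns.items():
            if dst == src:
                continue
            new_load = sum(dinfo["datastores"].values()) + ds_iops
            if new_load / dinfo["capacity"] <= threshold:
                return (ds, src, dst)
    return None  # no safe move exists, so recommend nothing

luns = {
    "lun-fast": {"capacity": 10000,
                 "datastores": {"ds1": 6000, "ds2": 3000, "ds3": 500}},
    "lun-slow": {"capacity": 8000,
                 "datastores": {"ds4": 2000}},
}
print(plan_move(luns))  # ('ds3', 'lun-fast', 'lun-slow')
```

Note the shape of the logic: a move is only proposed when both conditions hold, the source is constrained and the destination can absorb the load. If either changes by tomorrow, the same function simply returns nothing, which mirrors how a stale recommendation quietly disappears.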
Essentially, what the Storage Control Module adds to my environment is a way of extending the reach of the invisible hand of the datacenter. It’s a single common data model that drives one of the more difficult parts of the IT stack toward its desired state, while incorporating that added information into the interdependencies of the environment as a whole.