Recently introduced in vSphere 6.0, Virtual Volumes (more commonly referred to as VVols) give SAN administrators a more refined, granular approach to storage management. Managed correctly, VVols can make SAN administration and VM deployments much faster and simpler. Traditionally, administrators construct datastores within vSphere on top of LUNs, which map VMs to storage pools. Since shared resources are distributed equally among all virtual machines on a LUN, applications that reside on the same LUN mapping get the same storage performance. This presents problems when defining SLAs or QoS on a per-VM basis, because everything must be defined at the LUN/storage-pool level. For example, you probably don’t want the same storage policies defined for your Microsoft SQL servers as for your file and print systems.
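The per-LUN vs. per-VM contrast can be sketched in a few lines of Python. This is purely an illustrative model, not a VMware API; all names and numbers are hypothetical:

```python
# Hypothetical sketch contrasting LUN-scoped vs. per-VM storage policy.
# Names and numbers are illustrative only, not a VMware object model.

lun_policy = {"tier": "gold", "iops_limit": 5000}

# Traditional model: every VM on the LUN inherits the same policy.
lun_vms = ["sql-prod-01", "file-print-01"]
vm_policies_lun = {vm: lun_policy for vm in lun_vms}

# VVol-style model: each VM object carries its own policy.
vm_policies_vvol = {
    "sql-prod-01":   {"tier": "gold",   "iops_limit": 10000},
    "file-print-01": {"tier": "bronze", "iops_limit": 500},
}

# With LUN-scoped policies, SQL and file/print are indistinguishable.
assert vm_policies_lun["sql-prod-01"] == vm_policies_lun["file-print-01"]
# With per-VM policies, each workload can get its own SLA.
assert vm_policies_vvol["sql-prod-01"] != vm_policies_vvol["file-print-01"]
```

The point of the toy model: in the LUN-scoped world there is simply nowhere to record that SQL deserves a different SLA than file and print.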
VVols provide storage administrators with the ability to set performance standards on a per-VM basis, essentially replacing the notion of storage as a single monolithic unit with a collection of discrete, per-VM objects. While VVols are a step in the right direction for storage performance, they require ongoing administration and complex analysis to run properly.
What is a VVol Exactly?
When I first saw a VVol demonstration at a VMUG last year, I learned at a high level that VVols are composed of three new systems that together make up an entire VM object residing directly on a storage pool. Each VM object encapsulates the virtual disks, VM configuration files, and other VM files. Each object has storage policies defined for it that dictate QoS and SLA adherence. This provides finer control and more knobs and levers for infrastructure administrators to use.
In fact, VVols completely remove the need for VMware VMFS, as each VM component, such as the VMX file, swap file, or VMDKs, is packaged together on top of a storage container. In VVol terminology, the storage container is a pool of physical storage configured on the external storage appliance, much like the pools SAN administrators create on popular platforms such as EMC and HP 3PAR. Notably, these storage containers can be provisioned with varying I/O capacities across disk- and flash-based storage systems, which in turn allows the policies defined per VM to dictate where the VVols are distributed. To communicate with the storage container, VVols use a storage provider that leverages new storage-awareness APIs built natively into vSphere 6.x. Lastly, VVols use a protocol endpoint that allows vSphere to address, and see metrics on, each VVol within the SAN.
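The architecture above can be mocked up as a small data model. This is a sketch of the concept only, with assumed names and sizes; it does not reflect the actual VASA object model:

```python
# Illustrative model of a VVol-backed VM: the VM's files become
# individual objects placed directly in a storage container, with no
# VMFS datastore in between. All names/sizes here are hypothetical.

from dataclasses import dataclass, field

@dataclass
class VVol:
    kind: str        # "config" (VMX), "swap", or "data" (VMDK)
    size_gb: float

@dataclass
class StorageContainer:
    name: str
    capacity_gb: float
    vvols: list = field(default_factory=list)

    def provision(self, vvol: VVol) -> None:
        # The container, not a VMFS volume, tracks placement directly.
        used = sum(v.size_gb for v in self.vvols)
        if used + vvol.size_gb > self.capacity_gb:
            raise ValueError("container full")
        self.vvols.append(vvol)

container = StorageContainer("flash-pool-01", capacity_gb=1024)
for v in (VVol("config", 0.1), VVol("swap", 8), VVol("data", 100)):
    container.provision(v)

assert len(container.vvols) == 3
```

The takeaway is structural: the VM is no longer a set of files inside a filesystem on a LUN, but a set of first-class objects the array itself can see and manage.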
So, with finer policy-based actions, what are the roadblocks or challenges?
Essentially, it boils down to the idea that more granular management only increases the number of potential decisions within the environment. Consider this analogy: without any experience, would you rather pilot an X-Wing (with 100+ buttons) or a self-driving spaceship (one button that says “GO”)? The same principle applies to manual gearshifts versus automatic transmissions. If you have 100+ VMs in the environment today, defining policies for every VVol can be very cumbersome, especially in dynamic environments that require constant attention.
Of course, once all policy-defined storage rules are in place, life will be easier, but getting there is difficult. Admins are still required to do performance analysis, because understanding the relationship between storage capacity (supply) and VM storage demand is difficult and always changing. Every time a new VVol is created or a new array is purchased, the entire environment changes, because everything is shared (like the ripple effects of a rock thrown into a pond). Policies must be redefined in real time to assure the performance of every application running in the datacenter. And we all know that time is hard to come by in the life of an IT professional. As a result, we make impulse decisions at every level of the environment and don’t take time to truly evaluate performance impacts.
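The supply/demand ripple effect can be made concrete with a toy check. Assuming hypothetical pools with fixed IOPS capacity (supply) and VVols with fixed IOPS demand, a single new VVol can tip a previously healthy pool into overload for every neighbor on it:

```python
# Hypothetical supply/demand check: does each container's IOPS capacity
# (supply) still cover its VVols' aggregate demand? Pool names and
# numbers are invented for illustration.

containers = {"pool-a": 20000, "pool-b": 8000}               # IOPS supply
placements = {"pool-a": [9000, 6000], "pool-b": [3000, 2500]}  # IOPS demands

def overloaded(containers, placements):
    """Return the containers whose demand now exceeds their supply."""
    return [name for name, supply in containers.items()
            if sum(placements.get(name, [])) > supply]

# Initially every pool is healthy.
assert overloaded(containers, placements) == []

# One new VVol lands on pool-b, and every neighbor on that pool is
# suddenly competing for capacity that no longer exists.
placements["pool-b"].append(4000)
assert overloaded(containers, placements) == ["pool-b"]
```

Nothing about pool-b's existing VVols changed, yet their situation did, which is exactly why policies set once tend to drift out of date.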
Which leads me to my next point: the manual labor involved in maintaining a healthy VVol environment. Placement of virtual machines is one thing, but moving an entire volume across storage containers/pools can be daunting. While VMware can make recommendations about where to place VVols, those recommendations are all driven by policies defined by administrators. This raises a few questions: Did we set our policies correctly (user error) to satisfy application demands? How can we be sure, in a scalable environment, that our policies won’t conflict? What would happen if two VVols were moved off the same storage container without understanding the impact on IO, latency, space, and neighboring VVols?
In fact, by the time an administrator or policy-based rule kicks in, it is usually in response to an end-user complaint or an alarm telling us we are no longer satisfying our goal of delivering application performance…
Eliminate the Need for Policy Based Actions with a Decision Engine
VVols are certainly a step in the right direction for storage management, but they will not solve all your storage problems. Defining policies across sizeable environments is time-consuming and requires continuous analysis and adaptation by operators in real time. There is still a gap in management: relating every system component of the virtual environment to every other by matching supply and demand. In most cases, this gap is beyond human scale and can only be closed in software.
This is where Turbonomic enters the equation: a real-time application control engine that delivers decisions to maintain a healthy environment, eliminating the manual labor involved in the daily care and feeding of virtual volumes. Using all the control knobs exposed through virtualization, Turbonomic produces decisions based on every element of the environment, from the application all the way down to the storage controllers, to reduce risk and volatility. Let Turbonomic take the wheel after VVols have been provisioned, to assure that every VVol continuously adheres to its respective policies and to promote performance and efficiency.