<img alt="" src="https://secure.bomb5mild.com/193737.png" style="display:none;">

Turbonomic Blog

How can you eliminate the risk of Thin Provisioning?

Posted by Matt Vetter on Jun 1, 2015 10:16:19 AM

Thin Provisioning is the ultimate gamble of the virtual datacenter. As a History major in college, I’m often reminded of the Great Depression when I think about the concept. Banks, as a rule, lend out far more money than they can cover with their own assets at any given moment, which almost never becomes an issue. Until it does.

When the Stock Market crashed in 1929, the American public, terrified of losing their money as the economy ground to a halt, demanded to withdraw their entire account balances. But the banks couldn’t cover the withdrawals. And because there was no governmental backing to provide a safety net, American banks began to fail, one after another, wiping out billions of dollars in savings. Widespread poverty ensued, and it took years, and a World War, before the United States recovered from the Depression.

Thin Provisioning

While the Great Depression may be an extreme example to bring into play here, I believe it holds a few very relevant lessons for the virtual storage environment. Like a bank that lends out money without keeping enough on hand to cover the loans, thin provisioning lets the Administrator offer a VM a certain amount of disk space without actually reserving that full space on the disk array.
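
To make the distinction concrete, here is a minimal sketch of the two allocation models. The class and method names are purely illustrative, modeling the idea rather than any vendor’s API:

```python
# Thick vs. thin allocation, modeled in a few lines (illustrative only).
from dataclasses import dataclass

@dataclass
class Datastore:
    capacity_gb: float        # physical space on the array
    reserved_gb: float = 0.0  # space pinned up front by thick disks
    promised_gb: float = 0.0  # space promised (not reserved) to thin disks

    def provision_thick(self, size_gb: float) -> None:
        # Thick: the full size is reserved immediately, used or not.
        if self.reserved_gb + size_gb > self.capacity_gb:
            raise RuntimeError("not enough physical space to reserve")
        self.reserved_gb += size_gb

    def provision_thin(self, size_gb: float) -> None:
        # Thin: nothing is reserved; the promise may exceed physical capacity.
        self.promised_gb += size_gb

ds = Datastore(capacity_gb=1_000)
ds.provision_thick(400)   # 400 GB of the array is gone immediately
ds.provision_thin(2_000)  # 2 TB promised, zero bytes consumed so far
```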

In theory, the concept rests on sound logic, as end-users frequently request far more disk space than they will ever use. In fact, thin provisioning can allow an Administrator to offer more total disk space than exists on the array itself, a practice known as overprovisioning. This, too, makes sense in theory: it is a cost-effective way to provide storage capacity to a large number of VMs without filling the datacenter with unnecessary disk arrays.
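
A quick back-of-the-envelope calculation, with made-up numbers, shows how far those promises can outrun the physical disk:

```python
# Overprovisioning arithmetic with illustrative numbers.
array_capacity_tb = 10.0      # physical capacity of the disk array
vm_count = 50
allocated_per_vm_tb = 0.5     # space promised to each thin-provisioned VM

provisioned_tb = vm_count * allocated_per_vm_tb        # 25 TB promised
overcommit_ratio = provisioned_tb / array_capacity_tb  # 2.5:1

print(f"{provisioned_tb:.0f} TB promised on a {array_capacity_tb:.0f} TB array "
      f"({overcommit_ratio:.1f}:1 overcommitment)")
# Everything works until the VMs actually write more than 10 TB, that is,
# until average utilization climbs past 40% of what was promised.
```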

Until those VMs begin actually using the space they were given. Like worried patrons pulling their balances all at once from a bank that doesn’t have the funds to cover the withdrawals, VMs consuming their overprovisioned space present a demand the disk array cannot satisfy. The array fills and fails, corrupting data and bringing the environment to its knees, while the Administrator makes a desperate call to the storage provider to rush-deliver a new set of disks and bring the environment back up and running.

So how do we as an industry prevent a Thin Provisioning Great Depression in the datacenter? For the Administrator with an unlimited budget, this may mean a return to Thick Provisioning: allocating the full amount of storage space on the disk array up front, sacrificing efficiency to assure performance. Sound familiar? It’s the same practice we often see at the host level, where VM-to-host density is kept unnecessarily low to prevent mission-critical applications from crashing due to resource constraints. But how can we be sure that thick provisioning will solve performance constraints in the first place?

For those of us who live with a realistic budget in a realistic world, how do we assure performance with Thin Provisioning? Through the same methods we use to monitor host-level utilization: alerting and thresholds. We set a threshold to send an alert when a VM begins consuming 60, 70, or 80% of its allocated storage space. And then what? What does this alert do, other than tell us that we need to figure out something to do to prevent a crash or corrupted data?
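
As a sketch, that threshold-and-alert pattern boils down to something like the following; the VM name, numbers, and function are hypothetical, not any monitoring product’s API:

```python
# Minimal sketch of per-VM threshold alerting; names and numbers are made up.
THRESHOLDS = (0.60, 0.70, 0.80)  # the alert tiers mentioned above

def check_usage(vm_name: str, written_gb: float, allocated_gb: float) -> None:
    usage = written_gb / allocated_gb
    breached = [t for t in THRESHOLDS if usage >= t]
    if breached:
        # The alert says *that* there is risk, not what to do about it.
        print(f"ALERT {vm_name}: {usage:.0%} of allocated space consumed "
              f"(crossed the {max(breached):.0%} threshold)")

check_usage("vm-finance-db", written_gb=820, allocated_gb=1_000)  # 82% -> alert
```

The alert is the cheap part; deciding what to do once it fires is where the real work starts.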

That’s where the Administrator has to roll up his or her proverbial sleeves and begin the lengthy process of making placement, sizing, and capacity decisions to address the alert. Meanwhile, the environment keeps growing and changing, as the demand VMs place on the disk array for storage space fluctuates with real-time end-user activity. Which means that by the time the Administrator acts on this one Thin Provisioning risk for a single VM, based on a single alert, how can we be sure the chosen action is still relevant to the new situation, or won’t cause further cascading issues within the environment?

Thin Provisioning vs. Thick Provisioning

In the banking world, we simply provide governmental insurance to the consumer in case of a bank failure, and that ends the panic. In the virtual world, however, we are concerned with maintaining the health of the provider as well as the consumer, so we must look for a solution that prevents Thin Provisioning from causing disk array failure in the first place, in addition to backing up our data to ensure it’s not lost.

That’s where VMTurbo comes in. VMTurbo is designed to identify Thin Provisioning risks in the environment and to make the recommendations and decisions that prevent the issue from ever occurring in the first place. In full automation, VMTurbo keeps the environment in a state where Thin Provisioning can continue to provide the efficiency we have come to expect of it, while removing the risk of a Thin Provisioning Great Depression. If you ask me, that’s money in the bank.
