
Scaling the Infrastructure to the Cloud: The Endless Chase for Infrastructure Efficiency Part I

Posted by Matt Vetter on Apr 13, 2016 8:28:22 AM


The public cloud: the final frontier. To boldly go where no application has gone before.

Star Trek references aside, the public cloud, or the hosting services offered by companies like Amazon, Microsoft, Google, IBM, or VMware, is often seen as the next logical step for the datacenter. In our constant pursuit of the most economically efficient model for hosting applications, the public cloud seems to be the near-term goal of many enterprises (no pun intended). In this post, I will discuss the origin of the public cloud, the economic and performance benefits of migrating to these offerings, and the challenges and potential limitations that keep companies using on-premises architecture long after their planned transition to one or more cloud strategies.

The rise of the public cloud is as big a revolution for the IT space as virtualization was back in 2001, when VMware developed and began marketing ESX. Many companies had been toying with the idea of hosting more than one application per piece of hardware for years before VMware, and some companies and open source collectives even produced viable products, but VMware's ESX hypervisor technology was widely considered the simplest, most efficient, and easiest-to-use offering. As a result, it gained instant recognition and market share, and soon grew to be the hypervisor of choice for the majority of enterprises in almost every industry. Likewise, providers like AWS and Microsoft Azure were not the first to invest in managed hosting services, but their rapid growth and widespread market adoption are due to the same factors: simplicity, efficiency, and ease of use.

Much like what we are doing with public cloud infrastructure now, we started by virtualizing our non-mission-critical workloads, workloads that were seen as easy candidates for the new technology. Even going from 100% dedicated hardware to just 5% virtualized, the savings in infrastructure efficiency alone made the concept of "doing more with less" a reality. In my conversations with our customers and prospects, the same pattern holds for the workloads we are "bursting" to the cloud today: non-mission-critical VMs, development VMs, simple VMs. Again, much like the transition to ESX, the economic gains in infrastructure efficiency were recognized immediately.
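To make the "doing more with less" arithmetic concrete, here is a minimal back-of-the-envelope sketch. The server count, virtualized share, and consolidation ratio are illustrative assumptions, not figures from this post:

```python
# Hypothetical consolidation math; every number below is an illustrative assumption.

dedicated_servers = 100        # assumed estate: one application per physical box
virtualized_share = 0.05       # "going 5% virtual"
consolidation_ratio = 10       # assumed VMs packed per virtualized host

vms_created = dedicated_servers * virtualized_share
hosts_needed = max(1, round(vms_created / consolidation_ratio))
servers_retired = vms_created - hosts_needed

print(f"{vms_created:.0f} workloads virtualized onto {hosts_needed} host(s), "
      f"retiring {servers_retired:.0f} physical servers")
```

Even at that modest 5% share, the hypothetical estate sheds several physical servers, which is why the early wins were so easy to recognize.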

The next step, moving to the public cloud, is tempting for many reasons, but chief among them is the oldest goal in the history of IT: guaranteed application performance. By letting a provider worry about the many layers of performance risk in the datacenter for us, we are simply left with high-performing applications that keep our end users happy and off our backs. We trade monitoring software, alerts, and late-night phone calls for happy application owners and the ability to innovate. There is a reason AWS has been delivering consistent profit and growth for its parent company, Amazon: the concept seems too good to pass up.

So how do we take the lessons of virtualization and apply them in our goal of going 100% cloud? We must look at the challenges we continue to face in the virtual environment, and learn how not to repeat them, to prove that AWS, Azure, SoftLayer, vCloud Air, and other public cloud providers are worth the investment we make in them. At VMTurbo, we always talk about two major goals in virtualization that often conflict with one another: guaranteeing application performance and maximizing infrastructure efficiency. As we drive up the utilization of our infrastructure, the risk of application performance degradation climbs sharply. This has led many enterprises to sacrifice efficiency for lower risk, letting infrastructure scale up far too quickly and wasting compute, network, and storage resources in the process. In the cloud, the same dynamic holds: we demand the highest levels of application performance, so we let our public cloud footprint grow far too quickly, leading to costs that rival or even exceed the hardware and software costs of hosting the applications internally. CIOs often halt cloud initiatives the first time they get the bill from their chosen public cloud provider; the price is simply unsustainable if we treat the public cloud the same way we treated our own internal infrastructure. Once again, we are left with two seemingly conflicting goals tugging at us: performance and efficiency.
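To illustrate how an unchecked footprint can erase the expected savings, here is a minimal sketch. The instance counts, hourly rates, and on-premises figure are hypothetical assumptions for illustration, not numbers from this post or from any provider's price list:

```python
# Hypothetical comparison of a right-sized vs. an over-provisioned cloud footprint.
# Instance counts, hourly rates, and the on-prem estimate are illustrative assumptions.

HOURS_PER_MONTH = 730

def monthly_cost(instances: int, hourly_rate: float) -> float:
    """Steady-state monthly bill for a fleet of identically priced instances."""
    return instances * hourly_rate * HOURS_PER_MONTH

right_sized = monthly_cost(instances=40, hourly_rate=0.10)        # sized to demand
over_provisioned = monthly_cost(instances=120, hourly_rate=0.20)  # sized for "safety"
on_prem_estimate = 9_000.0  # assumed amortized monthly cost of hosting internally

print(f"right-sized:      ${right_sized:,.0f}/month")
print(f"over-provisioned: ${over_provisioned:,.0f}/month")
print(f"on-prem estimate: ${on_prem_estimate:,.0f}/month")
# Over-provisioning for peace of mind can push the cloud bill past the on-prem baseline.
```

Under these assumed numbers, the over-provisioned footprint costs several times the right-sized one and comfortably exceeds the internal hosting estimate, which is exactly the bill shock that stalls so many cloud initiatives.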

So how can we do both? Is there a way to guarantee that we burst the right workloads, at the right size, for the right reason, at the right time, while keeping our public cloud footprint from growing out of control? Is there a way to simplify moving between multiple cloud providers to find the best-priced cloud at any given time?

In the second part of this piece, I will discuss in more depth how VMTurbo plays in this space, and how the technology is designed to tackle some of the most common challenges we face when moving to public cloud infrastructure.

Topics: Cloud
