
Turbonomic Blog

100% Virtualisation. Mission Impossible?

Posted by Bunmi Sowande on Feb 3, 2016 9:24:31 AM


Your Mission, If You Choose to Accept It…

In a previous life, I worked with a Managed IT Services Provider in the UK. My role involved designing enterprise digital platforms using our state of the art data-centres and the public cloud.

About 15 months ago, a group of super-intelligent Solution Architects (including my humble self) were working hard on a platform refresh. Our task appeared simple: we were updating our client’s back-end infrastructure, and as part of this, we were aiming to move them to an ultra-modern, future-proof setup. The public cloud was going to feature prominently, but more importantly, all the on-premises workloads were going to be virtualised. (I mean, who uses physical servers anymore?) Or so we thought.

Even the Experts Struggle with This

As with a lot of things, our plans came up against reality and we ended up making several compromises along the way. In the end, a number of physical servers were included in the design, due to several factors, which I’ll discuss later.

Turns out, we were not the only ones struggling with a similar problem.

If I ask you to say the first word that comes into your head when I say “Virtualisation”, the answer will most likely be “VMware”. The company has been, for the most part, responsible for the technology and the massive adoption of this “new” way of delivering IT infrastructure. Reading this article – “Even VMware finds reaching 100% virtualization a challenge”, it was interesting to see that the engineers who worked on this project ran into pretty much EXACTLY the same problems that we did.

I’ll highlight a few of our challenges:

Licensing/Compliance: The first snag we hit was around licensing. In our case, virtualising a small number of Oracle servers was going to be incredibly expensive to license. Without mincing words, the licensing model just doesn’t work in a virtualised environment. We looked at our options and even spoke to someone from Oracle, but we could not find any way around this that would satisfy us and our customer. In the end, we had no choice but to build out two PHYSICAL Oracle clusters, geographically separated to provide redundancy.

No room for error: For legacy applications that were being virtualised for the first time, we had to sit down with the application teams, some of whom were not happy with moving away from physical boxes. They had a number of concerns, particularly how we would guarantee that the overhead from virtualisation would not lead to performance problems. We put a lot of time and effort into convincing a number of teams that we had an Operations team that could manage the environment, with monitoring systems set up 24/7/365, and competent response teams to handle any incidents. But one or two business-critical systems did not end up being virtualised in the first phase, as their owners were not willing to pay the “virtualisation tax” on performance.

The bigger they are, the harder they fall: Some of the virtual machines we sized were going to be HUGE. So large, they would eventually take up an entire host. These were primarily large database servers. With this in mind, we eventually decided to keep some of these on physical servers, with log-shipping used to provide redundancy. The benefits from virtualisation were minimal, and as the workload was effectively unable to migrate to another host, the virtualisation overhead simply wasn’t worth the risk. But could the virtual server have been sized differently? Were we ever going to use all that memory and CPU? We never found out.

Building houses on sand?

The thing about virtualisation is that there are SO MANY moving parts. A small change in one part of the environment has a knock-on effect. Someone I spoke to this week mentioned how a change he made didn’t cause an outage, until it did, one month later.

If you want to build a reliable, available system, using virtualisation means assembling it from many individually unreliable parts, or what we call the n-dimensional problem.

Moving a production application to a virtual platform is an easier decision than, say, ten years ago. Virtualisation is now mainstream: in July 2015, Gartner reported that about 75% of all x86 server workloads were virtualised.

How do we close this gap even further?

Let’s take you there with VMTurbo!


 

The great news is that if you want to get on the road to 100% virtualisation, VMTurbo can help you get there. Addressing the specific challenges listed above:

  • Why monitor? There is a better option. With a Control Platform, you can eliminate all the noise in your datacentre and concentrate on your key, strategic activities. By assuring your application performance, VMTurbo gets rid of the virtualisation tax: the constant fine-tuning and setup needed to keep an environment working can be automated.
  • VMTurbo can help you drive down your licensing costs. Using the Policy Engine, you can define rules to keep your licensed workloads from wandering around the datacentre, staying only on hosts that have been licensed. This helps drive down costs and keeps you in compliance with vendors. On top of this, we can reduce the number of hosts you need to license, while keeping an eye on your number one concern: guaranteed performance.
  • VMTurbo can help you size your workloads properly over time. This provides you with more usable resource and prevents performance problems such as CPU Ready Queue delays.
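As a rough illustration of the licence-aware placement idea above (my own simplified sketch, not VMTurbo’s actual engine; all host and VM names are invented), a compliance check that confines licensed workloads to licensed hosts might look like:

```python
# Hypothetical sketch of a licence-aware placement rule.
# Host and VM names are invented for illustration.

LICENSED_HOSTS = {"esx-ora-01", "esx-ora-02"}  # hosts covered by the Oracle licence

def compliant_destinations(vm, candidate_hosts, licensed_vms):
    """Return the hosts this VM may be placed on without breaking compliance."""
    if vm in licensed_vms:
        # Licensed workloads may only land on licensed hosts.
        return [h for h in candidate_hosts if h in LICENSED_HOSTS]
    # Everything else is steered away from licensed hosts where possible,
    # keeping them free for the workloads that actually need them.
    unlicensed = [h for h in candidate_hosts if h not in LICENSED_HOSTS]
    return unlicensed or candidate_hosts

licensed_vms = {"oradb-prod-1", "oradb-prod-2"}
print(compliant_destinations("oradb-prod-1",
                             ["esx-gen-01", "esx-ora-02"], licensed_vms))
# → ['esx-ora-02']
```

The point is that the rule is enforced continuously and automatically, rather than relying on an administrator remembering which hosts carry the licence.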

With sizing, placement, capacity, over-provisioning, over-commitment and memory management, the list of things a virtualisation administrator needs to keep an eye on is a mile long. It’s beyond human scale. Why not let software do it?
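As a toy illustration of the kind of sizing arithmetic such software automates (my own simplified sketch, not VMTurbo’s algorithm), rightsizing a VM from its utilisation history might use a high percentile of observed demand plus some headroom, instead of a worst-case guess:

```python
# Toy rightsizing sketch: recommend a vCPU count from historical CPU
# utilisation samples (0.0-1.0), sized for real demand plus headroom.
# The percentile and headroom values here are illustrative assumptions.
import math

def rightsize_vcpus(samples, allocated_vcpus, percentile=0.95, headroom=1.2):
    """Recommend a vCPU count from per-interval CPU utilisation samples."""
    ordered = sorted(samples)
    idx = min(int(percentile * len(ordered)), len(ordered) - 1)
    demand = ordered[idx] * allocated_vcpus   # vCPUs actually used at p95
    return max(1, math.ceil(demand * headroom))

# A 16-vCPU VM that mostly idles around 20% CPU with bursts to 35%:
samples = [0.20] * 95 + [0.35] * 5
print(rightsize_vcpus(samples, 16))
# → 7 (down from 16)
```

Over-sized VMs like this one don’t just waste capacity: every idle vCPU still has to be scheduled, which is exactly where problems like CPU Ready time come from.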

 

Topics: Virtualization
