
Turbonomic Blog

How The Industry Approaches True Elasticity, Part 3: Batch Analytics

Posted by Mor Cohen-Tal on Feb 26, 2018 3:00:59 PM

The third type of solution attempts to resolve the problem using batch analytics (check out my previous posts on the manual approach and the rules-based approach if you aren't up to speed yet on the first two). These tools take a dataset from a single point in time and run a complex analysis on it to come up with the best outcome for the estate.

The problem with these solutions is that they consider only a single layer of the stack, usually the IaaS layer, and the resulting analysis is per VM and based on historical data alone.

Why is that a problem when using Batch Analytics?

Application performance changes constantly, in real time. To assure application performance you cannot rely on an analysis process that runs only once a week or once a month. If a VM is sized down and its demand for resources then increases, it takes multiple days at best for batch analytics tools to react and re-adjust resources. Most organizations using such tools are forced to size for historical peaks to reduce the risk of application performance degradation. When resources are paid for by the second or by the minute, this over-provisioning can become very costly very quickly. For a more in-depth explanation of why historical data alone can't assure performance, see the excellent post by Eric Wright here.
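The cost of sizing for historical peaks is easy to put a rough number on. The sketch below uses a hypothetical per-vCPU-hour rate and hypothetical peak and typical demands; the exact figures don't matter, only that the gap between peak-sized capacity and typical demand is billed every hour of the month.

```python
# All numbers below are assumed for illustration, not real cloud pricing.
price_per_vcpu_hour = 0.05   # hypothetical on-demand rate per vCPU-hour
hours_per_month = 730        # average hours in a month

peak_sized_vcpus = 8         # sized for a historical peak
typical_demand_vcpus = 2     # what the app actually needs most of the time

# Waste = the capacity gap, billed for every hour it sits idle.
monthly_waste = (peak_sized_vcpus - typical_demand_vcpus) * price_per_vcpu_hour * hours_per_month
print(f"${monthly_waste:.2f} wasted per VM per month")  # → $219.00 wasted per VM per month
```

At these assumed rates a single peak-sized VM wastes over $200 a month, and the figure scales linearly with the size of the estate.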

In Conclusion

These approaches, in almost every scenario, trade application performance for efficiency and don't assure either of them. What we hear from an increasing number of enterprises on their journey to the cloud is that while these approaches helped with Dev and Test workloads, they are no longer sufficient as production workloads start to move to the cloud.

To truly manage a hybrid or multi-cloud estate elastically, without compromising efficiency or performance, you must consider:

  1. The entire application stack: from the load balancers down to the infrastructure, as well as the performance it can deliver (yes, even in the cloud).
  2. Multiple Dimensions: look beyond memory and CPU to all resources consumed and required by applications to deliver their SLA.
  3. Real-time Changes: application demand for resources changes in real time, and for every second that allocation doesn't match demand, either application performance is impacted or you are paying for resources you do not need. In addition, cloud providers issue new offerings constantly, and it is hard to keep up.
  4. Unified Approach: a single decision engine that drives the entire estate to a single ideal state. Having different sets of analytics driving RI purchases and instance sizing can create conflicts and acting on both may cause more harm than good.
  5. Holistic Approach: look at the entire estate when making decisions and understand the impact to all components, all instances and the cost of making a change to the environment.
  6. Compliance: doing all of the above while also maintaining important compliance use-cases such as application availability across multiple availability zones and regions, placement and locality enforcement around data sovereignty and geographic boundaries, and many more.
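Point 2 above can be illustrated with a short sketch: sizing on CPU alone can pick an instance that starves another resource, so a valid size must satisfy every dimension at once. The instance catalog and application demands below are hypothetical.

```python
# Hypothetical instance catalog: (name, vCPUs, memory GiB, network Gbps).
catalog = [
    ("small",  2,  4, 1),
    ("medium", 4,  8, 2),
    ("large",  8, 32, 5),
]

# Assumed application demand across three dimensions.
demand = {"vcpus": 2, "mem_gib": 12, "net_gbps": 1}

def fits(inst):
    """True only if the instance satisfies every resource dimension."""
    _, vcpus, mem, net = inst
    return (vcpus >= demand["vcpus"]
            and mem >= demand["mem_gib"]
            and net >= demand["net_gbps"])

# CPU-only sizing picks the first instance with enough vCPUs;
# multi-dimensional sizing must also cover memory and network.
cpu_only = next(i for i in catalog if i[1] >= demand["vcpus"])
all_dims = next(i for i in catalog if fits(i))
print(cpu_only[0], all_dims[0])  # → small large
```

A CPU-only analysis would recommend "small" and leave the application 8 GiB short on memory; checking all dimensions lands on "large" instead.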

Only with all of the above can you deliver trustworthy actions to solve these challenges.
