Few people will argue that public clouds haven’t dramatically changed the industry in the past few years. Public cloud services such as Amazon Web Services (AWS) and Microsoft Azure have seen a tremendous increase in popularity, and demand keeps growing every year. The concerns that once prevented companies from choosing public clouds as their go-to infrastructure, such as security, reliability, stability, and quality assurance, are becoming more manageable.
It’s the 7th of the month and your CFO just received another cloud bill. Again, just like many times before, it has reached an all-time high much sooner than predicted or budgeted for. Sound familiar?
When you think of why customers move to the cloud, there are a few key things that they're trying to achieve.
2018 continues to demonstrate the unstoppable adoption of public cloud. Morgan Stanley predicts that in only two and a half years from today, almost half of all workloads will live in the public cloud. Cloud is no longer a trend that can be ignored. As it becomes part of more enterprise systems and mission-critical applications, software - not people and spreadsheets - will be required to responsibly unlock true elasticity and agility. With this in mind, AWS today announced the launch of its new AWS Cloud Management Tools Competency, with Turbonomic as an inaugural launch partner. Achieving the AWS Cloud Management Tools Competency differentiates Turbonomic as an AWS Partner Network (APN) company that provides specialized, demonstrated technical proficiency and proven customer success, with a specific focus on cloud management and resource and cost optimization. To receive this designation, APN Partners like Turbonomic must possess deep AWS expertise and deliver solutions seamlessly on AWS.
In my previous post I covered what makes managing and controlling cloud environments so difficult. The overwhelming number of configuration options, together with constant fluctuations in demand, makes it extremely challenging to optimize cost and performance.
It Should Be a Straightforward Question: “What is the most cost-effective choice for my applications – AWS or Azure?”
When you have to choose between AWS and Azure, you need more than a preference. You need a detailed justification for the CIO, the CFO, and the Board.
Would you skydive with a parachute that you knew could only open 80% of the way? Would you build a house with a plan that you knew was only 80% accurate?
The third type of solution attempts to resolve the problem using batch analytics (check out my previous posts on the manual approach and the rules-based approach if you aren't up to speed yet on the first two). These tools take a dataset from a single point in time and run a complex analysis on it to come up with the best outcome for the estate.
Last week I discussed the Manual Approach to unlocking true elasticity, and this week I will continue with the second type of solution commonly seen in the industry: attempting to resolve the problem by applying rules that kick in when a threshold is crossed (if X happens, do Y). The rule usually applies to a single resource and adjusts allocation based on that resource alone. For example, “if CPU utilization is above 80%, move one size up within the instance family”; “if CPU utilization is below 20%, move one size down within the family”; and so on.
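The rules-based approach described above can be sketched in a few lines of code. This is a minimal illustration, not any vendor's actual implementation; the m5 instance family list and the 80%/20% thresholds are hypothetical examples, and note that the rule looks at CPU alone, exactly as described.

```python
# Hypothetical ordered instance family, smallest to largest.
M5_FAMILY = ["m5.large", "m5.xlarge", "m5.2xlarge", "m5.4xlarge"]

def apply_cpu_rule(instance_type: str, cpu_utilization: float) -> str:
    """Rules-based sizing: move one size up or down within the
    family based on a single resource (CPU) crossing a threshold."""
    idx = M5_FAMILY.index(instance_type)
    if cpu_utilization > 80 and idx < len(M5_FAMILY) - 1:
        return M5_FAMILY[idx + 1]   # above threshold: one size up
    if cpu_utilization < 20 and idx > 0:
        return M5_FAMILY[idx - 1]   # below threshold: one size down
    return instance_type            # no threshold crossed: no change

print(apply_cpu_rule("m5.xlarge", 85.0))  # m5.2xlarge
```

Even in this toy form, the limitation is visible: the decision is driven by one resource in isolation, with no awareness of memory, network, cost, or other instance families.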
In my previous post I discussed the problem facing most organizations in various stages of cloud adoption – how to deliver application SLA as efficiently as possible, only paying for what they use and only using what they need.