Most customers I come across who already run containerized applications are dealing with the complexity of managing the many resources their environments need. I find there are two types of challenges:
Let me start by admitting that Excel is one of my favorite tools in the history of tools. I am an Excel junkie! I go over the new functions as they come out, I love the keyboard shortcuts, and I’ve probably written hundreds of macros. It is a phenomenal tool.
Capacity planners face a difficult balancing act, because much of their job is managing the tradeoff between application performance and infrastructure efficiency. The capacity planner’s responsibility is to understand when more hardware is needed to assure application performance while, at the same time, avoiding wasted hardware. The traditional approach is to figure out the current excess capacity and match it against future growth, usually measured with key metrics such as CPU and memory.
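To make that "match excess capacity against growth" step concrete, here is a minimal sketch in Python, assuming a single metric growing at a steady compound rate; the function name and every number in it are illustrative, not Turbonomic's actual model.

```python
import math

# A minimal sketch of the traditional headroom calculation: given current
# usage and a steady compound monthly growth rate, how many months until
# a metric exhausts capacity? All names and numbers are hypothetical.

def months_of_headroom(capacity, usage, monthly_growth_rate):
    """Months until usage * (1 + r)^m reaches capacity."""
    if usage >= capacity:
        return 0.0
    if monthly_growth_rate <= 0:
        return math.inf
    # Solve usage * (1 + r)^m = capacity for m.
    return math.log(capacity / usage) / math.log(1 + monthly_growth_rate)

# Example: a cluster at 60% CPU and 75% memory, each growing 5% per month.
cpu = months_of_headroom(capacity=100.0, usage=60.0, monthly_growth_rate=0.05)
mem = months_of_headroom(capacity=100.0, usage=75.0, monthly_growth_rate=0.05)

# The binding constraint is whichever metric runs out first.
print(f"CPU headroom: {cpu:.1f} months; memory headroom: {mem:.1f} months")
print(f"New hardware needed in roughly {min(cpu, mem):.1f} months")
```

The per-metric, take-the-minimum structure is the point: hardware has to be ordered when the first metric runs dry, even if the others still have plenty of room.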
Turbonomic is a pretty unique product for many reasons. One of my favorites is ease of installation and time to ROI. Typically, when a new customer joins the family, we provide an RSE (Rapid Success Engagement) consisting of two days of training and two days of professional services. Generally speaking, after that quick engagement a customer already has Turbonomic controlling parts of their environment.
If you are a data center professional who has been around the block, you understand that you either keep up with technology or become obsolete. You are probably looking at some of the newer IaaS and PaaS offerings, perhaps investigating Docker or other container technologies for your organization. The public cloud providers are an option that becomes more real by the day; however, high barriers to adoption will probably lead you to test OpenStack, the leading private cloud solution in today’s market, and explore the possibilities of building an open source IaaS while moving away from vendor lock-in.
It is difficult to decide how much hardware your datacenter needs to assure application performance. On top of the huge complexity we discussed before, I would like to concentrate on a single consideration: peak utilization demand.
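As a rough illustration of why peak demand, rather than the average, drives sizing, here is a short sketch with entirely made-up hourly utilization numbers.

```python
import statistics

# Hypothetical hourly CPU utilization (%) for one host over a day.
hourly_cpu = [20, 18, 15, 14, 16, 25, 40, 55, 70, 85, 92, 95,
              90, 88, 80, 75, 60, 50, 45, 35, 30, 25, 22, 20]

avg = statistics.mean(hourly_cpu)
peak = max(hourly_cpu)

print(f"Average utilization: {avg:.0f}%")   # looks comfortable
print(f"Peak utilization:    {peak}%")      # this is what users actually feel

# Sizing to the average starves the busy hours; sizing to the peak protects
# performance but leaves capacity idle most of the day.
```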
IT budgets are not increasing. In fact, Gartner indicates that global IT spending is shrinking. Since the demand for IT only grows, IT professionals are left with the difficult task of doing more with less. Cutting costs is inevitable and has to be done intelligently. Here are five suggestions for surviving the process:
Our industry changes fast and is driven by disruptive companies that rethink what is possible. Fifteen years ago, no one thought virtualization could be done with open source solutions, no one dreamt of containers, and, most importantly, it was crystal clear to most organizations that their infrastructure needed to stay in-house: their data had to physically sit inside the company, first and foremost because of security, closely followed by cost.
Lately it seems to me like migration season is upon us. Specifically, workload migrations. On every other customer call I join, I’m asked how VMTurbo can assist in planning migrations across datacenters, clusters, hypervisors, and so on. Generally speaking, with or without VMTurbo, the planning is a three-step process.
New approach to IT operations management
100 years ago, Albert Einstein came up with the theory of general relativity. The theory was confirmed during the solar eclipse of May 29, 1919, exactly 96 years ago today.