Today’s CIO needs to be a change agent. Leading companies count on CIOs to shift the value of IT from “keeping the lights on” to innovation, and they measure CIOs on how well IT differentiates the company and improves its position in the marketplace.
The HashiCorp products are definitely rising in popularity, and with good reason. I recently interviewed Dave McJannet, CEO of HashiCorp, and talked about the overall HashiCorp ecosystem.
In my latest blog, “Less Troubleshooting or Less Troubles,” I questioned the goal of modern monitoring tools to collect as much data as possible at the highest level of granularity. I argued that the monitoring market is so caught up in the race for more data that it has forgotten its true purpose: keeping your virtualized environment performant and efficient.
One question many civil engineers receive is, “What is the proper ratio of the depth of a building’s foundation to the height of the building?” Many claim that the safest buildings have a foundation depth identical to the height of the building, while others say a depth one-third of the total height is safe enough. Technically speaking, neither assumption is correct. Civil engineers calculate their best estimate based on external factors, such as the soil that will sit underneath the finished structure, and must account for the foreseeable pressure that the structure will exert upon the ground.
Anybody with experience managing antivirus scans, updates, boot storms, or any other of a handful of resource-intensive tasks in a VDI environment knows that maintaining performance during these tasks can be daunting, if not impossible. The severe degradation of service during an antivirus storm is enough to kill worker productivity, and in some cases it even prevents companies from fully rolling out VDI solutions.
Within our data centers, more often than not we are striving to accomplish two things simultaneously. On the one hand, we want to make sure the applications running in our data center are getting the resources they need. On the other hand, we want to do so for the least amount of money possible; that is, get the most out of the infrastructure while at the same time assuring application performance. Really, that is one of the huge benefits of virtualizing in the first place. With capabilities such as live migration of workloads and high availability, virtualization helped make sure that workloads got what they needed. As for efficiency, virtualization helped in that regard with overcommit, and more specifically with thin provisioning, which is what I would like to talk about today: managing thin provisioning in, say, a virtual desktop environment.
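To make the overcommit idea concrete, here is a minimal sketch, with all figures invented for illustration, of how thin provisioning lets the sum of provisioned virtual disk sizes exceed the datastore’s capacity as long as what the desktops actually consume stays safely below it:

```python
# Hypothetical illustration of thin-provisioning overcommit: the sum of
# provisioned virtual disk sizes can exceed the datastore, provided
# actual consumption stays under capacity. All numbers are made up.

datastore_capacity_gb = 2000

# (provisioned_gb, consumed_gb) per virtual desktop
desktops = [(100, 22), (100, 35), (100, 18)] * 10  # 30 thin-provisioned VMs

provisioned = sum(p for p, _ in desktops)
consumed = sum(c for _, c in desktops)

# Overcommit ratio: how much storage we promised vs. how much we own.
print(f"provisioned: {provisioned} GB "
      f"({provisioned / datastore_capacity_gb:.1f}x overcommit)")
print(f"consumed:    {consumed} GB "
      f"({consumed / datastore_capacity_gb:.0%} of capacity)")
```

The catch, of course, is that consumption only grows over time, which is exactly why thin provisioning needs to be actively managed rather than set and forgotten.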
It is very trendy to discuss and compare the methods by which IT monitoring tools collect data. Different vendors explain why they collect the best data, from the best sources, and at the finest level of granularity. One example is VMware’s “Why data granularity matters in monitoring” blog.
Let me start by admitting that Excel is one of my favorite tools in the history of tools. I am an Excel junkie! I spend time going over new functions as they come out, I love the keyboard shortcuts, and I’ve probably written hundreds of macros. It is a phenomenal tool.
Capacity planners often face a difficult decision, because much of their job requires balancing the challenging tradeoff between application performance and infrastructure efficiency. The responsibility of the capacity planner is to understand when more hardware is needed to assure application performance while, at the same time, avoiding wasted hardware. The traditional approach involves figuring out the current excess capacity, then trying to match it with future growth. This is usually done against key metrics such as CPU and memory.
Dubbing your product the DC/OS, as in the data center operating system, may seem like a rather bold statement. In my opinion, DC/OS has the potential to really hit that mark in a number of ways. Through this little miniseries of posts we will take a look at the DC/OS platform and walk through a real deployment and usage on a couple of different underlying platforms.