Within our data center, more often than not we are trying to accomplish two things at once. On the one hand, we want to make sure the applications running in our data center get the resources they need. On the other, we want to do so at the lowest possible cost. In other words, we want to get the most out of the infrastructure while assuring application performance. That is one of the biggest benefits of virtualizing in the first place. With capabilities such as live migration of workloads and high availability, virtualization helped make sure that workloads got what they needed. As for efficiency, virtualization helped there too, with overcommit, and more specifically with thin provisioning, which is what I would like to talk about today: managing thin provisioning in, say, a virtual desktop environment.
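To make the overcommit idea concrete, here is a minimal, purely illustrative sketch (not from the post; the desktop counts, disk sizes, and datastore capacity are hypothetical numbers) showing how thin provisioning lets the storage promised to virtual desktops exceed what the array physically holds:

```python
def overcommit_ratio(allocated_gb: float, physical_gb: float) -> float:
    """Ratio of capacity promised to VMs vs. capacity the datastore actually has.

    A ratio above 1.0 means the environment is overcommitted: the thin-provisioned
    promises only work as long as desktops do not all fill their disks at once.
    """
    return allocated_gb / physical_gb


# Hypothetical VDI example: 100 desktops, each thin-provisioned a 40 GB disk,
# all backed by a single 2 TB (2048 GB) datastore.
allocated = 100 * 40   # 4000 GB promised to the desktops
physical = 2048        # GB the datastore actually provides
ratio = overcommit_ratio(allocated, physical)
print(f"Overcommit ratio: {ratio:.2f}x")
```

The ratio is the quantity a VDI administrator has to keep watching: efficiency comes from pushing it above 1.0, while the risk discussed in this post comes from letting actual consumption creep toward the physical limit.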
There has been tremendous buzz around containers, and specifically Docker containers, which began as an open-source project only three years ago. The idea of a single piece of software packaged in a complete filesystem containing everything it needs to run is incredibly appealing. It means deployment can happen more rapidly, efficiency can be taken to the next level, and containers run the same regardless of the infrastructure underneath. It is no wonder that there have now been over 2 billion Docker container downloads.
Given industry growth and consolidation among bigger market participants, there is no margin for error in eCommerce applications, especially for smaller players. Customers expect responses within tens of milliseconds. These demanding requirements can only be met by understanding the architecture of a robust enterprise eCommerce website and its bottlenecks.
Many customers I speak with are looking toward the cloud. But more often than not, when I ask what cloud means to them, they respond that they are looking to build their own private cloud infrastructure rather than outsourcing their entire environment to a third party. Some infrastructure teams are experimenting with OpenStack or looking to layer vCloud Director on top of their current hypervisor. In this post, I am going to focus on another option, System Center Virtual Machine Manager, and how VMTurbo gains even deeper insight into your Hyper-V environment, driving even greater control of your datacenter.
We are living in a world with a rising number of workloads that not only live within controlled, on-premises data centers, but also move out into the public cloud and even across multi-cloud environments. Even though this new world order presents a fantastic opportunity for increased agility and speed, a new set of challenges comes with the complexity of managing those workloads in and across a variety of public clouds. To name a few that I have been hearing from customers: Does my data need to be localized to a specific region of the world due to regulation? Which virtual servers does my application consist of? Do they need to be in the same cloud? Where are my customers using my applications? What budget have I allocated for my applications in the cloud?
One of the great things about rapid software release cycles is the ability to constantly and quickly tweak the product based on customer feedback. With the Turbonomic Operations Manager 5.5 release, we have done just that. This latest quarterly release delivers numerous benefits for our customers, and one of the main areas of focus is an even more user-friendly interface. Operations Manager 5.5 brings many enhancements that streamline how our customers use the product day in and day out. These include Maintenance 'Black Out' Periods, Action Restriction Window enhancements, streamlined Target Discovery, Plan View changes, and even more Dashboards.
As we embark on the New Year and the beginning of most companies' fiscal year, sales teams are hopefully recovering from a hectic and successful close to the fourth quarter. They have signed up new customers and added revenue to the business. For managed service providers, however, this means the infrastructure teams' heavy work is just beginning. Those new customers will need to be onboarded and deployed into their shared infrastructure. This can be a stressful time, as certain deals unexpectedly close while other business gets pushed out. As discussed in a previous blog post, VMTurbo adds tremendous value to service providers by minimizing latency and meeting SLAs within their virtual environments. In this post, however, I will walk through the challenges of mapping the right amount of infrastructure to an influx of new customers, as well as how VMTurbo helps onboard those customers and seamlessly deploy their new workloads into the datacenter.
More and more organizations are moving toward a DevOps-focused workflow for the deployment and management of their applications. The reason behind this trend is a desire to become increasingly agile and efficient while enabling the business to keep up with the rapid pace of change. The focus is now on the application, with an emphasis on rapid development and deployment, easy portability, and quick scalability.
Recently, some eye-popping numbers have been released regarding the tremendous growth of the public cloud and who, specifically, is leveraging it. At its most recent AWS re:Invent conference, Amazon announced that revenue grew 81% year over year. That growth rate is accelerating even as the revenue base increases. And it is not just that public cloud revenue is growing; the number of companies in the cloud is growing too. Nearly 57% of enterprises now have workloads in the cloud.
Challenge: Squeeze As Much as Possible from Infrastructure AND Deliver Service
I have recently been speaking with a number of service providers in the northeast U.S. and have heard again and again that the business is asking them to drive utilization as high as possible while still delivering a differentiated service. The IT staff is tasked with squeezing as much as they can out of their infrastructure to deliver services to their clients. For the business, this keeps costs down while acquiring new clients and growing the bottom line. But this directive raises a number of challenges.