I wrote an article last year on the challenges of price control in Azure Cloud that outlined a singular, but complex, challenge enterprises face in adopting a single cloud provider. As I speak with more customers this year, one thing is clear: multi-cloud is the future. And with so many nuances to each provider in terms of price, performance, security, and reliability, it's no wonder organizations are falling into the same trap we fell into when virtualization first began.
Last year was the year of OpenStack. People talked about it, they deployed it, they tested it, and they had grandiose plans for rolling it into production. Few, in my experience, succeeded on schedule or without technical challenges and skill-set gaps on their teams. While OpenStack is still being leveraged in many of my customer accounts, I am finding more and more people gravitating to alternatives like Azure and AWS for elastic compute and development project needs.
The shift from rule-based automation management
As we approach 2016, it is interesting to consider the evolution virtualization has taken us on, and where it will lead us in the New Year. For me personally, I noticed one very big shift in virtualization management this year. The change is two-fold.
We left off our last article with a look at the trend of microservices, and how organizations are looking to embrace this new approach. In this stage of the adoption curve, more companies are investigating this modular way of deployment.
When I think about the conversations around IT strategy I was having with people in the community last year compared to those this year, I realize that plans around "virtualized everything" have taken on a whole new meaning in the enterprise.
VMTurbo's Journey
VMTurbo’s ability to control virtualized environments in a constant state of automated service assurance has been at the center of the platform’s value proposition since its inception. By creating an economic abstraction of the datacenter that automatically brokers workload demands to the underlying supplies of resources, VMTurbo enables organizations to assure application performance while simultaneously increasing efficiency across multiple layers of the IT Stack.
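VMTurbo's internal market mechanics are proprietary, but the brokering idea described above can be loosely illustrated with a toy sketch. In the example below (all names, numbers, and the pricing curve are my own assumptions, not VMTurbo's actual algorithm), each host quotes a price for its capacity that climbs steeply as utilization rises, and each workload "buys" resources from the cheapest host:

```python
# Toy illustration of demand brokered to supply via price.
# Not VMTurbo's actual algorithm -- hosts, workloads, and the
# pricing curve here are invented for illustration only.

def price(used: float, capacity: float) -> float:
    """Price grows sharply as utilization approaches 100%."""
    u = used / capacity
    return 1.0 / (1.0 - u) ** 2 if u < 1.0 else float("inf")

def place(workloads, hosts):
    """Greedily place each workload's demand on the cheapest host."""
    placement = {}
    for name, demand in workloads:
        # Quote the price each host would charge *after* taking the demand.
        cheapest = min(hosts, key=lambda h: price(h["used"] + demand, h["cap"]))
        cheapest["used"] += demand
        placement[name] = cheapest["name"]
    return placement

hosts = [{"name": "host-a", "used": 6.0, "cap": 8.0},
         {"name": "host-b", "used": 2.0, "cap": 8.0}]
workloads = [("vm1", 1.0), ("vm2", 1.0)]
print(place(workloads, hosts))  # both VMs land on the less-loaded host-b
```

The point of the price curve is that a nearly full host becomes prohibitively "expensive," so demand naturally flows toward underutilized supply, which is the intuition behind assuring performance and efficiency at the same time.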
In my previous blog we discussed the idea of controlling workload demand through the Live Migration capability in OpenStack. Let's extend this discussion by incorporating an additional lever that humans can use to control performance while increasing efficiency: sizing workloads to OpenStack flavors. Before workloads are provisioned into the environment, users need to build Nova flavors that define the custom specifications for those VMs. OpenStack offers flexibility in how to create, adjust, and remove these flavor specs, as seen below.
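As a rough sketch of the sizing lever, the snippet below picks the smallest flavor whose specs cover a workload's demand. The flavor names and specs mirror the stock `m1.*` examples commonly shown in OpenStack documentation, but they are hard-coded assumptions here; in a real deployment the list would come from `openstack flavor list` or the Nova API, and flavors are created with commands like `openstack flavor create --vcpus 2 --ram 4096 --disk 40 <name>`:

```python
# Sketch: right-size a workload to the smallest OpenStack flavor that fits.
# Flavor specs are illustrative placeholders, not pulled from a live cloud.
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Flavor:
    name: str
    vcpus: int
    ram_mb: int
    disk_gb: int

FLAVORS = [
    Flavor("m1.tiny",   1,   512,  1),
    Flavor("m1.small",  1,  2048, 20),
    Flavor("m1.medium", 2,  4096, 40),
    Flavor("m1.large",  4,  8192, 80),
]

def best_fit(vcpus: int, ram_mb: int, disk_gb: int) -> Optional[Flavor]:
    """Return the smallest flavor (RAM first, then vCPUs, then disk)
    whose specs cover the requested demand, or None if nothing fits."""
    candidates = [f for f in FLAVORS
                  if f.vcpus >= vcpus and f.ram_mb >= ram_mb and f.disk_gb >= disk_gb]
    return min(candidates, key=lambda f: (f.ram_mb, f.vcpus, f.disk_gb), default=None)

# A workload needing 2 vCPUs, ~3 GB RAM, and 30 GB disk fits m1.medium.
print(best_fit(2, 3000, 30).name)
```

Oversizing wastes the efficiency gains flavors are meant to provide, while undersizing risks performance, so the "smallest flavor that fits" rule is one simple way to express the trade-off this post is describing.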
In our previous article in the series, we discussed challenges associated with managing storage capacity for PACS inside a virtual environment. While managing storage capacity remains one of the most difficult tasks due to its cost and compliance implications for the hospital, we also need to think about how our users will actually access that capacity when they need patient information and follow-ups.
We keep hearing about more and more OpenStack arrangements lingering in the background of our customers' overall go-forward strategies. I think to myself: who can blame them? The allure of OpenStack (aside from being free) revolves largely around a modular approach to project development and an open source community coming together for the common goal of saving the world from expensive licensing costs and vendor lock-in. It's a great pie-in-the-sky vision that offers the innovators of the virtualization age the opportunity to create a truly flexible Infrastructure-as-a-Service (IaaS) platform.
The PACS Storage Capacity Management Challenge
So what trade-offs do healthcare organizations need to make today with regard to storage capacity alone for PACS workloads and anticipated growth projections?