A recent study highlighted that adoption of VDI by healthcare organizations will grow by 150% over the next three years. Many hospitals and healthcare organizations have already realized the benefits of server virtualization and are taking the next leap forward by virtualizing desktops as well. The cost savings and agility provided by a centralized management platform for doctors and staff allow medical care to be delivered faster and more efficiently, while also enabling faster adoption of EMR/EHR, enhancing workforce mobility, and supporting the increased scale at which global healthcare organizations now operate. It seems obvious why many healthcare organizations are moving in this direction; however, VDI comes with its own set of unique challenges, especially in the healthcare industry, and there are several key tips to consider throughout the transition.
Paradigms and ideologies tend to shift very quickly in IT these days, and keeping the business agile is arguably one of the most important components of a successful company. The faster a company can adopt, test, and release the newest applications, the larger its advantage in the market. IT agility is most likely the reason why “DevOps” is such a big buzzword.
Recently introduced in vSphere 6.0, Virtual Volumes (more commonly referred to as VVols) provide SAN administrators with a more refined and granular approach to storage management. Managed correctly, VVols can make SAN administration and VM deployments much faster and simpler. Traditionally, administrators construct datastores within vSphere on top of LUNs, which map VMs to storage pools. Since shared resources are distributed equally among all virtual machines on a LUN, applications that reside on the same LUN mapping get the same storage performance. This presents problems when defining SLAs or QoS on a per-VM basis, because everything must be defined at the LUN/storage-pool level. For example, you probably don’t want the same storage policies defined for Microsoft SQL Server as for your file and print servers.
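The per-LUN vs. per-VM distinction can be made concrete with a minimal sketch. The classes and policy names below are invented for illustration, not the vSphere API; the point is only that in the LUN model every VM inherits one shared policy, while in the VVols model each VM carries its own.

```python
# Illustrative sketch (hypothetical classes, not the vSphere API):
# contrast per-LUN storage policy with a VVols-style per-VM policy.
from dataclasses import dataclass

@dataclass
class StoragePolicy:
    name: str
    max_iops: int
    replicated: bool

# Traditional model: one policy per LUN; every VM on that LUN inherits it.
lun_policy = StoragePolicy("gold-lun", max_iops=10_000, replicated=True)
lun_vms = {"sql01": lun_policy, "printsrv": lun_policy}  # forced to match

# VVols model: each VM carries its own policy, independent of placement.
vvol_vms = {
    "sql01": StoragePolicy("gold", max_iops=10_000, replicated=True),
    "printsrv": StoragePolicy("bronze", max_iops=500, replicated=False),
}

# On the LUN, SQL and print share one SLA; with VVols they diverge.
assert lun_vms["sql01"] is lun_vms["printsrv"]
assert vvol_vms["sql01"].max_iops != vvol_vms["printsrv"].max_iops
```

In the VVols case the print server can drop replication and its IOPS ceiling without touching the SQL Server policy, which is exactly the per-VM granularity the LUN model cannot express.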
Undoubtedly, one of the most widely used virtualization features is the clustering of physical servers. While failover acts as an airbag during a failure, I often find that these cluster boundaries actually limit the overall environment’s efficiency or lead to unnecessary capital spend. The tradeoff is actually simple: the more barriers and restrictions you put on an infrastructure (think N+1, affinity rules, clusters, etc.), the less resource availability an environment has.
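The N+1 cost is easy to quantify. A rough sketch, with invented cluster sizes rather than figures from any real environment: reserving one host’s worth of headroom removes that host’s capacity from what workloads can actually use.

```python
# Hedged sketch: how an N+1 reservation shrinks usable cluster capacity.
# Host counts and GHz figures below are illustrative only.
def usable_capacity(hosts: int, host_capacity_ghz: float,
                    reserved_failures: int = 1) -> float:
    """Capacity left for workloads after reserving whole-host failover headroom."""
    return (hosts - reserved_failures) * host_capacity_ghz

raw = 8 * 50.0                         # raw cluster capacity: 400 GHz
with_n_plus_1 = usable_capacity(8, 50.0)  # 350 GHz usable
print(f"headroom cost: {raw - with_n_plus_1:.0f} GHz")  # 50 GHz
```

Note the scaling: the same N+1 rule costs 12.5% of an 8-host cluster but 25% of a 4-host cluster, which is why many small, restrictive clusters tend to strand more capacity than fewer large ones.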
Life Was Simpler Back Then…
Remember back in the early 2000s, when virtualization first emerged as a financially scalable solution and most IT shops only virtualized a small fraction of their workloads? It was much easier back then: we had fewer virtualized components to keep track of, which meant a smaller range of decisions and opinions that could potentially exist. But as we bridge the gap between then and now, not only are we virtualizing, on average, 80% of our workloads, but the wide array of new technology and platforms has sparked new paradigms and ways of managing cost, risk, and complexity in our environments. It has become very clear that virtualized infrastructures have become much more volatile, and managing that volatility is increasingly difficult.
Undoubtedly, flash storage is one of the most expensive commodities we can buy for our data centers. As a result, most companies are forced into a hybrid model where they run a combination of disk-based and flash storage. Notably, as enterprise adoption of flash increases, continuously prioritizing workloads between flash and disk is crucial to both performance and efficiency. In other words, workloads that demand a lot of IOPS should be able to access flash storage, while more idle workloads can make do with disk. The challenge is ensuring the specific storage demands of these workloads are met continuously and in real time.
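One simple way to frame that prioritization is a greedy placement: sort workloads by observed IOPS demand and fill the flash tier’s budget from the top. This is a minimal sketch with made-up VM names and demand figures, not any vendor’s tiering algorithm; real arrays rebalance continuously rather than once.

```python
# Illustrative sketch (not a vendor API): greedy placement of workloads
# on a limited flash tier by observed IOPS demand.
def place_workloads(demands: dict, flash_iops_budget: int) -> dict:
    """Assign the hottest workloads to flash until its IOPS budget is spent."""
    placement = {}
    remaining = flash_iops_budget
    for vm, iops in sorted(demands.items(), key=lambda kv: kv[1], reverse=True):
        if iops <= remaining:
            placement[vm] = "flash"
            remaining -= iops
        else:
            placement[vm] = "disk"
    return placement

demand = {"sql01": 8000, "exchange": 5000, "fileserver": 300, "printsrv": 50}
tiers = place_workloads(demand, flash_iops_budget=12_000)
print(tiers)  # sql01 lands on flash; exchange no longer fits and falls to disk
```

Because demand shifts over time, a static placement like this decays quickly; the “continuously and in real time” requirement means re-running the decision as workloads heat up and cool down.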
Originally introduced in Windows Server 2012, Microsoft’s SMB 3.0 protocol has become the standard way Windows systems share files and folders. SMB 3.0 introduced the ability to support Hyper-V virtual machines and SQL Server databases. Rather than identifying where a VM lives by a drive letter and directory, SMB uses a UNC (Universal Naming Convention) path. This allows greater scalability and organizational management of file shares across scale-out storage arrays like NetApp and EMC. SMB 3.0 improves on its predecessor with many new features, but I find the top three are improvements in speed, the introduction of fault tolerance, and support for live VMs on a file server. In fact, compared to Cluster Shared Volumes, SMB’s block-based counterpart, SMB 3.0 is much cheaper and requires less configuration on the hypervisor.
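The drive-letter vs. UNC distinction is easy to see in path form. A small sketch using Python’s standard `pathlib`; the server and share names below are invented for illustration:

```python
# Small sketch: a VM's virtual disk addressed by UNC path instead of a
# local drive letter. Server/share names are made up for illustration.
from pathlib import PureWindowsPath

drive_letter_path = PureWindowsPath(r"D:\VMs\web01\web01.vhdx")
unc_path = PureWindowsPath(r"\\fileserver01\hyperv-share\web01\web01.vhdx")

# A drive letter is only meaningful on one host; a UNC path names the
# file server and share directly, so any host that can reach the share
# resolves the same location.
print(drive_letter_path.drive)  # D:
print(unc_path.drive)           # \\fileserver01\hyperv-share
```

That host-independence is what lets a VM stored on an SMB 3.0 share migrate between Hyper-V hosts without remapping storage.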
The push toward virtualizing bigger and more mission-critical applications in today’s IT environments is emphasizing the importance of maintaining high availability. After all, if mission-critical applications go down, the impact can range from lost revenue to a sudden halt in business productivity, depending on the workloads. Thankfully, most hypervisors support HA features that allow us to create rules or impose limitations on resource utilization so that spare capacity exists in the event of a failure. While HA is great for uptime, maintaining it is a difficult problem.
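The core question behind those HA rules can be sketched as an admission-control style check: if one host fails, can the survivors absorb the total load? This is a simplified model with invented load figures, not any hypervisor’s actual algorithm (real HA also considers memory, slot sizes, and per-VM reservations).

```python
# Hedged sketch of an HA admission-control style check: after losing one
# host, does the remaining capacity still cover total load? Figures are
# illustrative only.
def tolerates_host_failure(host_loads_ghz: list, host_capacity_ghz: float) -> bool:
    """True if the cluster's total load fits on all-but-one host."""
    total_load = sum(host_loads_ghz)
    surviving_capacity = (len(host_loads_ghz) - 1) * host_capacity_ghz
    return total_load <= surviving_capacity

print(tolerates_host_failure([30.0, 28.0, 25.0], 50.0))  # True:  83 <= 100
print(tolerates_host_failure([45.0, 44.0, 46.0], 50.0))  # False: 135 > 100
```

The second cluster runs at 90% utilization and looks healthy until a host fails, which is why HA maintenance is a continuous balancing act between headroom and efficiency rather than a one-time configuration.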
Microsoft’s virtualization platform Hyper-V has evolved significantly over the years, adapting to emerging market trends and changing what we manage. Yet while Microsoft has done a great job of creating a more attractive hypervisor with new bells and whistles, little has changed about the mode of operations and how we manage a Hyper-V environment.