One question many civil engineers receive is, “What is the proper ratio of the depth of a building’s foundation to the height of the building?” Many claim that the safest buildings have a foundation as deep as the building is tall, while others say a depth of one-third of the total height is safe enough. Technically speaking, neither assumption is correct. Civil engineers calculate their best estimate from external factors, such as the soil that will sit underneath the finished structure, and must account for the foreseeable pressure the structure will exert on the ground.
Rankine’s Theory, or the theory of maximum normal stress, estimates the pressure on the earth by predicting the active and passive pressure coefficients. The theory essentially assumes that failure occurs when the maximum principal stress reaches the material’s ultimate tensile strength. Tensile stress arises when equal and opposite forces are applied to a structure, leading to an increase in length in the tensile direction (i.e., being stretched). The relationship between the active pressure (the building) and the passive pressure (the soil) can be simplified: the two coefficients are reciprocals of each other… so how does this relate to our Virtual Desktop Infrastructure?
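The reciprocal relationship falls directly out of the standard Rankine coefficient formulas. A minimal sketch in Python (the 30° friction angle is just an illustrative value, typical of sand):

```python
import math

def rankine_coefficients(phi_degrees: float) -> tuple[float, float]:
    """Return (Ka, Kp): Rankine's active and passive earth-pressure
    coefficients for a soil with internal friction angle phi."""
    phi = math.radians(phi_degrees)
    ka = (1 - math.sin(phi)) / (1 + math.sin(phi))  # active coefficient
    kp = (1 + math.sin(phi)) / (1 - math.sin(phi))  # passive coefficient
    return ka, kp

ka, kp = rankine_coefficients(30)  # illustrative friction angle
print(round(ka, 3), round(kp, 3))   # Ka = 1/3, Kp = 3 at phi = 30 deg
print(math.isclose(ka * kp, 1.0))   # the two coefficients are reciprocals
```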
When architecting a VDI implementation, the first calculation is its demand, or “height.” Now that we are virtualizing desktops, how many seats do we want to virtualize? This is the active pressure coefficient in our calculation. In the simplest form of Rankine’s Theory, this is all the information we need. We then just need to calculate how much storage, compute, and networking are required to support the set number of seats, and, presto – desktop virtualization is easier than we thought!
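In that simplest form, sizing really is just multiplication. A toy sketch, where every per-seat figure is an illustrative assumption rather than vendor guidance:

```python
def naive_vdi_sizing(seats, vcpu_per_seat=2, ram_gb_per_seat=4,
                     storage_gb_per_seat=40, iops_per_seat=20):
    """Naive capacity estimate: multiply assumed per-seat resource
    figures by the seat count. All defaults are illustrative only."""
    return {
        "vcpus": seats * vcpu_per_seat,
        "ram_gb": seats * ram_gb_per_seat,
        "storage_gb": seats * storage_gb_per_seat,
        "iops": seats * iops_per_seat,
    }

print(naive_vdi_sizing(500))  # e.g. 500 seats -> 1000 vCPUs, 2000 GB RAM
```

As the rest of this post argues, this linear model is exactly what real VDI demand refuses to follow.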
Unfortunately, measuring the needs of a VDI deployment is not as easy or constant as measuring the height of a building. To appropriately forecast VDI demand, we need to calculate not only the resources that make up VDI seats, but also the configurations we include within our implementation, on an ongoing basis.
In this post, we discuss the architectural differences between persistent and non-persistent virtual desktops, and dissect the trade-offs inherent in each.
Persistent vs Non-Persistent VDI: The Difference
Persistent desktops are a form of desktop virtualization in which end-users maintain their personalized settings, store data, and configure their instance so that their specific desktop is retrievable each time they log in. Non-persistent desktops, on the other hand, are stateless desktops where the end-user cannot retain data or configure an instance, because the desktop is destroyed at the end of the session.
One of the many benefits of desktop virtualization is centralized management, including configuration, updates, and security. To simplify this management, VDI providers devised the concept of “golden images” or “master images,” depending on whether you choose VMware or Citrix, respectively, as your VDI provider. Regardless of the name, both serve the same purpose: the administrator creates a disk image that can be duplicated an unlimited number of times to deploy a catalog, or pool, of desktops. Each individual copy of the parent disk image is called a “linked clone” (a term typically associated with VMware VDI technology) or, more recently, an “instant clone.” The difference between the two lies in the time it takes to deploy the child VM: with instant clones, the average time to deploy a desktop drops from 10 seconds to 2. VMware achieved this decrease with its Instant Clone Technology, dubbed “vmFork,” which uses rapid in-memory cloning and copy-on-write optimization to isolate each clone’s changes while sharing unchanged information with the parent VM and across the pool.
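Copy-on-write can be illustrated with a toy model (this is a conceptual sketch only, not how vmFork is actually implemented): reads fall through to the shared parent image, and only a clone’s own writes land in its private delta.

```python
class ParentImage:
    """Toy parent (golden) disk image: a mapping of block -> data."""
    def __init__(self, blocks):
        self.blocks = dict(blocks)

class LinkedClone:
    """Toy linked clone: reads fall through to the shared parent unless
    the block has been rewritten locally (the copy-on-write delta)."""
    def __init__(self, parent):
        self.parent = parent
        self.delta = {}           # only this clone's modified blocks live here

    def read(self, block):
        return self.delta.get(block, self.parent.blocks[block])

    def write(self, block, data):
        self.delta[block] = data  # the parent image is never modified

parent = ParentImage({0: "os", 1: "apps"})
clone_a, clone_b = LinkedClone(parent), LinkedClone(parent)
clone_a.write(1, "apps+patch")
print(clone_a.read(1), clone_b.read(1))  # clone_b still sees the parent's block
```

The design point the analogy captures: a pool of clones shares one copy of everything it has not changed, which is what makes deployment (and destruction) so cheap.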
This pool of desktops is classified as non-persistent. The reason for the name is that, as easy as it is to deploy copies of a single VDI template, shutting down and destroying the VMs is just as easy. This is what is known as the many:1 ratio of desktop virtualization.
Persistent desktops are created from the same template mechanism, but differ from non-persistent desktops in that each has a unique disk image from then on. This is known, conversely, as the 1:1 ratio of desktop virtualization.
When to Use Persistent Desktops
With persistent VDI, each seat has its own distinctive disk image, and end-users get an experience closer to what they are accustomed to: they can customize their own virtual desktop and store their information, with the added benefit of accessing the contents of their personal desktop from any device, anywhere, at any time.
In a persistent architecture, the administrator still configures and defines the specifics of the base image, but the end-user is granted relative freedom to customize the desktop once he or she is granted credentials.
Storage for persistent desktops is usually a separate logical drive, with user data stored on the virtual desktop itself. As a result, persistent desktops are often preferred by end-users who handle sensitive data requiring higher levels of security. Because end-users can access their unique virtual seat and store personal and historical information on it, each persistent VDI desktop can become a greater storage consumer than a pool of non-persistent VDI desktops. Persistent desktops are generally perceived as the better end-user solution, but they are more laborious for VDI environment managers to architect and manage on a day-to-day basis.
When to Use Non-Persistent Desktops
The employment of master or golden images is a saving grace for the technical owner of the VDI environment. End-user IT management is simplified to the point where each end-user can be serviced as a momentary application instance, rather than as a unique consumer of a variety of complex applications and IT resources. Once the master image has been duplicated, each copy will behave like a brand-new desktop and can be managed for master changes, such as patching and general application updates, by the engineer responsible for the VDI environment. Managing hundreds or thousands of non-persistent desktops is just like managing one: the master image.
Therefore, if we have a largely homogeneous end-user base with rudimentary desktop needs that can be met through image-based management, non-persistent desktops are preferred. Common examples include:
- Call Centers
- Static Labs
The stateless approach is also seemingly more cost-effective. Its configuration separates end-user data from the OS, enabling the two data types to be treated independently and allowing end-user data to be placed on lower-cost storage hardware.
So how do we manage?
Returning to Rankine’s Theory, we have identified the sources of passive pressure – storage, compute, networking – needed to support our VDI deployment, and can now begin thinking about the force of active pressure on the environment. All we need to calculate is how many image clones we plan on deploying, then make sure we can support the planned number of VDI seats. Right?
If we have a solely non-persistent VDI environment, we can easily calculate the storage, compute, and networking needed to account for the maximum active pressure: every desired disk image clone in the VDI environment “logged on” at the same time. This basic calculation wouldn’t be inaccurate, just extremely conservative – and therefore very costly. What VDI engineers do instead is make an assumption about the foreseeable average number of non-persistent desktops that will be active at a single point in time, and set that as their maximum threshold. While this assumption is more cost-sensitive, random spikes in activity can still leave our environment at risk. If we only have persistent virtual desktops in our VDI implementation, the formula is virtually the same, but it also involves making storage-usage assumptions.
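The gap between the conservative and average-based estimates is easy to quantify. A sketch with hypothetical numbers (1,000 seats, 60% expected concurrency, 20 IOPS per seat – all assumed for illustration):

```python
def provisioning_estimates(total_seats, avg_concurrency, per_seat_iops):
    """Compare worst-case sizing (every seat logged on at once) with
    the average-concurrency assumption engineers often use instead."""
    worst_case = total_seats * per_seat_iops
    expected = int(total_seats * avg_concurrency * per_seat_iops)
    return worst_case, expected

worst, expected = provisioning_estimates(1000, 0.6, 20)
print(worst, expected)  # 20000 IOPS worst case vs 12000 at average concurrency
```

The 40% saving is exactly the capacity that a random spike in logins would have to eat into.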
Each option presents certain advantages and disadvantages depending on the use case. This is primarily because IT environments are not always homogeneous – they reflect the business use cases they support.
Therefore, VDI environments are now typically a mix of persistent and non-persistent desktops. A type and specific image are designated for each distinct pool of non-persistent desktops (the finance department, which needs QuickBooks in its disk image, would be a separate pool from the marketing department, which requires Photoshop, for example), plus separate disk images for each specific persistent desktop. The result is a concept called “image sprawl,” where the management benefits of VDI begin to diminish as environments accumulate tens to hundreds of distinct disk images.
To keep image sprawl to a minimum, VDI engineers have started to implement a practice called “layering.” In application layering, a base disk image is used for the majority of the end-user pool, and additional applications are installed into separate containers (such as .vhd or .vmdk files) that are then layered onto specific subgroups. The logic is that each layer can be managed independently while still having a universal impact on the pools it targets.
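The idea can be sketched as merging independent application layers onto a shared base – a toy model only (real layering products attach .vhd/.vmdk containers, not Python dicts), with hypothetical department layers:

```python
# Shared base image plus independently managed application layers
# (all names and contents here are hypothetical examples).
base_layer = {"os": "Windows 10", "browser": "installed"}
finance_layer = {"quickbooks": "installed"}
marketing_layer = {"photoshop": "installed"}

def compose(base, *layers):
    """Merge application layers onto a shared base image; later layers
    win on conflicts, and each layer can be updated independently."""
    image = dict(base)
    for layer in layers:
        image.update(layer)
    return image

finance_desktop = compose(base_layer, finance_layer)
marketing_desktop = compose(base_layer, marketing_layer)
print(sorted(finance_desktop))  # base apps plus QuickBooks, no Photoshop
```

Patching the base or a single layer updates every desktop composed from it, which is the whole appeal: one artifact per layer instead of one image per pool.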
Layering and mixed environments get us closer to the solution. However, it seems that where virtual desktops are concerned, Rankine’s Theory has failed us. If we were still architecting a building, we would be facing a project that resembles a game of Tetris more than a skyscraper: stories spontaneously appear and disappear, and floors fill up with items of varying and unpredictable mass.
The crux of the problem is not the massive increase in your IT budget to account for storage and resource costs, nor the required VDI specialization and advanced management tools one needs to implement tactics such as layering. What makes the persistent and non-persistent VDI question so arduous is the “unknown.”
Tangible resources aside, in both persistent and non-persistent VDI implementations, access is the real unknown coefficient in our calculation. By allowing end-users to access their VDI instances whenever they need to, we have created a mass-scale highway of information traffic. As users sign on and off, VMs get deployed and destroyed, applications get “pulled” from the main data center, and IOPS increase at an exponential rate, leaving end-users with levels of latency that are crippling to the health of the entire data center. VDI environments are among the most I/O-intensive systems one can implement within an IT infrastructure, and desktop consolidation creates highly unpredictable workload profiles, making I/O demands challenging to estimate.
Even with unlimited IOPS available in the environment, 100 distinct disk images downloading a patch turn into 100 distinct delivery agents churning vCPUs, and 100 post-installation reboots concurrently hitting their boot sectors… As is evident, instantaneous remote desktop accessibility is the true active pressure in our VDI deployment – and it is highly unpredictable.
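A back-of-the-envelope boot-storm estimate shows the shape of the problem. Both the steady-state figure and the 10x boot multiplier below are illustrative assumptions, not measured values:

```python
def boot_storm_iops(desktops, steady_iops=10, boot_multiplier=10):
    """Rough sketch: a booting desktop commonly demands several times
    its steady-state IOPS (the 10x multiplier here is an assumption)."""
    steady = desktops * steady_iops
    storm = steady * boot_multiplier
    return steady, storm

steady, storm = boot_storm_iops(100)
print(steady, storm)  # 1000 IOPS at steady state vs 10000 during a concurrent reboot
```

It is this spike, not the steady-state average, that the storage tier has to absorb when 100 clones reboot at once.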
Since our independent variable is constantly changing, the best way to calculate the “line of best fit” could be to predict end-user behavior and set conservative resource over-allocations, or it could be to actively manage the relationship between the active and passive pressures in our VDI environment.
To have a successful VDI deployment, one does need to have a sufficient resource pool, and luckily our VDI providers are constantly innovating to offer us the most advanced solutions. However, active resource allocation based on current IT environment demand is the best practice to ensure a successful and effective VDI deployment. Only by heeding Rankine’s Theory in real-time can we build to last.