Everyone is wondering why desktop virtualization has been slower to take off than server virtualization. The problem, to my mind, is that the first wave of desktop virtualization adopters treated it the same way they did server virtualization. However, the desktop is a totally different beast: desktops are very personal, highly customized, and run a nearly infinite variety of applications. They pose support, security and manageability issues unknown on the server side.
This first wave of adopters was creating VMs in the data center, placing a thin client or some other device on the client side, and giving users access that way. They quickly realized this solution does not scale, nor does it yield the results they were expecting. Needless to say, it is also a very expensive strategy. This led to a wave of pundits declaring that desktop virtualization is not ready.
Another group of adopters took the VMware and Citrix approaches of streaming and linked clones. They got much further and achieved better scalability, TCO and ROI.
A third group, the real geeky ones, understood that the problems in VDI are the applications and the user personalization, hence Unidesk.
Layering With A Twist
A new buzzword being thrown around today is "layering," the idea of separating certain components from the underlying operating system. For that matter, some might say that server virtualization is another form of layering, as it separates the hardware from the software that gets installed on it. Microsoft, Citrix and VMware can also say that they are doing layering with their application virtualization offerings, as they are layering applications on the operating system without installing them or modifying the operating system at all. They are right, they are layering on top of the operating system, and therein lies the problem. For years we have been trying to solve the issues of application compatibility, conflicts and isolation, and we have come up with great ideas and solutions, but all of them sit on top of the operating system.
Windows, today, is an OS that allows everything to be installed directly on it. The consequence is that applications, drivers and other software affect each other: a good application can be slowed down by the presence of a bad one, and a driver can crash the operating system. Add to all of that the fact that repairing Windows is difficult; often the only real fix is reimaging the machine back to its gold state and then restoring the data. Some of us reimage our machines every six months just to start fresh and improve performance.
Compare that with the very siloed and isolated approach we take with servers. Typically, each server (physical or virtual) is dedicated to an application; therefore, we eliminate the issues and stabilize the operating system. A good workaround for servers, but for desktops that would mean giving users a desktop for each application they want to use, clearly not a practical approach. All this being said, you can see how Windows is the weak link. Unidesk is taking a shot at fixing Windows itself.
Unidesk's Composite Virtualization addresses this issue by isolating everything in separate containers or layers. The operating system will always be in its own layer in read-only format, so it will never be modified. Applications are also separated in their own containers, and so is the user personalization and data. The best part of isolating these components is the fact that you can do snapshots at the personalization layer, say every 24 hours. So let's imagine a situation where the CFO somehow corrupted Excel; with the click of a button, you can roll back the personalization layer to a different point in time when Office was working and thereby repair that particular application. For a file, you can snapshot back to an earlier point in time where the file was not corrupted; all this without affecting the different layers, applications or other data.
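The rollback scenario above can be sketched in a few lines of code. This is a toy model of a writable personalization layer with point-in-time snapshots; the class and method names are illustrative assumptions, not Unidesk's actual API:

```python
from copy import deepcopy

class PersonalizationLayer:
    """Toy model of a personalization layer that supports
    point-in-time snapshots and rollback of just this layer."""

    def __init__(self):
        self.state = {}      # user settings and data
        self.snapshots = []  # (label, frozen copy) pairs

    def write(self, key, value):
        self.state[key] = value

    def snapshot(self, label):
        # Freeze the current state, e.g. on a 24-hour schedule.
        self.snapshots.append((label, deepcopy(self.state)))

    def rollback(self, label):
        # Restore only this layer; the OS and application
        # layers are untouched.
        for name, frozen in self.snapshots:
            if name == label:
                self.state = deepcopy(frozen)
                return
        raise KeyError(label)

layer = PersonalizationLayer()
layer.write("excel.settings", "good")
layer.snapshot("nightly")
layer.write("excel.settings", "corrupted")
layer.rollback("nightly")
print(layer.state["excel.settings"])  # good
```

The key design point is that the snapshot captures only the personalization layer, which is why a rollback can repair a corrupted application setting without disturbing the OS or application layers.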
Unidesk will provide a self-service, self-healing portal through which a user can repair her desktop, taking it back to a point in time when it was working. Rolling back an entire VM is possible with most VDI packages today. What is not available is the ability to roll back in time at the personalization-layer level, or to give the user a self-service portal to do so, and that is what differentiates Unidesk.
How Does It Work?
Unidesk's offering is currently VDI-only, which means it requires a hypervisor of some sort to function. Composite Virtualization operates on top of the hypervisor, but below the operating system. You first build your Windows gold image the way you are used to, and configure it to your liking. Once it is ready, you install the Unidesk converter, which will then import the gold image and inject the Unidesk driver under the file system. The imported gold image is then moved to the CacheCloud appliance, where it is stored. The CacheCloud appliance is the storage point that hosts all the virtual disks.
Once you have imported your gold image, any additional application that the user installs, or IT provisions, will be layered in its own isolated container. These layers will represent the different files and registry changes that this new application introduces. The power of Unidesk lies in the fact that while these files and registry settings are isolated and stored separately, they are still merged and presented to the OS as if they were locally installed. Now that's pretty cool. In addition, the layers that IT creates can also be versioned for easier management and support.
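The merge-and-present step works much like a union or overlay mount. Here is a minimal sketch of the idea, assuming a simple priority rule where higher layers win on conflicts; the layer contents are made up for illustration and this is not the actual Unidesk driver logic:

```python
def composite_view(*layers):
    """Merge isolated, separately stored layers into the single
    namespace the OS sees. Layers are given lowest-priority first,
    so application and personalization entries override the base OS
    on any conflicting path, mimicking an overlay/union mount."""
    view = {}
    for layer in layers:
        view.update(layer)
    return view

os_layer   = {r"C:\Windows\system32\kernel32.dll": "os-v1"}
app_layer  = {r"C:\Program Files\Excel\excel.exe": "app-v1"}
user_layer = {r"C:\Users\cfo\settings.ini": "personal"}

# The OS sees one file system even though the layers live apart.
desktop = composite_view(os_layer, app_layer, user_layer)
```

Because each layer stays isolated in storage, swapping or versioning one layer (say, upgrading the application layer) never touches the bits of the others.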
When a VM is created, it gets a very small virtual disk that contains the Windows page file and some necessary boot files needed to get this VM connected to the CacheCloud appliance. The CacheCloud then maps the necessary virtual disks to this VM, composites them together, and presents them as a single entity to the operating system. Think of this boot process as booting to SAN, instead here you are booting to virtual disks on the CacheCloud appliance.
One of the biggest hurdles for the adoption of VDI has been the storage costs associated with the initial build out and maintenance of the environment. Many software makers have tried to address this issue ... Citrix has its provisioning server, VMware its linked clones and others have deduplication solutions that at first glance sound very interesting. However, if you take a step back and think about it logically, if you can solve the duplication issue then you would not need to deduplicate. Citrix and VMware were both on the right track with the provisioning server and linked clones; they allow many users to share a single instance of an operating system, thereby reducing storage costs. The problem with both of these approaches is that you lose personalization, and the only thing you can really retain are profile changes.
VMware and Citrix are still taking Windows the way it is and applying technologies around it. Everyone is working around Windows, on top of Windows, but the problem is Windows itself.
By layering applications, user data and the operating system, you can then leverage the CacheCloud Appliance as a storage point to boot hundreds of VMs. This allows you to maintain the flexibility and control that we have been talking about. Now, all of a sudden, VDI is not that expensive; it is flexible and achievable.
The WAN, the WAN, the WAN: there is no escaping the WAN. How do you support VDI across the WAN? There is one caveat I don't think we will ever be able to get around: you will have to pay an initial replication tax, meaning that one way or another you have to get the initial files to the remote location. The ideal design would see an appliance deployed at every site where you have infrastructure or intend to provide VDI. That CacheCloud appliance then serves as the storage point for the VDI instances at that site. Assume your HQ is in Chicago but you have a VDI presence in London, where you have deployed a CacheCloud appliance, and you need to make a change, perhaps an application or OS upgrade. Typically that would mean moving the entire image to the remote site, paying that replication tax each time.
Unidesk's approach is smarter, it allows you to replicate at the block level and only replicate those bits that have changed, thereby significantly reducing update and replication times.
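The changed-bits idea can be sketched by hashing fixed-size blocks of the old and new images and shipping only the blocks whose hashes differ, the same principle behind delta-transfer tools like rsync. The 4 KB block size and function names here are assumptions for illustration, not Unidesk's implementation:

```python
import hashlib

BLOCK = 4096  # assumed block size for illustration

def block_hashes(data: bytes):
    """Hash each fixed-size block of an image."""
    return [hashlib.sha256(data[i:i + BLOCK]).digest()
            for i in range(0, len(data), BLOCK)]

def changed_blocks(old: bytes, new: bytes):
    """Return (index, bytes) for each block that differs: the only
    data that must cross the WAN after the initial replication."""
    old_h, new_h = block_hashes(old), block_hashes(new)
    delta = []
    for i in range(len(new_h)):
        if i >= len(old_h) or old_h[i] != new_h[i]:
            delta.append((i, new[i * BLOCK:(i + 1) * BLOCK]))
    return delta

# A 4-block image where only the second block changed:
gold_v1 = b"A" * BLOCK * 4
gold_v2 = gold_v1[:BLOCK] + b"B" * BLOCK + gold_v1[2 * BLOCK:]
print(len(changed_blocks(gold_v1, gold_v2)))  # 1
```

In this example only one of four blocks crosses the wire, which is why an application patch or OS tweak no longer costs a full-image replication to every remote site.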
The offline use case has most definitely been one of the biggest hurdles that has stood in the face of server-based computing adoption, even before VDI, going all the way back to Terminal Server. Users in today's fast-paced world need to be able to access their stuff offline.
Unidesk will leverage the type 1 client hypervisors that are available from a number of companies, including Citrix (beta already released) and VMware (coming soon). What Unidesk will do is deploy that same CacheCloud Appliance to the type 1 hypervisor and serve up VMs locally.
The best part of this approach is, again, the WAN friendliness. We can now update these VMs without having to push an entire image down to the user, just send the bits that have changed. Unidesk's technology finally allows us to harness the power of a centralized environment without losing the flexibility and customization that users expect in a desktop environment.
Going forward, I expect that Unidesk will be a tempting acquisition target. Microsoft would be my bet because a Unidesk acquisition would give it a technology that it can extend into its operating system, transforming Windows into a very dynamic and powerful OS of the future. Citrix could also make a strategic move here to enhance its position in the desktop virtualization race, and VMware is always a possibility to up its VDI cred. We'll be watching.
Elias Khnaser is the practice manager for virtualization and cloud computing at Artemis Technology, a vendor-neutral integrator focused on aligning business and IT.