We're seeing a significant climb in VDI implementation as old storage performance and cost roadblocks fall.

George Crump, President, Storage Switzerland

March 8, 2013

3 Min Read

While virtual desktop infrastructure (VDI) projects have been steadily increasing over the last few years, 2013 is seeing a significant spike in both interest and actual implementation.

There are two reasons for this. First, the cost of the client has come down while its capabilities have gone up, thanks to thin clients, tablets and thin laptops.

The other reason for the increase is that the storage roadblocks to successful adoption -- performance and cost -- are quickly being minimized, if not eliminated altogether. In fact, a case could be made that the removal of these storage roadblocks is the top reason behind the increased adoption of VDI.

One of the keys to VDI adoption, outside of the classic call center use case, has been the use of persistent desktops. This type of virtual desktop allows users to customize their virtual instance just as they could their physical systems, and to add their own applications and utilities.

[ Looking to update storage? See Why Flash Storage Excels In Virtual Environments. ]

The problem with persistent desktops is that, by default, they require hard storage allocation, which is expensive and erodes much of the VDI return on investment (ROI). Most hypervisors deal with this cost challenge by leveraging a common VDI image and then cloning it for each user. Each clone stores only the unique changes that its user makes and leverages the golden master for the areas that desktops have in common, like the operating system and core applications. If this technique is used to its fullest extent, the cost challenge of storage in VDI is largely solved.
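
To make the mechanics concrete, here is a minimal sketch in Python of the linked-clone idea (the class names are invented for illustration, not any hypervisor's actual API): reads fall through to the shared golden master unless the user has changed a block, and each user's writes land only in that user's own delta.

```python
# A minimal sketch (not any vendor's implementation) of linked clones:
# each clone stores only the blocks its user has changed and falls back
# to the shared golden master for everything else.

class GoldenMaster:
    """Read-only base image shared by every clone."""
    def __init__(self, blocks):
        self.blocks = blocks              # block_id -> data

    def read(self, block_id):
        return self.blocks[block_id]

class LinkedClone:
    """Per-user overlay: copy-on-write for writes, fall-through reads."""
    def __init__(self, master):
        self.master = master
        self.delta = {}                   # only this user's changed blocks

    def write(self, block_id, data):
        # Every write lands in the clone's own delta -- this is the
        # per-user storage the article describes.
        self.delta[block_id] = data

    def read(self, block_id):
        # Prefer the user's change; otherwise use the shared master.
        if block_id in self.delta:
            return self.delta[block_id]
        return self.master.read(block_id)

master = GoldenMaster({0: b"OS", 1: b"core apps"})
clone = LinkedClone(master)
clone.write(2, b"user data")
print(clone.read(0), clone.read(2))       # shared block, private block
```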

The problem with the golden master and linked clone approach is that each write has to be dynamically allocated as the user makes changes to the persistent desktop instance. If you think about your use of a desktop, once you've booted the system, much of your work is creating or editing data. Users' storage performance expectations are also well beyond what conventional wisdom suggests we budget for.

While writes may not outnumber reads as a percentage of traffic, write traffic is still quite high, and each write has to be dynamically allocated to storage. Essentially, writes are harder in a virtual world and, as we discussed in our article "Using Network Caching To Solve VDI Storage Problems," most storage systems give priority to write requests over read requests. So even if writes are the lower percentage, their complexity impacts read performance.
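
The sketch below, again with invented names, illustrates why a write is more work than a read in this allocate-on-write model: a read is a simple lookup, while the first write to a block must also find free space and update mapping metadata before any data can land.

```python
# A minimal sketch of a thinly provisioned (allocate-on-write) volume,
# showing the extra work on the write path relative to the read path.

class ThinVolume:
    def __init__(self, pool_size):
        self.free = list(range(pool_size))   # free physical blocks
        self.map = {}                        # logical -> physical mapping
        self.store = {}                      # physical -> data

    def read(self, logical):
        # One metadata lookup, then the data read.
        physical = self.map.get(logical)
        return self.store.get(physical)

    def write(self, logical, data):
        # A first write to a logical block must allocate physical space
        # and update the map -- extra steps on the critical write path.
        if logical not in self.map:
            self.map[logical] = self.free.pop(0)
        self.store[self.map[logical]] = data

vol = ThinVolume(pool_size=1024)
vol.write(7, b"edited document")     # allocation + metadata + data write
print(vol.read(7))                   # lookup + data read only
```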

The initial response to this storage-performance challenge was to add solid-state storage to the system and move much (if not all) of the VDI environment to it. This is really a sledgehammer approach to the problem, since the entire VDI environment does not need to be on SSD all the time. Lately we have seen technologies that lead to a more balanced approach that can keep costs in line while delivering the performance that users have become accustomed to.

These solutions range from using large amounts of DRAM in the VDI host, to using cost-effective deduplicated SSD appliances and arrays, to leveraging caching technologies in the server, network or storage system. My next entry will take a look at each of these to help you determine which might be the best for your environment.
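
As a taste of the deduplication approach, here is a minimal, hypothetical sketch of content-addressed block storage: because hundreds of virtual desktops share nearly identical OS and application blocks, identical blocks are stored once and referenced by hash, which is what keeps the flash footprint, and therefore the cost, in line.

```python
# A minimal sketch of inline deduplication: identical blocks across
# many near-identical desktop images consume flash capacity only once.

import hashlib

class DedupStore:
    def __init__(self):
        self.blocks = {}      # sha256 digest -> block data (stored once)
        self.refs = {}        # digest -> reference count

    def write(self, data):
        digest = hashlib.sha256(data).hexdigest()
        if digest not in self.blocks:
            self.blocks[digest] = data        # new unique block
        self.refs[digest] = self.refs.get(digest, 0) + 1
        return digest                         # caller keeps this handle

    def read(self, digest):
        return self.blocks[digest]

store = DedupStore()
# 100 desktops writing the same OS block consume one block of flash.
handles = [store.write(b"windows kernel block") for _ in range(100)]
print(len(store.blocks), store.refs[handles[0]])   # -> 1 100
```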


About the Author

George Crump

President, Storage Switzerland

George Crump is president and founder of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. With 25 years of experience designing storage solutions for datacenters across the US, he has seen the birth of such technologies as RAID, NAS, and SAN. Prior to founding Storage Switzerland, he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration, and product selection. George is responsible for the storage blog on InformationWeek's website and is a regular contributor to publications such as Byte and Switch, SearchStorage, eWeek, SearchServerVirtualization, and SearchDataBackup.

