How To Solve 2 VDI Performance Challenges

Here's how to fix storage performance problems that creep up after you consolidate hundreds of desktops onto a single host.

George Crump, President, Storage Switzerland

August 7, 2012

3 Min Read

In a recent column, Overcome Cost Challenges Of VDI, we looked at cost, the key roadblock to a virtual desktop infrastructure (VDI) project. If you can't justify the cost of the investment, then all the other issues are moot. The good news is that many storage systems have implemented cost-saving techniques that allow the VDI justification process to move to the next step: How to deal with the performance issues that the environment can cause.

It may seem odd that storage performance is a problem for VDI. After all, most users' desktops have very modest performance demands. But it is the consolidation of potentially hundreds--if not thousands--of desktops onto a single host that causes the problem. While each desktop may need only modest performance, the combined random storage I/O of so many users can put a storage system to the test. And, as we mentioned in our recent column, that performance has to be delivered cost effectively.
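As a rough illustration of why consolidation changes the math, consider a back-of-envelope calculation. The per-desktop figures below are illustrative assumptions, not measurements from any particular environment:

```python
# Back-of-envelope estimate of the aggregate random I/O that desktop
# consolidation creates. The per-desktop number is an illustrative
# assumption, not a measurement.

STEADY_IOPS_PER_DESKTOP = 15  # modest steady-state load per user (assumed)

def aggregate_iops(desktops: int, iops_per_desktop: int) -> int:
    """Total random IOPS the shared storage system must absorb."""
    return desktops * iops_per_desktop

# A single desktop is trivial; a thousand of them is not.
print(aggregate_iops(1, STEADY_IOPS_PER_DESKTOP))     # 15
print(aggregate_iops(1000, STEADY_IOPS_PER_DESKTOP))  # 15000
```

Fifteen random IOPS is nothing; 15,000 sustained random IOPS is a serious load for a spinning-disk array, which is the heart of the consolidation problem.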

Beyond the need to provide consistent performance to all of these desktops, there are two specific situations in the VDI use case that storage infrastructures need to prepare for. First there are boot storms, which occur when users all arrive at work and start the login process at about the same time. The storage system gets flooded with these requests and may become so saturated that it can take five to 10 minutes before users' desktops are ready to go.
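The boot-storm delay can be sketched with simple arithmetic: if every desktop needs a fixed amount of boot I/O and the array has a fixed IOPS ceiling, total boot time grows linearly with the number of simultaneous boots. Both figures below are illustrative assumptions:

```python
# Why a boot storm stretches login times: total boot I/O divided by the
# array's sustainable IOPS gives the time until every desktop is up.
# Both constants are illustrative assumptions.

BOOT_IO_OPS_PER_DESKTOP = 30_000  # I/O operations to boot one desktop (assumed)
ARRAY_IOPS_CEILING = 50_000       # what the shared array can sustain (assumed)

def boot_storm_seconds(desktops: int) -> float:
    """Seconds until all simultaneously booting desktops are ready."""
    return desktops * BOOT_IO_OPS_PER_DESKTOP / ARRAY_IOPS_CEILING

print(boot_storm_seconds(1))    # 0.6 -- one desktop boots almost instantly
print(boot_storm_seconds(500))  # 300.0 -- five minutes before everyone is in
```

The same array that makes one desktop feel instant leaves 500 simultaneous users staring at a login screen, which matches the five-to-10-minute waits described above.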

Much of the storage industry gets stuck on the boot storm issue and focuses solely on solving it. As we discuss in our recent article VDI Storage Performance Is More Than Just Boot Storms, there is an equally important second problem: the amount of write I/O that storage systems supporting VDI must handle. There is the standard write I/O that a user's desktop creates, but multiplied by 1,000 in the VDI. And there is also the write impact caused by the heavy use of thin-provisioned masters and clones. These are the cost-saving techniques that we described in our last column--and write I/O performance is their downside.

With thin provisioning and master/clone images, each time a new piece of data has to be written to a virtual desktop, additional capacity has to be allocated and prepared for the desktop's file system before the data can finally be written. The combination of all of these steps, again multiplied by thousands of desktops, can lead to latency that impacts user performance and inhibits user acceptance of the VDI project. Hypervisor file systems are particularly inefficient at managing these dynamic write-allocation issues.
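The allocate-prepare-write sequence described above can be sketched as a simple copy-on-write model. The latency figures are illustrative assumptions, and the class below is a toy model, not any vendor's implementation:

```python
# Toy model of the extra work a thin-provisioned clone adds to each
# first write: allocate capacity, prepare file-system metadata, then
# perform the write itself. Latency figures are illustrative assumptions.

ALLOCATE_MS = 0.5  # grow the thin volume (assumed)
PREPARE_MS = 0.5   # update desktop file-system metadata (assumed)
WRITE_MS = 1.0     # the write the user actually asked for (assumed)

class ThinClone:
    def __init__(self):
        self.allocated = set()  # blocks this clone owns (vs. the shared master)

    def write(self, block: int) -> float:
        """Return the latency in ms for one write to `block`."""
        latency = WRITE_MS
        if block not in self.allocated:  # first touch: copy-on-write path
            latency += ALLOCATE_MS + PREPARE_MS
            self.allocated.add(block)
        return latency

clone = ThinClone()
print(clone.write(7))  # 2.0 -- first write pays the allocation overhead
print(clone.write(7))  # 1.0 -- rewrite goes straight to allocated space
```

Doubling the latency of every first write is tolerable on one desktop; across thousands of freshly provisioned clones touching blocks for the first time, it is exactly the aggregate write penalty described above.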

The almost universal solution is to leverage solid state disks (SSDs) to alleviate both issues. The problem is deciding which SSD implementation method to use. There are tiering/caching approaches that automatically move the blocks of data needed for virtual desktop boot to an SSD tier, but some of these solutions do nothing for write I/O performance. There are flash-only array solutions that address both read and write traffic, but their cost premium needs to be dealt with.
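The read-side limitation of some caching approaches can be shown with a minimal sketch: a read-only SSD cache absorbs boot-storm reads but leaves write latency untouched. The figures and the class are illustrative assumptions, not a model of any specific product:

```python
# Minimal read-cache sketch showing why a read-only SSD tier helps
# boot-storm reads but does nothing for writes. Latency figures are
# illustrative assumptions.

SSD_READ_MS = 0.1
DISK_READ_MS = 5.0
DISK_WRITE_MS = 5.0

class ReadCache:
    def __init__(self):
        self.cache = set()  # blocks currently held on the SSD tier

    def read(self, block: int) -> float:
        if block in self.cache:
            return SSD_READ_MS   # cache hit: served from flash
        self.cache.add(block)    # miss: fetch from disk, then keep on SSD
        return DISK_READ_MS

    def write(self, block: int) -> float:
        self.cache.discard(block)  # invalidate so stale data is never read
        return DISK_WRITE_MS       # the write itself still lands on disk

c = ReadCache()
print(c.read(1))   # 5.0 -- first read misses and goes to disk
print(c.read(1))   # 0.1 -- boot blocks now come from SSD
print(c.write(2))  # 5.0 -- writes see no benefit from the read cache
```

Once the common boot blocks are hot, every subsequent desktop boots from flash; the write path, the second problem described above, is unchanged.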

Instead of throwing hardware at the problem, it may be better to address the root cause: the hypervisor's file system. As we discuss in our article How To Afford SSD for VDI, fixing the file system first may reduce the amount of SSD required to meet VDI performance demands.

The performance challenges that VDI creates for the storage infrastructure can be overcome. The key is to overcome them as cost effectively as possible. Leveraging solutions that deliver the right mix of storage efficiency techniques and the right amount of high-performance SSD can deliver the performance/cost balance needed to make the VDI project successful. In our final column in this series we will detail how storage systems should be designed to help you strike that balance.

Follow Storage Switzerland on Twitter.

Storage Switzerland's disclosure statement.

About the Author

George Crump

President, Storage Switzerland

George Crump is president and founder of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. With 25 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS, and SAN. Prior to founding Storage Switzerland, he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration, and product selection. George is responsible for the storage blog on InformationWeek's website and is a regular contributor to publications such as Byte and Switch, SearchStorage, eWeek, SearchServerVirtualization, and SearchDataBackup.

