Here's how to fix storage performance problems that creep up after you consolidate hundreds of desktops onto a single host.
In a recent column, Overcome Cost Challenges Of VDI, we looked at cost, the key roadblock to a virtual desktop infrastructure (VDI) project. If you can't justify the cost of the investment, then all the other issues are moot. The good news is that many storage systems have implemented cost-saving techniques that allow the VDI justification process to move to the next step: how to deal with the performance issues the environment can cause.
It may seem odd that storage performance is a problem for VDI. After all, most users' desktops have very modest performance demands. But it is the consolidation of potentially hundreds--if not thousands--of desktops onto a single host that causes the problem. While each desktop may need only modest performance, the combined random storage I/O of so many users can put a storage system to the test. And, as we noted in that recent column, the performance has to be delivered cost effectively.
Beyond the need to provide consistent performance to all of these desktops, there are two specific situations in the VDI use case that storage infrastructures need to prepare for. First, there are boot storms: the surge that hits the VDI when users all arrive at work and start the login process at about the same time. The storage system is flooded with these requests and may become so saturated that it can take five to 10 minutes before users' desktops are ready to go.
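The boot storm math above can be sketched in a few lines. This is a back-of-envelope estimate only; the per-desktop boot IOPS and the array's sustained IOPS are illustrative assumptions, not measured values:

```python
# Back-of-envelope boot storm estimate. All figures are illustrative
# assumptions for the sake of the arithmetic, not measured values.

DESKTOPS = 500              # desktops logging in at roughly the same time
BOOT_IOPS_PER_DESKTOP = 50  # assumed read IOPS one desktop needs to boot
ARRAY_IOPS = 10_000         # assumed sustained IOPS of a spinning-disk array

demand = DESKTOPS * BOOT_IOPS_PER_DESKTOP
print(f"Aggregate boot demand: {demand} IOPS")
print(f"Array oversubscribed {demand / ARRAY_IOPS:.1f}x")
```

Even with these modest assumptions, the array is oversubscribed several times over during the storm, which is exactly when every user is waiting.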
Much of the storage industry gets stuck on the boot storm issue and focuses solely on solving it. As we discuss in our recent article VDI Storage Performance Is More Than Just Boot Storms, there is an equally important second problem: the amount of write I/O that storage systems supporting VDI need to deal with. There is the standard write I/O that a user's desktop creates, but now multiplied by 1,000 in the VDI. And there is also the write impact caused by the heavy use of thin-provisioned masters and clones. These are the cost-saving techniques that we described in our last column--and write I/O performance is their downside.
With thin provisioning and master/clone images, each time a new piece of data has to be written to a virtual desktop, additional capacity has to be allocated and prepared for the desktop's file system before the data can finally be written. The combination of all of these steps, again multiplied by thousands of desktops, can lead to latency that will impact user performance and inhibit user acceptance of the VDI project. Hypervisor file systems are particularly inefficient at managing these dynamic write-allocation issues.
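The allocate-on-first-write behavior behind that latency can be sketched as follows. This is a toy model, not any vendor's implementation: the block map, the clone/master split, and the allocation counter are all simplified assumptions for illustration:

```python
# Minimal sketch of allocate-on-first-write for a thin-provisioned clone.
# Hypothetical structures for illustration only -- real hypervisor file
# systems do considerably more bookkeeping per write.

class ThinClone:
    def __init__(self, master):
        self.master = master   # shared, read-only golden-image blocks
        self.own = {}          # blocks this clone has written (allocated on demand)
        self.allocations = 0   # extra allocation steps incurred so far

    def write(self, block_no, data):
        if block_no not in self.own:
            # Extra work before the user's data can land: grab new
            # capacity and prepare it for the desktop's file system.
            self.allocations += 1
        self.own[block_no] = data

    def read(self, block_no):
        # Blocks never written by this clone fall through to the master.
        return self.own.get(block_no, self.master.get(block_no))

master = {0: b"os", 1: b"apps"}
desktop = ThinClone(master)
desktop.write(7, b"user file")     # first write to block 7: allocation
desktop.write(7, b"user file v2")  # overwrite: no new allocation
```

Every first write pays the allocation step; multiply that by thousands of desktops writing concurrently and the added latency becomes visible to users.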
The almost universal solution is to leverage solid state devices (SSDs) to alleviate both issues. The problem is: which SSD implementation method should you use? There are tiering/caching approaches that automatically move the blocks of data needed for virtual desktop boot to an SSD tier, but some of these solutions don't help write I/O performance. There are flash-only array solutions that address both read and write traffic, but their cost premium has to be dealt with.
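The read-tiering idea can be illustrated with a toy promotion policy: blocks read often enough get copied to a small fast tier. The thresholds, tier sizes, and tier names here are illustrative assumptions, and note that, as the article points out, this sketch helps reads only; writes would still land on the slow tier:

```python
# Toy read-tiering sketch: frequently read blocks are promoted to a small
# "SSD" tier. Capacity and promotion threshold are made-up values.

from collections import Counter

class TieredReader:
    def __init__(self, ssd_capacity=2, promote_after=3):
        self.ssd = set()          # block numbers resident on the fast tier
        self.hits = Counter()     # read count per block
        self.ssd_capacity = ssd_capacity
        self.promote_after = promote_after

    def read(self, block_no):
        self.hits[block_no] += 1
        if (block_no not in self.ssd
                and self.hits[block_no] >= self.promote_after
                and len(self.ssd) < self.ssd_capacity):
            self.ssd.add(block_no)   # hot block: copy up to the SSD tier
        return "ssd" if block_no in self.ssd else "disk"

tier = TieredReader()
for _ in range(3):
    tier.read(42)   # third read crosses the threshold and promotes block 42
```

A policy like this serves boot-storm reads well once the boot blocks are hot, which is why so many products stop here; it does nothing for the write-allocation problem described above.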
Instead of throwing hardware at the problem, it may be better to address the root cause: the hypervisor's file system. As we discuss in our article How To Afford SSD for VDI, fixing the file system first may reduce the amount of SSD required to meet VDI performance demands.
The performance challenges that VDI creates for the storage infrastructure can be overcome. The key is to overcome them as cost effectively as possible. Solutions that deliver the right mix of storage efficiency techniques and the right amount of high-performance SSD can strike the performance/cost balance needed to make the VDI project successful. In our final column in this series, we will detail how storage systems should be designed to help you strike that balance.