How Server-Side Storage Memory Impacts VDI Costs
In my last column I posited that advances in storage technology -- mostly innovative use of memory-based storage -- are making virtual desktop infrastructure (VDI) projects more likely to generate a financial return on investment, not just an operational one.
Moving beyond operational VDI project justification is critical for the large-scale deployment of VDI projects. It is simply easier to justify to non-IT decision makers something that will save the organization dollars than it is to rationalize something that will save IT department time or increase security.
I see three key areas where flash and DRAM (as storage) are being used to significantly increase virtual desktop density (which saves money and improves user acceptance by increasing performance): server-side storage memory, network caching and shared SSD appliances/arrays. In this column I'll discuss server-side storage memory, and I'll cover the other methods later.
I'm intentionally avoiding the term "server-side flash." Much of the innovation we are seeing involves the use of DRAM as the first tier of caching for virtual desktop images. VDI typically has a heavily mixed read/write workload, and because DRAM excels at writes, it is a perfect complement to VDI.
[ For more on VDI and storage solutions, read Is Storage Saving Virtual Desktop Infrastructure? ]
Thanks to the capacity-saving capabilities of the hypervisors, thousands of persistent desktop images can be stored in a very small storage space, which overcomes DRAM's cost challenge. But these capacity-saving techniques typically introduce a high level of latency because they must dynamically allocate writes. DRAM's aforementioned write performance overcomes that penalty. Further, some products perform compression and/or deduplication in the RAM cache space itself, making RAM utilization even more efficient.
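To make the capacity-saving idea concrete, here is a minimal sketch of a content-addressed (deduplicating) RAM cache. All names are hypothetical; real products do this at the block layer with far more sophistication, but the principle is the same: identical blocks shared across many desktop images consume RAM only once.

```python
import hashlib

class DedupRamCache:
    """Toy content-addressed RAM cache (illustrative only): identical
    4 KB blocks from many desktop images are stored once and shared."""

    def __init__(self):
        self.blocks = {}   # content hash -> block data (stored once)
        self.index = {}    # (image_id, offset) -> content hash

    def write(self, image_id, offset, block):
        digest = hashlib.sha256(block).hexdigest()
        # A block common to hundreds of desktops costs RAM only once.
        self.blocks.setdefault(digest, block)
        self.index[(image_id, offset)] = digest

    def read(self, image_id, offset):
        return self.blocks[self.index[(image_id, offset)]]

    def unique_bytes(self):
        # RAM actually consumed, regardless of how many images reference it.
        return sum(len(b) for b in self.blocks.values())
```

Because most persistent desktops share the same OS and application blocks, the unique data per desktop is small, which is what makes DRAM's cost per gigabyte tolerable here.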
The challenge with DRAM is its volatility. To maintain performance, these products must cache both reads and writes, which risks data loss until the write is flushed to permanent storage. This may be a generally acceptable risk since this is desktop data, but some users will likely push back.
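The volatility risk comes from write-back behavior: a write is acknowledged as soon as it lands in DRAM, before it is flushed to permanent storage. This sketch (hypothetical names, a dict standing in for disk) shows the window of data at risk between acknowledgment and flush.

```python
class WriteBackCache:
    """Toy write-back DRAM cache: writes are acknowledged immediately
    and flushed to backing storage later. Dirty blocks are the window
    of potential data loss if the server fails before a flush."""

    def __init__(self, backing_store):
        self.backing = backing_store  # dict standing in for permanent storage
        self.cache = {}
        self.dirty = set()

    def write(self, key, value):
        self.cache[key] = value
        self.dirty.add(key)           # acknowledged, but not yet durable

    def flush(self):
        for key in list(self.dirty):
            self.backing[key] = self.cache[key]
        self.dirty.clear()

    def at_risk(self):
        # Data that would be lost if power failed right now.
        return set(self.dirty)
```

A write-through design would close this window by committing to backing storage on every write, but it would also forfeit most of DRAM's write-performance advantage, which is why these products accept the trade-off.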
One solution is a non-volatile DRAM solution, as discussed in this article. A more common approach for users who don't want to take that risk is server-side flash, in either PCIe or SSD form. As we'll discuss in the upcoming webinar Is PCIe Dead?, while PCIe is considered the performance leader, drive form factor SSDs are gaining ground and certainly have a cost advantage.
Challenges to Server-Side Storage Memory
There are several challenges to server-side storage memory. First, it inherits all the challenges of any directly attached storage device. Data protection like RAID is not typically built in as it is on a shared storage system, and the capacity of the SSD or DRAM is isolated to the server in which it is installed. It might be too big or too small, and it can't be easily allocated to other servers. As mentioned above, however, the actual capacity needs per server should be relatively small, so this may not be a significant issue.
The greater challenge is that VDI mobility is hampered. More software caching solutions are now integrated with the hypervisor, so they know to evict a virtual machine's cache entries before it migrates. But this means the virtual desktop may see decreased performance until its unique data is re-cached on the new server. How big a problem this is depends largely on how often you migrate virtual desktops between hosts.
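The evict-and-rewarm cycle can be sketched as follows. This is a simplified illustration with hypothetical names: each host holds a per-VM read cache in front of shared storage, entries for a VM are dropped from the source host before migration, and the destination refills its cache on demand (the cold-cache period the column describes).

```python
class HostCache:
    """Toy per-host read cache in front of shared storage. Before a VM
    migrates away, its entries are evicted; the destination host then
    re-warms its own cache on demand, miss by miss."""

    def __init__(self, shared_storage):
        self.shared = shared_storage   # dict standing in for shared storage
        self.cache = {}                # (vm, block) -> data

    def read(self, vm, block):
        key = (vm, block)
        if key not in self.cache:      # cache miss: slow shared-storage read
            self.cache[key] = self.shared[key]
        return self.cache[key]

    def evict_vm(self, vm):
        # Hypervisor-integrated caches drop a VM's entries before migration
        # so no stale copies linger on the source host.
        for key in [k for k in self.cache if k[0] == vm]:
            del self.cache[key]
```

Until the destination host's cache is re-warmed, every read for the migrated desktop pays the shared-storage penalty, which is why frequent migrations erode the benefit of server-side caching.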
In my next column I'll discuss network-based caches and shared flash arrays, both of which overcome the challenges discussed here, but with the added cost of an appliance and/or storage system. They also, of course, add the potential latency of the network. I'll also provide some guidance on how to choose between the three options.