Consumer-class cloud services force IT to get aggressive with endpoint control or accept that sensitive data will be in the wind -- or take a new approach, such as reconsidering virtual desktops.

Jasmine McTigue, Principal, McTigue Analytics

July 8, 2014

3 Min Read

Download the entire July issue of InformationWeek's Tech Digest, distributed in an all-digital format (registration required).

Is VDI poised to bust out of niche status? For years, virtual desktops have been largely limited to spot deployments. End-users don't like VDI for a variety of (quite legitimate) reasons, not least connectivity and customization limitations. For IT, it burns through CPU cycles, storage, and bandwidth. It takes effort to set up a logical set of images and roles and stick to them. OS and software licensing can be a nightmare. And so on. But now, cloud- and mobility-driven security concerns plus some key technology and cost-avoidance advances mean it's time to take a fresh look.

Public cloud services such as Dropbox, Google Drive, and Hightail pose a thorny problem: How can IT effectively control regulated and sensitive data when each device with an Internet connection is a possible point of compromise? Improvements in policy-driven firewalls and UTM appliances help, but BYOD initiatives make enforcing controls nearly impossible.

Meanwhile, advances in solid state storage and plummeting thin client prices equal lower deployment costs, especially in greenfield scenarios. Couple VDI with advances in network virtualization and virtual machine administration, particularly on VMware-based VDI deployments, and IT can achieve fine-grained control of network connections and desktop configurations.

Finally, new Linux-based VDI approaches and open-source hypervisors offer an ultra-low-cost option for organizations with the right skill sets and application needs.

Virtualized desktops are also an increasingly attractive alternative to terminal-based application delivery methods, including Microsoft RDS and Citrix XenApp. A key decision point: VDI offers complete desktops with significantly better resource encapsulation and session isolation. While Windows and Linux session-based application-serving technologies can sandbox resources to an extent, their resource isolation is incomplete compared with what today's hypervisors provide. That matters because application servers are vulnerable to performance degradation under high resource demand, whether from users or from underlying OS configuration or maintenance issues. Virtual desktop infrastructures, in contrast, are much less vulnerable to resource strangleholds and configuration flaws. Yes, they require more effort and expertise to maintain and cost more up front -- though not as much as you might expect.

Prices plummet
Many a VDI feasibility study has been derailed by costs associated with the storage architecture required to provision and sustain a pool of virtual desktops. Storage bottlenecks have been the historical bane of VDI, with poorly specified, undersized, I/O-limited infrastructures largely responsible for poor performance and long wait times to redeploy desktop pools with configuration changes (cue the end-user hatefest).

Major advances in solid state storage go a long way toward mitigating both the cost and performance impact of storage on VDI deployments. Enterprise virtualized storage systems and software-defined storage architectures, such as DataCore SANsymphony and VMware Virtual SAN, incorporate SSD and spinning disk storage into high-performance tiered architectures that intelligently place often-accessed data on SSD and provide cache services to spinning disks. Ben Goodman, VMware's lead technical evangelist for end-user experience, goes so far as to assert that Virtual SAN can save 25% to 30% over a typical virtual desktop deployment via reduced storage costs, a number we consider feasible with the right setup.
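The idea behind tiered placement is straightforward to sketch: rank blocks by how often they're touched and pin the hottest ones to the SSD tier. Below is a minimal toy illustration in Python -- an invented example for intuition, not DataCore's or VMware's actual placement logic:

```python
from collections import Counter

def place_tiers(access_log, ssd_capacity_blocks):
    """Assign the most frequently accessed blocks to SSD, the rest to HDD.

    access_log: iterable of block IDs, one entry per read/write.
    ssd_capacity_blocks: how many blocks fit on the SSD tier.
    """
    hits = Counter(access_log)
    # Hottest blocks first; ties broken arbitrarily.
    ranked = [block for block, _ in hits.most_common()]
    ssd = set(ranked[:ssd_capacity_blocks])
    hdd = set(ranked[ssd_capacity_blocks:])
    return ssd, hdd

# A skewed workload: block 7 is hammered, block 2 is warm, the rest are cold.
log = [7, 7, 7, 7, 1, 2, 3, 7, 7, 2]
ssd, hdd = place_tiers(log, ssd_capacity_blocks=2)
# The two hottest blocks (7 and 2) land on SSD; 1 and 3 stay on spinning disk.
```

Real tiering engines rebalance continuously and weigh recency as well as frequency, but the payoff is the same: the small, hot working set gets flash-class latency while bulk capacity stays on cheap spindles.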

Regardless of your SAN or storage architecture, today's mixed solid-state/spinning-disk volumes deliver the same I/O characteristics at about half the cost of pure spinning-disk configurations, and IT is taking notice. More than half of the respondents to our 2014 State of Enterprise Storage Survey say automatic storage tiering is in pilot (22%), limited (18%), or widespread (14%) use in their organizations. That same survey also showed healthy growth in the use of SSDs in disk arrays, from 32% in 2013 to 40% in 2014, as falling prices bring SSDs within reach of most shops.
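That rough price claim is easy to sanity-check with back-of-the-envelope arithmetic. The per-device prices, IOPS figures, and hot-data fraction below are illustrative assumptions, not survey data or vendor quotes:

```python
import math

# Illustrative per-device figures -- assumptions, not vendor pricing.
HDD_IOPS, HDD_COST = 150, 120       # one 10K-rpm spindle
SSD_IOPS, SSD_COST = 20_000, 400    # one enterprise SSD

target_iops = 30_000   # steady-state demand of a hypothetical desktop pool
hot_fraction = 0.8     # share of I/O a tiering engine keeps on SSD (assumed)

# Pure spinning disk: spindle count is driven entirely by IOPS, not capacity.
hdd_only_cost = math.ceil(target_iops / HDD_IOPS) * HDD_COST

# Hybrid: a few SSDs absorb the hot I/O, spindles serve the remainder.
ssd_cost = math.ceil(target_iops * hot_fraction / SSD_IOPS) * SSD_COST
hdd_cost = math.ceil(target_iops * (1 - hot_fraction) / HDD_IOPS) * HDD_COST
hybrid_cost = ssd_cost + hdd_cost

print(f"HDD-only: ${hdd_only_cost:,}  hybrid: ${hybrid_cost:,}")
```

Because a single SSD replaces dozens of spindles' worth of random I/O, the hybrid configuration undercuts the spindle-only build even under conservative assumptions -- which is why "about half the cost" is, if anything, a cautious figure.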


About the Author(s)

Jasmine McTigue

Principal, McTigue Analytics

Jasmine McTigue is principal and lead analyst of McTigue Analytics and an InformationWeek and Network Computing contributor, specializing in emergent technology, automation/orchestration, virtualization of the entire stack, and the conglomerate we call cloud. She also has experience in storage and programmatic integration.

 

Jasmine began writing computer programs in Basic on one of the first IBM PCs; by 14 she was building and selling PCs to family and friends while dreaming of becoming a professional hacker. After a stint as a small-business IT consultant, she moved into the ranks of enterprise IT, demonstrating a penchant for solving "impossible" problems in directory services, messaging, and systems integration. When virtualization changed the IT landscape, she embraced the technology as an obvious evolution of service delivery even before it attained mainstream status and has been on the cutting edge ever since. Her diverse experience includes system consolidation, ERP, integration, infrastructure, next-generation automation, and security and compliance initiatives in healthcare, public safety, municipal government, and the private sector.
