Virtualization has created its share of heroes in IT. A few years back, after a division of a Fortune 1000 financial company virtualized 800 servers over a 12-month period, the IT team was bathed in ROI glory: lower cooling costs, a smaller footprint, and cost savings on capital equipment and licensing. Life was good.
Fast forward to last year. Those 800 virtual servers had ballooned to 1,000, with no clear picture of the overall configuration, no tracking of each virtual machine's life cycle, and no record of how they grew so rapidly. A review of the team's procedures found a complete breakdown in protocol for provisioning computing power, blamed on a lack of management tools and poor organization. The team had to bring in consultants to help get the operation back in order.
Welcome to the back side of the virtualization wave.
It's rare to find anyone in IT who doesn't acknowledge the benefits of server virtualization. The core concept, of leveraging the power of the underlying hardware by logically provisioning the system into multiple virtual sessions, has existed since the age of the mainframe. But it was VMware's success virtualizing x86 servers that catapulted this technology approach into the distributed world. Today, you name it, and we'll try to virtualize it. iPhones running a hypervisor? Seriously. Why not?
Server virtualization is hurtling along--54% of the 391 business technology pros we surveyed expect that at least half of their production servers will be virtualized in 2011. Companies are beginning to consider desktop, storage, and even infrastructure virtualization. Yet most IT operations don't have the management tools and processes to effectively support server virtualization, let alone a big push into new areas.
More than half of the survey respondents who've embraced virtualization rely on the built-in tool provided by their hypervisor vendor, whether it's VMware, Citrix, Microsoft, or someone else. This leaves them with two sets of tools to manage--one for the physical servers and one for the virtual environment. Only 10% of organizations have invested the time and money to implement a server management system that provides a single framework. The rest either use legacy tools that don't adequately handle a virtual environment, or they're doing nothing at all.
They're in for a rude awakening. Most survey respondents plan to expand virtualization to desktops and storage, or are interested in doing so. Server virtualization was relatively easy, since it typically involves just the core data center and server teams. Adding desktops and broader storage options brings in entirely different groups that typically have arm's-length cooperation at best. Flatten out your computing through virtualization and you'll find challenges to your traditional organizational structure at every level, from security and access to staffing and training requests.
Will today's virtualization heroes fall, done in by poor management of the sprawling VM farms that brought them glory? It doesn't have to happen.
Management Is More Than Tools
Virtualization management builds on practices you should already have in place, and it shines a bright light on any infrastructure management shortcomings. If your organization was poorly run before virtualization, virtualizing will likely make things worse. For example, a large Connecticut municipality became increasingly concerned about potential problems with its virtualization plan in the event of host hardware failure. Funny thing is, the municipality's existing contingency plan for hardware failure had neither an inventory of spare servers nor any continuous data protection.
Virtualization subtly changes almost every aspect of your operational framework by removing or reducing the physical link between hardware and software. Sometimes the change is obvious, such as no longer having to ask for space in the data center to add a server. But other changes are subtle enough to slip past even sharp IT operations. Virtualization has redefined failover plans and ushered in a new model for disaster recovery and high availability, but one of the biggest oversights we see is companies forgetting to account for increased licensing costs.
It's particularly tempting to ignore the organizational changes that virtualization may spur, such as consolidating staff who are now comfortably ensconced in silos managing servers, storage, desktops, or networking. Those kinds of decisions are easy to put off when IT sees the relatively low-hanging cost savings, such as server consolidation and power savings.
A framework such as ITIL offers a strong starting point to improve IT management and tackle the bigger-picture issues such as staff consolidation. Short of that, a first step any company can take is to build out a "most likely" model of how the operation will run after the virtualization wave. If you're well into virtualization, look down the road at how virtualization will expand and use that to force a discussion of the model IT operation. Your levels will vary, but in all likelihood you should factor in 60% to 70% server virtualization, 10% to 30% application or desktop virtualization, and a corresponding level of virtualization for supporting storage and infrastructure.
This is no academic exercise, because it will force a vision of one team. Virtualization in its truest form separates the hardware from the software layer, so there should no longer be separate mainframe, Unix, x86, desktop, and cloud teams. The resistance to the "one team" approach, though, is strong.
The market for IT management software has long been dominated by the super vendors--BMC, CA, Hewlett-Packard, IBM, and Microsoft. There's no miracle solution from any of them for virtualization management; while these vendors have systems that work across different platforms, none has 100% integrated monitoring and management. This group of vendors was surprisingly slow to expand support for virtual infrastructures but has quickly caught up. All of them, with the exception of Microsoft, offer multihypervisor support for VMware, Citrix, and Microsoft. The large vendors are now in tune with our survey respondents' main concern: interoperability with their existing tools.
Not surprisingly, this group has a new set of competitors that started within the virtualization sector and are now trying to migrate outward. The most noteworthy players include DynamicOps, Vizioncore, and Fortisphere. They all started out supporting VMware and have expanded to include support for other hypervisors. Some, such as DynamicOps and Vizioncore, have physical device support on their road maps.
The new-school and old-school vendors have one thing in common: They don't come cheap. Forrester Research estimates $140,000 for an enterprise-level system. Smaller organizations don't get off easy either--at a minimum, you're looking at a $30,000 to $50,000 initial investment.
The structured workflows and methodologies inherent in these systems aren't always a fit for IT organizations. That, combined with gaps in the systems' functionality, has prompted many IT organizations to cobble together their own systems. It's telling that newcomer DynamicOps was spun out from Credit Suisse after the team spent thousands of hours building its own system.
The decision for most companies naturally comes back to how much they can rely on the tools of the virtualization vendors themselves. VMware and Citrix have solid tools for managing their environments, and they're perfect for the initial deployment and setup. But eventually the rest of the operations must be integrated, especially as virtualization moves beyond the server to infrastructure and desktops. In addition, mixed platforms are the reality. VMware still dominates, but a whopping 64% of organizations use or plan to use more than one vendor's hypervisor.
We're not predicting a stampede to desktop virtualization. Only 8% of survey respondents have virtual desktops, with only 15% actively testing. Virtualization is more likely to spread to other parts of your infrastructure, including security applications, switches, and storage. Here we found a much higher percentage of respondents either under way or evaluating.
Companies have no doubts about server virtualization. A third plan to have 75% or more of their servers virtualized by the end of next year. More than 40% leverage the migration tools available, though with fairly conservative VM density on hosts--half run five or fewer VMs per host. The top drivers for virtualization are all tried-and-true: server consolidation, power savings, flexibility, and business continuity.
IT teams should also start thinking about the benefits from more accurately charging business units for computing capacity. Most chargeback models for x86 servers are weak at best. A great tip is to start looking at how cloud computing vendors are pricing. Most have service levels (good, better, best) that then charge based on CPU and storage usage. Get on this model now. Even if you're not moving toward cloud use in the near term, you can start building a way of pricing internal computing resources that will be relevant if you start comparing them with public cloud resources.
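The tiered, usage-based pricing described above is straightforward to model. Here's a minimal sketch of such an internal chargeback calculation; the tier names follow the good/better/best pattern, but the rates and the `monthly_charge` function are hypothetical illustrations, not figures from any actual cloud vendor.

```python
# Hypothetical monthly rates per service tier (assumed for illustration):
# a per-vCPU charge plus a per-GB storage charge, mirroring how cloud
# vendors tier service levels and then bill on CPU and storage usage.
TIERS = {
    "good":   {"per_vcpu": 15.00, "per_gb": 0.10},
    "better": {"per_vcpu": 25.00, "per_gb": 0.15},
    "best":   {"per_vcpu": 40.00, "per_gb": 0.25},
}

def monthly_charge(tier: str, vcpus: int, storage_gb: int) -> float:
    """Compute a business unit's monthly charge for one VM."""
    rates = TIERS[tier]
    return vcpus * rates["per_vcpu"] + storage_gb * rates["per_gb"]

# Example: a 4-vCPU VM with 200 GB of storage on the "better" tier
print(monthly_charge("better", 4, 200))  # 4*25 + 200*0.15 = 130.0
```

Even a simple model like this gives business units a bill they can compare line-for-line against a public cloud quote, which is the point of getting on this model early.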
Think Differently About ROI
CFOs never forget. Virtualization's pitch has always been big and fast ROI. Moving up the stack to desktops and adding the required virtualization tools can change the discussion. Did your original ROI model for virtualization include $50,000 to $150,000 in systems management software? Did it include training for your staff? Recruiting new virtualization-centric engineers? Probably not. But these are the realities most organizations will face as their systems evolve.
It was easier to skip the management software; just use the VMware suite and focus the ROI solely on the act of virtualizing. But it's time to go beyond those simple models. As organizations continue their virtualization path, they need to connect their existing environments as well as deal with the potential of multiple hypervisors. The ROI models must cover the change that occurs after servers are virtualized and include management software.
Our survey suggests that very few organizations have planned for this expense. In fact, 34% of respondents have less than $10,000 to spend on management tools; another 52% have less than $50,000. And that includes big and small companies. Twenty-five percent of that group has more than 10 host servers in place.
Most organizations have relied on a sort of de facto oversight for provisioning and managing computing power since, with physical servers and other hardware, getting budget, finding space, and acquiring the gear took time and provided chances to block misuse. Without tight controls in a virtual environment, rogue capacity can be spun up almost anywhere, very quickly, consuming far more than a renegade server stuffed in a cubicle.
The time is now for a virtualization management strategy around the right tools and team structure. At a minimum, organizations must start rethinking their toolsets, making sure IT has the base requirements of monitoring and discovery to prevent the sprawl that can occur if operations aren't tightly controlled. Longer term, it's time to embrace disciplines such as ITIL, lean, or ISO, and rethink the team. Virtualization will effectively flatten your IT department if you're willing to do it.
In our survey, 1% of respondents, just five companies, have virtualized components across servers, desktops, infrastructure, and storage environments. They're a mix of organization sizes. All use multiple tools for different aspects of management, from initial migration to patch management to daily operations. They also have aggressive plans, targeting more than 90% server virtualization by the end of next year.
They also all have one other thing in common: All have budgets for management tools that are appropriate for their organization size. You can't be serious about wringing the benefits out of virtualization without getting serious about the need for sophisticated management and organizational change.
Michael Healey is president of Yeoman Technology Group, an engineering and research firm.
Write to us at email@example.com.