Red Hat Enterprise Virtualization 3.0, just over the horizon, poses a larger long-term threat to VMware than Microsoft does.
If you are one of those people who think VMware is about to see defections from its customer ranks over pricing, consider this: There's a threat out there that could cut into VMware's existing and future customer base. This rival shows a great deal of proficiency at running streamlined, virtualized environments. But it's not Microsoft technology.
It's the 3.0 version of Red Hat Enterprise Virtualization, in beta and due out at the end of the year. Extremely late in getting to the virtualize-the-data-center party, RHEV 3.0 will include a full suite of provisioning, monitoring and management tools for the KVM environment. KVM? Isn't that number four in a three-way hypervisor race? There's VMware's ESX Server in the lead, Microsoft's Hyper-V at number two and Citrix's XenServer at number three. KVM, if it ranks as number four, is far behind the top three and ahead of only one other candidate, Oracle VM--not exactly testimony to KVM's strength.
How could KVM and Red Hat Enterprise Virtualization 3.0 have an impact on VMware?
The companies that have virtualized their data centers are sensitive about quality. Virtualization is an invisible infrastructure service. Any x86 hypervisor might be able to do the job, but with the movement toward virtualized production systems, only the most tested and most reliable hypervisor is going to make the cut.
Even with extensive experience available for four hypervisors (ESX Server, Xen, Hyper-V and KVM), the key ingredient is still trust, not price. You can monitor the hypervisor and measure its effects, but you can't literally watch it run. It works deep in the bowels of the server, below both operating systems and applications, next to the hardware. You have to have faith that it's up to the job.
Given what they're entrusting to the hypervisor, most IT departments would rather explain why they are spending a little extra money than why they need to stop production and apply hypervisor update patches. We are getting past this current stage of virtualization, moving toward more open-ended possibilities with different hypervisors. But until hypervisors are thoroughly known and reliable quantities in the data center, no one will get fired for buying VMware.
So isn't Red Hat out of its league as it talks up Red Hat Enterprise Virtualization 3.0? I got my first exposure to the beta release (announced August 17) from Navin Thadani, senior director of Red Hat's virtualization business. Thadani, much like Edwin Yuen, Microsoft's director of virtualization strategy, held court at the edge of the VMworld show floor recently in Las Vegas. Different in so many ways, Red Hat and Microsoft still had something in common at this show. Both were restricted by VMware to little ten-by-ten booths, despite big ambitions and a willingness to spend.
KVM as part of Red Hat Enterprise Virtualization can scale a virtual machine up to 64 virtual CPUs and 2 TB of virtual memory, or "better than VMware," said Thadani. VMware's largest virtual machine under vSphere 5 supports up to 32 virtual CPUs and 1 TB of virtual memory.
Red Hat Enterprise Virtualization 3.0 can also perform live migration (VMware calls it vMotion). When Microsoft introduced live migration, it was not like VMware's smooth pick-up and transfer of a running virtual machine without the end user noticing. With Hyper-V there was initially a long pause, easily noticeable by the end user, and then a resumption of service. Microsoft has since eliminated the pause.
Thadani said Red Hat will deliver RHEV at the end of the year with the ability to migrate a virtual machine that is running high definition video without the end user detecting the move.
The hypervisor performs this feat in a similar way for all vendors. While the virtual machine keeps running, it first copies over the memory pages and state that won't be needed in the next few seconds. Then, with the clock running down, it transfers the last few dirty pages to the new host in a few milliseconds and lets the new host pick up the processing thread in exactly the right place.
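That iterative pre-copy approach can be sketched as a toy simulation in Python. This reflects no vendor's actual code; the page count, dirty rate, and convergence threshold are invented for illustration:

```python
import random

def live_migrate(source_pages, dirty_rate=0.05, threshold=8, max_rounds=30):
    """Toy simulation of pre-copy live migration.

    Pages are copied to the destination while the VM keeps running;
    pages the guest dirties during a round must be re-sent. Once the
    dirty set is small enough, the VM pauses briefly for a final
    stop-and-copy of the remainder.
    """
    dest = {}
    to_send = set(source_pages)  # first round: every page
    rounds = 0
    while len(to_send) > threshold and rounds < max_rounds:
        for page in to_send:
            dest[page] = source_pages[page]  # copy while the VM runs
        rounds += 1
        # Model the guest dirtying fewer pages each round as the
        # copy process catches up with the write rate.
        n_dirty = int(len(source_pages) * dirty_rate) >> rounds
        to_send = set(random.sample(sorted(source_pages), n_dirty))
    # Brief pause: stop-and-copy the last few dirty pages.
    for page in to_send:
        dest[page] = source_pages[page]
    return dest, rounds

pages = {i: f"data-{i}" for i in range(1024)}
copied, rounds = live_migrate(pages)
assert copied == pages  # destination ends up with a complete image
```

The shorter the final stop-and-copy phase, the less the end user notices--which is why a virtual machine playing high-definition video is the hard test case.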
On a related point, VMware can vMotion a virtual machine anywhere it chooses on a 32-server cluster, the largest unit that can share VMware's storage file system, Thadani said. vMotioning can only take place between servers sharing the same storage file system; in VMware's case, that's a VMFS system.
Red Hat doesn't have a 32-node restriction. A KVM user will be able to live migrate from one host to any other on a much larger cluster, Thadani said. This can work with a variety of storage file systems, provided a single file system is used by the whole cluster that RHEV is managing.
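The shared-storage constraint is easy to model: a management layer should refuse a live migration unless both hosts mount the same storage file system, since migration moves memory, not disk images. A minimal sketch in Python, with hypothetical host and file-system names:

```python
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    storage_fs: str  # identifier of the shared file system this host mounts

def can_live_migrate(src: Host, dst: Host) -> bool:
    # Live migration transfers memory and CPU state only; both hosts
    # must see the VM's disk images on the same shared file system.
    return src.name != dst.name and src.storage_fs == dst.storage_fs

a = Host("host-a", "datastore-1")
b = Host("host-b", "datastore-1")
c = Host("host-c", "nfs-pool-2")
assert can_live_migrate(a, b)       # same shared storage: allowed
assert not can_live_migrate(a, c)   # different storage: refused
```

The difference between the vendors, in Thadani's telling, is simply how many hosts are allowed to share one `storage_fs` in the first place.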
Why Red Hat Still Has A Shot
But it's not stats like these that convince me Red Hat has a shot at coming late but still becoming a live wire at the virtualization party. Rather, I think knowledgeable cloud vendors, who already like Red Hat Enterprise Linux, are going to see a performance advantage in KVM and start using it themselves. Once they've gained sufficient experience, they'll offer it as a service option.
How can the youngest hypervisor have competitive performance? KVM, or Kernel-based Virtual Machine, is unlike the other hypervisors part of the Linux kernel itself, and it uses the Linux scheduler and memory manager. That means it can do its work as an extension of kernel operations. This eliminates the need to go outside the kernel or pass messages between the hypervisor and operating system, as ESX Server, XenServer and Hyper-V do.
This was a theoretical advantage to me, until I checked out the SPECvirt benchmarks. Hardware vendors test their best models with SPECvirt, a standard test drawn up by the non-profit Standard Performance Evaluation Corp. Its workload is designed to reflect the combined operations of a real world task. It includes a web server, an application server, a mail server, a database server, an infrastructure server, and an idle virtual machine.
VMware's ESX Server and its embedded version, ESXi, own seven of the top 17 SPECvirt results, at the moment holding first, third, fourth, fifth, seventh, sixteenth and seventeenth place. KVM, however, owns the other ten spots. Citrix XenServer and Microsoft Hyper-V are somewhere further down the list.
If SPECvirt is a fair benchmark, then KVM's operation inside the kernel is an inherent advantage of Red Hat Enterprise Virtualization. It's also open source code, meaning cloud suppliers who want every performance edge may start adopting it as RHEV 3.0 becomes available. If cloud suppliers establish its performance and reliability, RHEV could spread very quickly among avid virtualization implementers in the enterprise. There are more of them every day.
If I were VMware, I wouldn't worry most about Microsoft, with its tendency to subsume low-end, small business markets by including everything in the Windows operating system. That's so 1990s. Rather, I'd worry that Red Hat Enterprise Linux and KVM already have a foot in the cloud. It's possible RHEV and KVM will be taken up by some major suppliers. If they are, then forget about Microsoft as the challenger and start thinking about Red Hat.
The avid implementers of enterprise virtualization are also prospective cloud users. It's the hope of coordinating those internal virtual machines with similar, external VMs--the hybrid cloud model--that's behind some of their fervor. If KVM is a top performer, surrounded by self-provisioning, monitoring, troubleshooting and sophisticated load balancing, as Red Hat says it is, then it's a challenger.
Knowing that it comes out of the rigorous Linux kernel development process will also be reassuring to IT managers. How many problems have they had keeping Linux up and running? The quality issue again comes into play, if Linux quality rubs off on KVM.
For Thadani, that was an assessment based on future conditions and he wasn't having it. "We're the alternative to VMware right now," he claimed.
I don't think so. Not yet. There's all that existing commitment to ESX Server, Hyper-V and XenServer. But maybe RHEV will get a start in the cloud--and then its day in the sun may not be far away.
Charles Babcock is an editor-at-large for InformationWeek.