Microsoft's Hyper-V virtualization software will have an advanced "synthetic device" approach to virtual machine I/O, addressing a key virtualization bottleneck, when it gets added to Windows Server 2008 later this year.
The production version of Hyper-V may be ready before Microsoft's oft-stated August delivery date. Knowledgeable observers, such as Paul Ghostine, CEO of Provision Networks, a supplier of desktop virtualization under Hyper-V, say Microsoft will try to add Hyper-V to Windows Server 2008 by June or July.
Microsoft unveiled Hyper-V as beta software Dec. 13, earlier than expected, and would like to maintain that track record, Ghostine said in an interview.
The beta version of Hyper-V is already running in Windows Server 2008, and customers may download the more advanced Hyper-V Release Candidate 1 if they want to test-drive it before its final addition to the operating system.
As Hyper-V comes to market, it will have a number of features to distinguish it from the competition, said Jeff Woolsey, senior program manager in the Windows Server Division, and Mike Schutz, director of product management.
One of the most obvious will be its integration with Virtual Machine Manager, part of Microsoft System Center. In the familiar System Center console, a Windows administrator can view physical and virtual assets side by side and manage them with the same controls, wizards, and System Center operations.
Hyper-V has been given Windows Management Instrumentation support, a means of extracting measurements from running software, so that System Center can monitor the functioning of virtual machines and see the status of workloads, Woolsey said in an interview. The virtual machines can also be managed remotely.
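System Center builds on the same WMI classes that an administrator can also query by hand. As a sketch, assuming the `root\virtualization` namespace and `Msvm_ComputerSystem` class exposed by the Hyper-V WMI provider in Windows Server 2008, a PowerShell one-liner on the host could list the virtual machines and their power state:

```shell
# List Hyper-V virtual machines via the WMI virtualization namespace.
# Run on the Hyper-V host; ElementName is the VM's display name and
# EnabledState encodes whether it is running, stopped, or paused.
Get-WmiObject -Namespace "root\virtualization" -Class Msvm_ComputerSystem |
    Select-Object ElementName, EnabledState
```

The same query can be pointed at a remote host with the `-ComputerName` parameter, which is the basis of the remote management Woolsey describes.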
Hyper-V is also one of the roles that Windows Server can assume out of the box. Instead of designating a particular Windows Server copy as a file server, a print server, or a Web server, it will be possible to declare it a virtual machine host and have Hyper-V running as soon as the operating system boots. "Just click on a check box and you're done," Woolsey said of the new role-defining feature.
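The check box Woolsey mentions lives in Server Manager; on the command line, the equivalent on a Windows Server 2008 installation would be the optional-component setup tool (the `Microsoft-Hyper-V` package name is the one documented for Server Core installs):

```shell
:: Enable the Hyper-V role from a command prompt on Windows Server 2008.
:: "start /w" waits for the component installation to finish; a reboot
:: is required before the hypervisor loads beneath the parent partition.
start /w ocsetup Microsoft-Hyper-V
```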
Perhaps one of its most distinctive features is the built-in I/O optimization. As virtual machines are stacked up on powerful, multicore hardware, server I/O may get bogged down if several virtual machines experience high I/O demands at the same time. Microsoft has built what it calls "a synthetic device" into Hyper-V, a departure from the device emulation that its predecessor Virtual Server and Virtual PC products relied on.
With emulation, the hypervisor has to call an emulation program, software that mimics, say, the operations of a network adapter card. Because the emulation can't run in the same partition as the hypervisor, overhead built up as the hypervisor told the emulation program what the virtual machine was seeking and the emulation program executed the handed-off instruction, repeating the process many times. The back and forth "was extremely expensive in performance overhead," said Schutz.
With Hyper-V, a synthetic device that knows how to make use of the native Windows drivers is substituted for the passing of messages between partitions, generating a quicker path to the I/O channel. Under Hyper-V, a virtual machine's access to the hard drive, mouse, video card, network adapter card, or other I/O devices will be managed by a synthetic device that has "driver enlightenment," or knowledge of Windows drivers, not an emulation program, he said.
Indeed, the phrase "driver enlightenment" springs from the Xen open source hypervisor's similar approach. XenSource, now owned by Citrix, a close Microsoft partner, helped Microsoft engineer this capability into Hyper-V for Linux.
Novell, another close Microsoft partner, supplied its knowledge of Linux running in a virtual machine to ensure that Linux will behave predictably in Hyper-V virtual machines, Woolsey said.
When Windows Server 2008 Enterprise or Data Center edition is running on a cluster, Hyper-V will be integrated with the failover capabilities of the cluster. Consolidation raises the stakes: any failed hardware component or software fault disrupts all the virtual machines on that piece of hardware, multiplying the risk present when a server runs only one application. With failover in place, if one node of the cluster goes down, another picks up its workload, including all the virtual machines on the failed node, without loss of data.
"Virtualization is a fantastic technology, but it creates a single point of failure for multiple [virtual] servers," Woolsey said. "If I yank the power cable [on a server in the cluster], all its virtual machines will failover to other virtual machines," he said.
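Failover of the kind Woolsey describes can also be triggered deliberately, for example to drain a node before maintenance. A sketch using the cluster.exe tool that ships with Windows Server failover clustering (the group name "VM Group" and node name Node2 are hypothetical placeholders):

```shell
:: Move the cluster resource group that contains a virtual machine
:: to another node; the VM and its storage fail over together.
cluster group "VM Group" /moveto:Node2
```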
Virtual machines generated by Hyper-V may use as their operating system not only 32- and 64-bit Windows but also Red Hat Enterprise Linux, Red Hat Fedora (community version), or Novell SUSE Linux Enterprise.
Like other hypervisors, Hyper-V will be equipped to handle symmetric multiprocessing, meaning it's ready to run on four-way servers.