Virtualizing I/O on servers running multiple virtual machines is the primary way around the I/O chokepoint that appears as a multi-core server becomes capable of hosting a dozen -- or dozens -- of hardworking virtual machines. The virtual machines generate data that must be sent to storage and communications packets that must be moved onto the Ethernet network.
Virtualized I/O allows undifferentiated or "converged" traffic to pour off the virtualized host into an intelligent network device that separates it into its constituent parts. Under virtualized I/O, a standard Ethernet network interface card can switch back and forth between serving as a high-speed networking device and a high-speed Fibre Channel storage device, or be divided between the two.
Both Cisco Systems' Unified Computing and HP's BladeSystem Matrix are blade server chassis with high-speed, virtualized I/O capabilities based on Fibre Channel over Ethernet. Fibre Channel over Ethernet running at 10 Gb per second is a more expensive choice than standard Ethernet cards. A Fibre Channel over Ethernet network interface card is typically priced at $1,500, said Jon Toor, VP of marketing for Xsigo.
Since the I/O Director operates at InfiniBand speeds, it can use an InfiniBand card in the server at a cost of $400 to $600. Or it can work with the 10 Gb per second Ethernet connection built into the motherboard at no additional cost, Toor pointed out.
I/O Director itself ranges in price from $25,000 to $45,000 and can work with multiple virtualized host servers. Earlier versions required custom cards in the virtualized server.
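The pricing above implies a break-even point that depends on how many hosts share one I/O Director. A back-of-the-envelope sketch, using only the figures quoted in this article (the per-unit prices are midpoints of the quoted ranges, and the host counts are illustrative assumptions, not Xsigo deployment data):

```python
# Per-server I/O cost comparison using the article's figures.
# Midpoints of the quoted price ranges; host counts are illustrative.

FCOE_NIC = 1500       # Fibre Channel over Ethernet NIC, per server
IB_CARD = 500         # InfiniBand card, midpoint of the $400-$600 range
IO_DIRECTOR = 35000   # I/O Director, midpoint of the $25,000-$45,000 range

def xsigo_cost_per_server(hosts):
    """Per-server cost when one I/O Director is shared across `hosts` servers."""
    return IB_CARD + IO_DIRECTOR / hosts

for hosts in (8, 16, 32, 64):
    print(f"{hosts:2d} hosts: Xsigo ${xsigo_cost_per_server(hosts):,.0f}/server "
          f"vs FCoE NIC ${FCOE_NIC:,}/server")
```

On these assumptions the shared I/O Director only undercuts per-server FCoE cards once a few dozen hosts share it; with the motherboard's built-in Ethernet instead of an InfiniBand card, the crossover would come sooner.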
A network administrator or virtual infrastructure manager, working on the I/O Director's management console, can set peak rates of service for each virtual machine on the host. A virtual machine that was using 1 Gb per second of Ethernet bandwidth can be reassigned 10 Gb per second to meet a surge in customer demand, Toor noted. "You can change the I/O attributes on the fly, without stopping the I/O Director," he said.
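The on-the-fly reassignment Toor describes can be pictured as a small management-console model. This is a hypothetical illustration only; the class and method names are invented here and are not Xsigo's actual API:

```python
# Toy model of per-VM peak-rate management: each virtual machine gets a
# bandwidth cap that an administrator can raise or lower live, without
# restarting the director. All names are hypothetical illustrations.

class VirtualNIC:
    def __init__(self, vm_name, peak_gbps):
        self.vm_name = vm_name
        self.peak_gbps = peak_gbps

class IODirectorConsole:
    """Stand-in for the I/O Director management console."""
    def __init__(self):
        self.vnics = {}

    def assign(self, vm_name, peak_gbps):
        self.vnics[vm_name] = VirtualNIC(vm_name, peak_gbps)

    def set_peak_rate(self, vm_name, peak_gbps):
        # Takes effect on the fly; other VMs' settings are untouched.
        self.vnics[vm_name].peak_gbps = peak_gbps

console = IODirectorConsole()
console.assign("web-frontend", peak_gbps=1)
# Surge in customer demand: raise the cap from 1 Gb/s to 10 Gb/s.
console.set_peak_rate("web-frontend", peak_gbps=10)
print(console.vnics["web-frontend"].peak_gbps)  # 10
```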
Under an Xsigo setup, the I/O Director is attached to the server through a 40 Gb per second InfiniBand cable, or more likely, two cables for redundancy. Those two cables can serve the same function as 64 cables attached to 64 network interface cards or host bus adapters, the server devices used to move network traffic, Toor said.
The latest x86 server motherboards from Intel and AMD include a built-in 10 Gb Ethernet interface. If the user needs more capacity, it can be augmented with network interface cards that fill slots on the server.
Xsigo's I/O Director creates virtual network interface cards, each connected to its own virtual machine on the host server. The virtual device isolates each virtual machine's traffic from that of the others well enough to meet PCI (payment card industry) standards, Toor said. "It's the equivalent of running a cable from a NIC on a server to a network switch or SAN device," he said.
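The isolation claim can be sketched as a toy model: one physical link carrying per-VM channels whose traffic never mixes. All names here are illustrative assumptions, not Xsigo's implementation:

```python
# Toy model of per-VM traffic isolation: frames sent on one virtual
# NIC's channel are never delivered on another VM's channel, much like
# a dedicated cable from each NIC to the switch. Names are illustrative.
from collections import defaultdict

class FabricLink:
    """One physical InfiniBand link carrying many isolated vNIC channels."""
    def __init__(self):
        self.channels = defaultdict(list)  # vnic_id -> list of frames

    def send(self, vnic_id, frame):
        self.channels[vnic_id].append(frame)

    def receive(self, vnic_id):
        return self.channels[vnic_id]

link = FabricLink()
link.send("vnic-vm1", "payment-data")
link.send("vnic-vm2", "web-traffic")
# vm2's channel never sees vm1's payment traffic.
assert "payment-data" not in link.receive("vnic-vm2")
```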
Customers thus have the option of getting converged I/O connections at commodity pricing. Xsigo customers save up to $5,600 per server compared with the cost of converged and virtualized I/O from competitors, Toor claimed.
Toor said Xsigo customers average 15 virtual machines per host server, but some customers are running as many as 50. He expects the number to double in the next two years. "If you're serious about virtualization, we optimize virtualization," he said.