Another group of vendors, led by NextIO along with newcomers Aprius and VirtenSys, promises products that extend a server's PCIe slots through a switch to an external I/O chassis containing additional PCIe slots. Conceptually these systems resemble the PCI SIG's Multi-Root I/O Virtualization (MR-IOV) standard (see sidebar), but without requiring I/O cards to support MR-IOV; they use a low-cost stateless PCIe extender card -- around $200, versus $1,500 for a converged network adapter -- so servers can use the I/O devices' drivers unmodified.
PCIe-based solutions can share any Single Root I/O Virtualization (SR-IOV)-compatible card, allocating its virtual interfaces to hosts as virtual devices. Cards that don't support IOV can be assigned to one host at a time, letting video, data acquisition, and other specialized cards be shared across multiple hosts, albeit sequentially.
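To make the mechanism concrete, here's a minimal sketch of what SR-IOV looks like from a stock Linux host: writing a virtual function count to the adapter's sysfs node makes each virtual function appear as its own PCIe device that can then be handed to a host or VM. The PCI address below is a placeholder, and this shows only the generic host-side view, not any particular vendor's I/O-chassis management tools.

# Minimal sketch, assuming a Linux host with an SR-IOV-capable adapter.
# The PCI address is an example; sriov_totalvfs, sriov_numvfs, and the
# virtfn* links are standard kernel sysfs attributes for SR-IOV devices.
from pathlib import Path

PF = Path("/sys/bus/pci/devices/0000:03:00.0")   # physical function (example address)

def enable_vfs(count: int) -> list[Path]:
    # Check how many virtual functions the card supports.
    total = int((PF / "sriov_totalvfs").read_text())
    if count > total:
        raise ValueError(f"device supports at most {total} VFs")
    # The VF count must be reset to zero before it can be changed.
    (PF / "sriov_numvfs").write_text("0")
    (PF / "sriov_numvfs").write_text(str(count))
    # Each virtual function now shows up as its own PCIe device.
    return [vf.resolve() for vf in sorted(PF.glob("virtfn*"))]

if __name__ == "__main__":
    for vf in enable_vfs(4):
        print("virtual function:", vf.name)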
All three vendors include SAS/SATA drive bays in their I/O expansion chassis. This lets them create a shared direct-attached storage pool, allocating logical drives from an SR-IOV RAID controller in the chassis to hosts; diskless servers can use these drives for boot or other local storage at a lower cost than booting from the SAN.
Virtual I/O may have its biggest impact in the blade server market, where consolidating I/O channels helps vendors increase server density. Alliances are starting to form, with IBM integrating NextIO technology in its BladeCenter HT and Dell reselling Xsigo's I/O Director.
Whether as a stopgap until FCoE takes over the world, a lower-cost consolidation point between servers and end-of-row FCoE switches, or a long-term solution, virtual I/O could be worth a look for the more adventurous. On the other hand, those who adopt new technologies without major-player support always run the risk that their virtual I/O system will look like Token Ring in a year or three.
Howard Marks is chief scientist at Networks Are Our Lives, a consulting firm.