Advanced Micro Devices Inc. and Intel are developing 64-bit processors that will make use of Xen hypervisor, while Linux providers Novell and Red Hat are working with XenSource to provide support for users consolidating server environments. Hewlett-Packard and IBM are contributing code to the Xen project and working with XenSource to develop new uses for the technology.
XenSource in January launched with a $6 million round of funding led by Kleiner Perkins Caufield & Byers and Sevin Rosen Funds. To succeed, it'll have to take on well-established competitors, since Microsoft and VMware, a subsidiary of storage maker EMC, offer proprietary software that can be used to create virtual servers on Intel or AMD x86-based servers. Yet in a market where business-software buyers increasingly welcome an open-source alternative, XenSource could find an opening. "[Xen] is still very immature, but it offers a lot of promise that will be realized first by Linux users and then in other environments," predicts IDC analyst Dan Kusnetzky.
Xen, which is licensed under the GNU General Public License, works on servers running any open-source operating system, including Linux and NetBSD, with ports to FreeBSD and Plan 9 under development. When Intel and AMD deliver new processors within the next year that support virtual servers on the chip level, Xen should be able to run on proprietary operating systems as well.
A more subtle difference between the Xen hypervisor and competing proprietary technologies is that Xen keeps a cache that records the state of each virtual server and its operating system. XenSource does this through "para-virtualization," which splits each operating-system driver in half: one half runs inside the virtual server, while the other half lives in a separate domain where that cached state is stored. "This saves users time and resources when switching between virtual servers," says Simon Crosby, VP of strategy and corporate development for XenSource.
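The split-driver arrangement Crosby describes can be pictured with a toy model. The sketch below is purely illustrative, not Xen source code, and all names in it are invented for this example: a lightweight "frontend" inside each guest forwards requests to a "backend" in a separate, always-resident domain, so per-guest state survives context switches instead of being rebuilt each time.

```python
# Toy model of the split-driver idea (hypothetical names, not Xen code).
# The backend domain caches each guest's device state, so switching the
# CPU between guests does not require rebuilding that state.

class BackendDomain:
    """Separate domain holding cached per-guest device state."""
    def __init__(self):
        self.state_cache = {}  # guest id -> cached device state

    def handle(self, guest_id, request):
        # State persists across context switches; no costly rebuild.
        state = self.state_cache.setdefault(guest_id, {"requests": 0})
        state["requests"] += 1
        return f"guest {guest_id}: {state['requests']} requests served"

class FrontendDriver:
    """The half of the split driver that runs inside each guest."""
    def __init__(self, guest_id, backend):
        self.guest_id, self.backend = guest_id, backend

    def send(self, request):
        return self.backend.handle(self.guest_id, request)

backend = BackendDomain()
guests = [FrontendDriver(i, backend) for i in range(2)]
guests[0].send("tx")  # backend remembers guest 0's state
guests[1].send("tx")  # guest 1 gets its own cached state
```

The design point is that the stateful half of the driver never leaves its own domain, which is why switching between guests is cheap.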
Yet Xen's potential for widespread adoption is clearly tied to chip-level advancements Intel and AMD are promising to deliver later this year.
Intel's Vanderpool processor technology and AMD's Pacifica processor will offer an interface to proprietary operating systems in a way that the Xen hypervisor can't on its own. "AMD and Intel will make the hypervisor's job easier, particularly for operating systems for which source code is not available," Crosby says.
XenSource says that the next version, Xen 3.0, will come out by September. By the end of this year, 64-bit AMD and Intel technology also is due out, including Intel's Vanderpool. Version 3.0 will also let users create virtual machines that run applications requiring multiple processors.
Xen enters a market poised for heavy growth over the next few years, says IDC analyst Dan Kusnetzky. The market for virtual environment software, including management and security tools, reached $19.3 million worldwide in 2004 and will grow 20% annually through 2008.
For Xen to appeal to business users, the technology needs support from major Linux backers such as Red Hat and Novell, in addition to XenSource. While Red Hat and Novell have some developers working on the project, they haven't integrated Xen support into their existing services, a move that XenSource predicts is likely given those companies' strong interest in the growth of Linux. "People in large enterprises like to buy from a single vendor, so we expect Red Hat and Novell to offer support [for Xen] with their products," Crosby says. XenSource will also support Xen directly and, by the end of the year, plans to deliver utility software with a graphical interface for managing the technology. "This is an absolute requirement so that more people can use it," Crosby says.
XenSource will in April host a Xen developer summit to determine how the technology should progress. "It's an essential piece in the open-source process," Crosby says. One of the issues likely to be discussed is security. "We're still figuring out how to make the hypervisor more secure," he says. IBM has a project under way called Secure Hypervisor to create a run-time environment that securely manages context switching between virtual servers. The goal is to prevent unauthorized information transfers between virtual servers and to ensure that all virtual servers are governed by the same security policies, Crosby says.
Most Xen users are still in the experimental stage. "The lion's share of people pushing it out are in the hosting world and those who are running it in large data center deployments in banks and Fortune 500 companies," Crosby says. "They're still learning about it."
One Xen pioneer is stretching where virtualization can be applied, exploring whether virtualization techniques can be used to build a network router that partitions its internal resources, such as CPU cycles, memory, and network bandwidth. "As such, I am exploring the possibility of running routers on multiple virtual machines (or domains in Xen terminology), with one virtual machine router (routelet in my project) for each network flow requiring quality of service guarantees," Ross McIlroy, a research student at Scotland's University of Glasgow, says in an E-mail interview.
McIlroy's goal is to partition the flows, preventing one overloaded flow from impacting the service provided to another quality-of-service flow. The success of this experiment could have a positive impact on applications that transport isochronous data across a network, for example teleconferencing or voice-over-IP applications, which require a network providing quality of service.
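The partitioning McIlroy describes can be sketched in a few lines. This is an illustrative model only, with names invented for the example (it is not McIlroy's code): each quality-of-service flow gets its own routelet with a fixed slice of forwarding capacity, so a flooded flow drops its own excess traffic rather than starving its neighbors.

```python
# Illustrative sketch (hypothetical names, not the Glasgow project's code):
# one routelet per QoS flow, each with a hard cap on forwarding capacity,
# so overload on one flow cannot degrade service on another.

class Routelet:
    """One virtual-machine router serving a single QoS flow."""
    def __init__(self, flow_id, capacity):
        self.flow_id = flow_id
        self.capacity = capacity   # packets this routelet may forward
        self.forwarded = 0
        self.dropped = 0

    def forward(self, n_packets):
        # Enforce the per-flow resource cap inside the routelet itself.
        accepted = min(n_packets, self.capacity)
        self.forwarded += accepted
        self.dropped += n_packets - accepted

routelets = {f: Routelet(f, capacity=100) for f in ("voip", "video")}
routelets["voip"].forward(500)   # overloaded flow drops its own excess
routelets["video"].forward(80)   # other flow keeps its full guarantee
```

Because each flow's resources are isolated in its own routelet, the overloaded "voip" flow sheds 400 packets while the "video" flow forwards everything it sends, which is the isolation property isochronous traffic like VoIP depends on.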
McIlroy knows that his project isn't exactly what Xen's creators had in mind. "Xen provides me with an ideal basis for the creation of a prototype router which should test [my project's basic] theories," he says. McIlroy has found Xen's para-virtualization technique useful in reducing the overhead that's normally generated when creating a virtual environment.