Server I/O was a big consideration for Tom Vaughan, director of IT infrastructure at Roswell Park Cancer Research Institute, the nation's oldest cancer research institute, which serves 26,000 patients and supports $81 million in research grants on a seven-block campus in Buffalo, N.Y. Its applications include Lawson Software financials, a Cerner lab management system, 5,000 Microsoft Exchange mailboxes, and an extensive electronic medical records system that gathers 10 TB of medical images and research data a year.
Avoiding outages and supplying fast response times were challenges for Vaughan and his staff of six. "We're pretty Spartan here," he says. Roswell Park turned to VMware's vSphere 4 for help in managing virtualized parts of the data center. Vaughan found that heavy e-mail traffic, the need to juggle medical images and electronic patient records (Roswell keeps no paper patient records), and patient billing and tracking placed heavy I/O demands on a traditional virtualized setting.
Part of the management challenge lay outside Roswell's virtualized x86 servers. The institute still has a variety of systems and applications: its EMR system runs on an IBM AIX server; Exchange and other Windows applications run on multiple types of x86 servers; the Cerner lab management system runs on HP-UX; and the Lawson financials run on HP's OpenVMS. So in addition to deploying VMware virtualization, Vaughan is moving to a more uniform environment with strong I/O characteristics that will be easier to manage in the long run. With two data centers on campus capable of backing each other up, Roswell invested in two HP BladeSystem Matrixes. Each blade is built with the same set of components, so a patch to the operating system of one can be applied throughout the Matrix.
Vaughan has moved payroll onto the Matrix and will soon add the Exchange servers with their 5,000 accounts. However, many of the legacy non-x86 applications will take far longer to transition.
The Matrix systems consist of two c7000 blade enclosures, each with 14 blades and virtualized as VMware environments. Vaughan's team arranged the hardware, networking, and storage so that they're distributed across the two campus data centers. Half of the blade servers run in one data center, and an identical set runs in the other. Likewise, identical SAN units run in each location and are linked together. If one location fails, operations can be recovered and continued in the other.
Vaughan will eventually have 200 VMs running under 22 ESX Server hypervisors on the Matrix enclosures. "The goal is to automate everything as much as possible," Vaughan says, rather than administer systems manually one at a time.
Storage, networking, and servers are linked logically in the BladeSystem's management interface. A systems architect can design a system through a graphical user interface, moving icons around and setting parameters to create a template system. Those templates can then be activated as virtual servers "in six minutes to six hours, instead of six weeks," Vaughan says. Bandwidth for different types of networks can be assigned from the management tool.
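The template workflow Vaughan describes can be sketched in general terms. This is an illustrative model only, not HP's actual BladeSystem Matrix interface; every class, function, and parameter name here is hypothetical:

```python
# Illustrative sketch of template-based provisioning: a systems architect
# defines a server design once, then activates it repeatedly as virtual
# servers. All names and values below are hypothetical, not HP's API.
from dataclasses import dataclass


@dataclass
class ServerTemplate:
    """A reusable server design linking compute, storage, and network."""
    name: str
    vcpus: int
    memory_gb: int
    san_volume_gb: int
    network_bandwidth_mbps: dict  # bandwidth assigned per network type


def provision(template: ServerTemplate, instance_name: str) -> dict:
    """Activate a template as one concrete virtual server definition."""
    return {
        "instance": instance_name,
        "template": template.name,
        "vcpus": template.vcpus,
        "memory_gb": template.memory_gb,
        "san_volume_gb": template.san_volume_gb,
        "networks": dict(template.network_bandwidth_mbps),
    }


# One template, many identical virtual servers -- the point of the
# "six minutes instead of six weeks" claim.
exchange = ServerTemplate(
    name="exchange-node",
    vcpus=4,
    memory_gb=32,
    san_volume_gb=500,
    network_bandwidth_mbps={"production": 4000, "backup": 1000},
)
servers = [provision(exchange, f"exch-{i:02d}") for i in range(1, 4)]
```

Because each instance inherits every setting from the template, the resulting servers are uniform by construction, which is what makes fleet-wide patching and automation practical.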