U.S. Navy Intranet Realizes Big Savings Through Virtualization

EDS has helped the military consolidate 1,200 x86 servers down to 200, each hosting multiple VMware ESX virtual machines.

Charles Babcock, Editor at Large, Cloud

October 10, 2008


The Navy and Marine Corps Intranet, one of the largest networks in the federal government, is providing a lesson to the private sector on how to cut energy consumption and save capital costs.

The massive intranet ties together 700,000 users and disparate Naval facilities, from the Marine Corps base at Quantico, Va., to centers and shipyards in San Diego, Hawaii, and Japan. The NMCI runs on 40 different server farms.

In 2001, the Navy outsourced management of NMCI to EDS, the IT services group that recently became an HP subsidiary. EDS won plaudits when it got the network up and running again one week after taking a hit in the Sept. 11, 2001, attacks; the Navy lost 30 of its servers and 70% of its space when an airliner hit the Pentagon that day.

Now EDS faces a different challenge: making the 2,700 physical servers that power NMCI a smaller target for charges of hogging power, consuming space, and draining the Navy's IT capital budget. Brandon Kern, the NMCI infrastructure manager for EDS, has taken a giant step in that direction by consolidating 1,200 x86 servers down to 200, each hosting multiple VMware ESX virtual machines.

That means Kern has installed an average of six virtual machines per host so far, and he'd like to achieve a higher, nine-to-one ratio overall as he tackles the next 1,500 physical servers. All in all, he thinks he can drop more than 2,700 servers down to 300. That will save $1.6 million a year in electricity costs.

"We will also save $1 million in our technology refresh budget next year," he adds. "Instead of $1.5 million, we'll spend $500,000. I only have to buy 300 servers instead of close to 3,000." Is it realistic to shoot for a 9:1 ratio in virtual machines per host? Kern says he's actually shooting for 12:1 or even a 13:1 ratio. The initial 9:1 leaves him with what he believes is enough headroom to expand on a set of 300 servers.

"We filled up the first 200 with the heavy hitters, Microsoft Exchange and other customer-facing services," he said of phase one of his program. In his second phase, he'll be dealing with less traffic-intensive applications.

It also matters what type of server you try to virtualize. Kern's server of choice is the Dell PowerEdge R900, a four-socket rack server running quad-core CPUs. It's equipped with 32 GB of memory, six network connections, and six host bus adapters for storage traffic. In other words, it's a heavily configured rack server designed to host multiple virtual machines. HP and IBM also produce servers, including blades, optimized for virtual machine loads.
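
As a rough illustration of how such a host gets sized, memory is often the binding constraint. In the sketch below, the host figure matches the R900 configuration described above, but the per-VM allocations and hypervisor overhead are hypothetical examples, not NMCI's actual numbers:

```python
# Rough host-sizing sketch. HOST_MEMORY_GB matches the R900 configuration
# described in the story; the per-VM allocations and hypervisor overhead
# are hypothetical examples, not NMCI's actual figures.

HOST_MEMORY_GB = 32          # R900 as configured
HYPERVISOR_OVERHEAD_GB = 2   # assumed reserve for ESX itself

def max_vms(per_vm_gb: float) -> int:
    """How many guests fit if each is allocated per_vm_gb of RAM."""
    return int((HOST_MEMORY_GB - HYPERVISOR_OVERHEAD_GB) // per_vm_gb)

for alloc in (2.0, 3.0, 4.0):
    print(f"{alloc:.0f} GB per VM -> up to {max_vms(alloc)} guests per host")
```

At 2 to 3 GB per guest, that works out to 10 to 15 guests per host, which brackets the 9:1 to 13:1 ratios Kern cites.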

In terms of customer satisfaction, he said, the consolidated servers not only chew up 50% less energy but also suffer 50% less downtime. "If one server fails, the virtual machines are moved to another server" and users scarcely notice the outage, he said.

The network previously relied on a form of Windows clustering that moved application services from one copy of the operating system to another. Now he relies on Vizioncore's vRanger to protect against server or cluster outages. It's more efficient, he said, to boot a new virtual machine out of storage and let it resume an application's processing than to move workloads around.
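
The restart-from-storage approach is straightforward to picture: because a virtual machine's files live on shared storage, any surviving host can register and boot them. The following is a minimal illustration of that logic in Python; it models the behavior described here, not vRanger's or VMware's actual interfaces:

```python
# Illustrative failover logic: VM files sit on shared storage, so when a
# host dies its guests are re-registered and booted on a survivor. This
# models the behavior described in the story, not any real VMware API.

from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    vms: list = field(default_factory=list)
    alive: bool = True

def fail_over(failed: Host, survivors: list) -> None:
    """Move each guest's definition to the least-loaded surviving host."""
    failed.alive = False
    for vm in failed.vms:
        target = min(survivors, key=lambda h: len(h.vms))
        target.vms.append(vm)          # re-register from shared storage
        print(f"Booting {vm} on {target.name}")
    failed.vms.clear()

a, b, c = Host("esx-a", ["exchange-1", "sql-1"]), Host("esx-b"), Host("esx-c")
fail_over(a, [b, c])
```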

Through his experience with virtualization, he's also beginning to think differently about disaster recovery: not single-server outages, but "the smoking hole type of scenario," he said, invoking the 9/11 experience.

With a heavily virtualized data center, exact copies of hardware don't need to be standing by to run an organization's software infrastructure. The infrastructure can be stored as a set of virtual machines, each representing an operating system and middleware stack, at another data center and brought up quickly in response to a disaster at the primary site. The alternative center can be part of his own organization, a rented facility, or part of someone else's infrastructure in a cooperative agreement in which each party maintains surplus computing power for an emergency. He thinks further savings are possible through such an arrangement, and that VMware's Site Recovery Manager is the right tool for the task.
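
In data terms, a recovery site in that model holds little more than a plan: an ordered list of virtual machine images to boot. A hypothetical sketch follows; the site names, tiers, and format are invented for illustration and are not Site Recovery Manager's actual configuration:

```python
# Hypothetical recovery plan: the protected site ships VM images to the
# recovery site, which holds only this ordered boot list until a disaster.
# Names and tiers are invented for illustration; this is not SRM's format.

recovery_plan = {
    "protected_site": "primary-dc",
    "recovery_site": "alternate-dc",
    "boot_order": [
        {"tier": 1, "vms": ["domain-controller", "dns"]},
        {"tier": 2, "vms": ["exchange", "sql-server"]},
        {"tier": 3, "vms": ["web-frontends"]},
    ],
}

def execute(plan: dict) -> None:
    """Bring up each tier in order; guests within a tier can boot in parallel."""
    for tier in plan["boot_order"]:
        for vm in tier["vms"]:
            print(f"Tier {tier['tier']}: booting {vm} at {plan['recovery_site']}")

execute(recovery_plan)
```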

Kern issued one caution. The more you virtualize the data center, the more you will need to load up on storage. Saving copies of virtual machines takes space. And virtual machines tend to proliferate. But virtualization also makes it easier to perform maintenance without disruptions and to provision and manage new servers.
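
That storage caution is easy to quantify. The estimate below uses entirely hypothetical figures for VM count, image size, and retention; none of them come from NMCI:

```python
# Hypothetical storage estimate for retained VM image copies. All figures
# here are assumptions for illustration, not NMCI's numbers.

VM_COUNT = 2_400           # guests after full consolidation (assumed)
AVG_IMAGE_GB = 20          # assumed average VM disk image size
COPIES_RETAINED = 3        # assumed backup copies kept per VM

total_tb = VM_COUNT * AVG_IMAGE_GB * COPIES_RETAINED / 1_024
print(f"Estimated backup storage: {total_tb:,.0f} TB")
```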

Virtualization "is a change in the way you do business and the way you manage IT. ... Virtualization doesn't just touch one team. It touches the Active Directory team, the SQL Server team. ... They all have to understand it's a bigger challenge than we expected," Kern said.

EDS started managing the NMCI in 2001 under a five-year contract. The Navy extended the contract for three years in 2006, raising its total value to $6.9 billion.

About the Author

Charles Babcock

Editor at Large, Cloud

Charles Babcock is an editor-at-large for InformationWeek and author of Management Strategies for the Cloud Revolution, a McGraw-Hill book. He is the former editor-in-chief of Digital News, former software editor of Computerworld, and former technology editor of Interactive Week. He is a graduate of Syracuse University, where he earned a bachelor's degree in journalism. He joined InformationWeek in 2003.
