HP Improves Its Data Center In A Box

To boost the manageability of its EcoPod containerized servers, HP added a service aisle for IT staff. Do these power-efficient, space-saving pods beat the competition from the likes of Dell?

Charles Babcock, Editor at Large, Cloud

June 6, 2011

HP is ready to show you how you can add 4,400 servers to your data center in 12 weeks, whether you have space for them or not.

HP has become a manufacturer of containerized servers with its EcoPod line. The units ship ready to run once power and cooling connections are hooked up to the shipping container that's dropped off at your site. The idea was pioneered by Sun Microsystems and several smaller manufacturers, such as Verari, which shipped containers to NASA for its Nebula cloud, and i/o Data; the concept has since been picked up by Cisco as well as Dell's data center solutions unit. Microsoft built the first floor of its Chicago cloud data center with such containerized servers.

HP has brought its own ease-of-management ideas to the design. Many containerized servers are packed so tightly that only a small person can reach the back of a rack, where the cabling connects and where a failed server would be pulled out and replaced. In fact, such designs assume the whole load will be replaced with a new one once a certain number of servers inside the container have failed.

HP has a different idea. It calls its approach the "Converged Data Center" because networking, computing cycles, and storage are combined into one unit, ready to be operated as virtual resource pools. HP built a service aisle into a combined package of two containers aligned side by side and connected by an eight-foot corridor, "a shared hot aisle," so servers can be replaced and cabling checked as needed, said Glenn Keels, director of marketing for HP's Hyperscale business unit. The shared space is called a hot aisle because cooling air passes through the devices, exits into the aisle, and is then drawn out through the top.

"It's maximum density with no compromise on serviceability," said Keels. To service a rack in some designs, a rack literally has to be disconnected and moved inside the container, he said.

In addition, HP executives say this approach can be a big electricity saver compared with the traditional raised-floor, air-conditioned data center. Instead of spending $1.2 million a year on electricity for 4,400 servers in such a facility, the EcoPod will require $55,000 worth of electricity under normal operating conditions, Keels said. That is less than 5% of the traditional figure, primarily because the cooling systems sit as close as possible to the devices they are cooling and offer efficient energy management.

Another measure is power usage effectiveness, or PUE, a newer standard that compares the total electricity coming into a data center with the amount actually reaching the computing equipment. A traditional brick-and-mortar data center has a PUE of 2.4, meaning it draws nearly two and a half times as much power as its computing gear alone requires. An EcoPod has a PUE of 1.05 when using only ambient-air cooling, and 1.15 to 1.3 when its air conditioning systems are running. Ambient air between 55 and 90 degrees Fahrenheit is normally enough to keep the pod sufficiently cool. "The EcoPod doesn't have to be in Norway," said Keels.
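
To get a feel for what those PUE figures imply, here is a minimal sketch in Python, assuming nothing beyond the numbers cited in this article: the PUE values above and the 1.8-megawatt computing load mentioned later. The function name and the kilowatt framing are illustrative, not anything HP publishes.

```python
# Illustrative only: PUE = total facility power / IT equipment power.
# The PUE values and the 1.8-MW IT load come from the article; everything
# else (function name, labels, formatting) is assumed for demonstration.

def facility_draw_kw(it_load_kw: float, pue: float) -> float:
    """Total facility power implied by a given IT load and PUE rating."""
    return it_load_kw * pue

IT_LOAD_KW = 1800.0  # the 1.8-megawatt computing load cited for the EcoPod

for label, pue in [
    ("traditional raised-floor data center", 2.4),
    ("EcoPod, ambient-air cooling only", 1.05),
    ("EcoPod, air conditioning running", 1.30),
]:
    total_kw = facility_draw_kw(IT_LOAD_KW, pue)
    overhead_kw = total_kw - IT_LOAD_KW
    print(f"{label}: {total_kw:,.0f} kW total, "
          f"{overhead_kw:,.0f} kW of overhead for cooling and power delivery")
```

By that arithmetic, a 1.8-megawatt computing load implies roughly 4.3 megawatts of total draw at a PUE of 2.4, but only about 1.9 megawatts at 1.05, which is where most of the claimed electricity savings would come from.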

In many climates, a pod can be cooled largely by ambient air pumped into the container and exhausted after it heats up. The container's cooling fans can run at different speeds, and each device inside draws only the volume of air it needs to keep functioning. As the outside temperature approaches 90 degrees, EcoPod users can switch on air conditioning units that can be added to the top of the container.

Asked whether the EcoPod with its shared hot aisle is simply three shipping containers with some sides removed and bolted together, Keels said it was more complicated than that, but allowed that the image offers a fair picture of what the assembled unit looks like.

The pods are designed as Tier 3 data centers, meaning built-in redundancy lets the unit keep running if one source of power fails. They come with uninterruptible power supplies and backup generators.

It would cost $33 million and take two years to build a brick-and-mortar, raised-floor, Tier 3 data center capable of carrying a 1.8-megawatt computing load. For $8.3 million, an EcoPod can be dropped off in 12 weeks on a prepared concrete base, whether in a warehouse-like addition next to the data center or outdoors inside a security fence. The EcoPod is weatherproof, Keels said. It will become available in the fourth quarter. Keels said existing users include HP's internal IT organization and a handful of others who could not be named.

About the Author

Charles Babcock

Editor at Large, Cloud

Charles Babcock is an editor-at-large for InformationWeek and author of Management Strategies for the Cloud Revolution, a McGraw-Hill book. He is the former editor-in-chief of Digital News, former software editor of Computerworld, and former technology editor of Interactive Week. He is a graduate of Syracuse University, where he earned a bachelor's degree in journalism. He joined the publication in 2003.
