Feature
8/30/2007 06:15 PM
Art Wittmann

The Cold, Green Facts

Buying energy-efficient technology isn't the only--or even the best--way to cut down on energy consumption in the data center. Rethinking the way you use the technology you already have can make a bigger impact.

CHANGING BEST PRACTICES

The good news is that for most organizations, the pressure to remodel or build new data centers can be alleviated through improved server and storage hygiene. But even as you get more out of existing data centers, new challenges threaten long-held best practices. As certain racks become more densely populated with 1U servers and blade systems, perforated floor tiles on a raised floor can no longer supply enough cold air for the systems in those racks. For facilities built in the last decade, typical raised-floor cooling systems can remove about 7 kilowatts of heat per rack. Even today, most data centers won't use that much power per rack, but in certain instances they can use far more. For example, a fully loaded rack of blade servers can draw 30 kilowatts or more--only specialized, localized cooling systems can handle that sort of per-rack load.
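
To make the per-rack arithmetic concrete, here's a minimal sketch that compares estimated rack heat loads against a 7-kilowatt raised-floor budget. The server counts and wattages are illustrative assumptions, not figures from any particular product.

```python
# Rough per-rack heat-load check against a raised-floor cooling budget.
# The server counts and wattages below are illustrative assumptions.

RAISED_FLOOR_LIMIT_KW = 7.0   # typical per-rack capacity of raised-floor cooling

def rack_load_kw(servers: int, watts_per_server: float) -> float:
    """Estimate rack heat load; nearly all input power ends up as heat."""
    return servers * watts_per_server / 1000.0

mixed_rack = rack_load_kw(servers=12, watts_per_server=400)   # lightly loaded mixed rack (assumed)
one_u_rack = rack_load_kw(servers=40, watts_per_server=350)   # dense 1U rack (assumed 350 W each)
blade_rack = rack_load_kw(servers=64, watts_per_server=470)   # fully loaded blade rack (assumed)

for name, load in [("mixed rack", mixed_rack), ("1U rack", one_u_rack), ("blade rack", blade_rack)]:
    verdict = "OK for raised-floor cooling" if load <= RAISED_FLOOR_LIMIT_KW else "needs supplemental cooling"
    print(f"{name}: {load:.1f} kW -> {verdict}")
```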

In the past, the advice was to spread out the load: put blade servers and other high-powered gear in with lower-consumption storage and networking systems, or simply leave the racks partially empty. While that's still good advice for those who can pull it off, increasingly the geometry of the data center doesn't allow it. Spreading out the load can push the average power draw per rack beyond what most data centers can deliver. The answer, then, is to pull those high-demand systems back together and use rack-based or row-based cooling systems to augment the room-based air conditioning.
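
The trade-off can be sketched with a toy model: spreading a fixed blade load across every rack versus consolidating it into a few racks served by supplemental cooling. All of the rack counts and loads below are assumptions chosen for illustration.

```python
# Compare two placement strategies for the same blade load: spread it across
# every rack, or consolidate it into a few racks with supplemental cooling.
# All figures are illustrative assumptions.

ROOM_LIMIT_KW = 7.0          # per-rack heat the room cooling can absorb
racks = 20
base_load_kw = 4.5           # storage/network/low-density servers per rack (assumed)
blade_load_kw = 60.0         # total blade heat that has to go somewhere (assumed)

# Strategy 1: spread the blades evenly across all racks.
spread_per_rack = base_load_kw + blade_load_kw / racks
print(f"spread: {spread_per_rack:.1f} kW per rack "
      f"({'within' if spread_per_rack <= ROOM_LIMIT_KW else 'over'} the room limit)")

# Strategy 2: consolidate the blades into two racks fitted with rack-based
# cooling that absorbs their heat locally, leaving the other racks at base load.
blade_racks = 2
per_blade_rack = blade_load_kw / blade_racks      # handled by rack/row cooling
print(f"consolidated: {blade_racks} racks at {per_blade_rack:.0f} kW on local cooling, "
      f"remaining racks at {base_load_kw:.1f} kW on room cooling")
```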

Rack-based cooling systems are available from a number of vendors. Two with very different approaches are IBM and HP. IBM's eServer Rear Door Heat eXchanger replaces the back door of a standard IBM rack. The door uses a building's chilled water supply to remove up to 55% of the heat generated by the racked systems.

The benefit of this approach is its simplicity and price, which is as low as $4,300. The system, introduced two years ago, removes heat before it enters the data center. By lowering the thermal footprint of the racked equipment, the IBM system can move the high-water mark from 7 kilowatts per rack to about 15 kilowatts, a nice gain for the price. The only downside is that the IBM solution requires water pressure of 60 PSI; not all building systems can supply that much pressure, particularly if many of these racks will be deployed.
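
A quick back-of-the-envelope check shows how a door that removes up to 55% of a rack's heat at the source moves the practical limit from about 7 kilowatts to roughly 15 kilowatts per rack; the sketch below simply restates that arithmetic.

```python
# Back-of-the-envelope check of the rear-door heat exchanger figures:
# if the door removes up to 55% of a rack's heat before it reaches the room,
# how much rack load still fits under a ~7 kW room-cooling budget?

ROOM_LIMIT_KW = 7.0
DOOR_REMOVAL_FRACTION = 0.55   # up to 55% of heat removed by the chilled-water door

max_rack_kw = ROOM_LIMIT_KW / (1.0 - DOOR_REMOVAL_FRACTION)
print(f"supportable rack load: {max_rack_kw:.1f} kW")   # ~15.6 kW, in line with the ~15 kW figure

residual_for_15kw = 15.0 * (1.0 - DOOR_REMOVAL_FRACTION)
print(f"room cooling still handles {residual_for_15kw:.2f} kW of a 15 kW rack")  # 6.75 kW
```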

HP's solution is more comprehensive, takes up more floor space, and costs considerably more. Introduced last year, its Modular Cooling System also uses the existing chilled water supply but adds its own enclosed fans and pumps. The result is a self-contained unit that can remove 30 kilowatts of heat with no impact on the room-based cooling system. Taking your hottest-running, most power-hungry systems and segregating them into a rack that removes 100% of their generated heat goes a long way toward extending the life of a data center. The racks cost $30,000 apiece, but if they mean not having to build a new data center, they're worth it.
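
The cost argument comes down to simple multiplication. The sketch below uses the $30,000 per-rack price from the article; the number of hot racks and the cost of new data center space are purely hypothetical placeholders.

```python
# Rough cost comparison: fitting self-contained cooling racks versus expanding
# into new data center space. The per-rack price comes from the article; the
# build-out cost and rack count are hypothetical placeholders.

COOLING_RACK_COST = 30_000               # enclosed 30 kW cooling rack, per unit
HYPOTHETICAL_BUILDOUT_COST = 5_000_000   # assumed cost of new data center space

hot_racks = 10                           # racks of high-density gear to contain (assumed)
retrofit_cost = hot_racks * COOLING_RACK_COST
print(f"retrofit: ${retrofit_cost:,} vs. build-out: ${HYPOTHETICAL_BUILDOUT_COST:,}")
```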

If you already own the racks and simply want a method for extracting large amounts of heat, Liebert makes systems that mount on or above racks. The company says that its XD systems remove up to 30 kilowatts per rack.

Finally, row-based systems such as American Power Conversion's InfraStruXure and Liebert's XDH use half-rack-width heat exchangers placed between racks of equipment. The heat exchangers pull exhaust from the back, or hot-aisle side, of the racks and blow conditioned air out the front. Because these systems substantially limit the mixing of hot exhaust air with cooled air--with APC's product, you can put a roof and doors on the hot aisle for full containment--they can be much more efficient than typical computer room air conditioning, or CRAC, units. Where CRAC units can draw as much as 60% of the power required by the systems they're meant to cool, APC says its system can draw as little as 40%.
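
Those overhead percentages translate directly into power. The sketch below applies the 60% and 40% figures quoted above to an assumed IT load to show the size of the difference.

```python
# Compare cooling overhead as a fraction of the IT load it supports:
# room-based CRAC units at up to ~60% of IT power versus hot-aisle
# row-based cooling at ~40%, per the figures quoted in the article.

def cooling_power_kw(it_load_kw: float, overhead_fraction: float) -> float:
    """Power drawn by the cooling plant for a given IT load."""
    return it_load_kw * overhead_fraction

it_load_kw = 200.0                     # assumed total IT load for the example
crac_kw = cooling_power_kw(it_load_kw, 0.60)
row_kw = cooling_power_kw(it_load_kw, 0.40)
print(f"CRAC: {crac_kw:.0f} kW, row-based: {row_kw:.0f} kW, "
      f"savings: {crac_kw - row_kw:.0f} kW ({(crac_kw - row_kw) / crac_kw:.0%})")
```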

Any of these systems will go a long way toward extending the life of a data center. However, if the limiting factor is the capacity of the cooling towers on the building's roof--that is, the ability of the building's existing systems to produce chilled water--then deploying these rack and row solutions is practical only if you shut off some of your existing CRAC units. The good news is that, quite often, you can do just that.
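
One way to think about this constraint is as a fixed chilled-water budget shared by the CRAC units and the new rack or row coolers. The sketch below, with assumed capacities, shows how adding coolers forces some CRAC units offline to stay within the plant's limit.

```python
# Sketch of the chilled-water budget: rack/row coolers and CRAC units draw
# from the same fixed plant capacity, so adding the former usually means
# shutting off some of the latter. All capacities are illustrative.

PLANT_CAPACITY_KW = 500.0        # heat the existing chillers/towers can reject (assumed)

crac_units = 8
crac_duty_kw = 50.0              # heat each CRAC unit currently rejects (assumed)
new_rack_coolers = 4
rack_cooler_duty_kw = 30.0       # heat each new rack/row cooler will reject

demand = crac_units * crac_duty_kw + new_rack_coolers * rack_cooler_duty_kw
while demand > PLANT_CAPACITY_KW and crac_units > 0:
    crac_units -= 1              # take a CRAC unit offline to stay within plant capacity
    demand -= crac_duty_kw

print(f"run {crac_units} CRAC units alongside {new_rack_coolers} rack coolers "
      f"({demand:.0f} kW of {PLANT_CAPACITY_KW:.0f} kW plant capacity)")
```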

Overcapacity in CRAC units is easy to spot: if you need to put on a sweater, or perhaps a parka, to go into your data center, you have more room-based cooling than you need. With proper planning, the ambient temperature of the data center can be as high as 78 degrees Fahrenheit, says HP's Paul Perez, VP for scalable data center infrastructure, while most data centers run at ambient temperatures well below 70 degrees. Perez says that for each degree of increase in ambient temperature, figure on at least a few percentage points of reduced energy consumption for the cooling systems.
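
Applied naively, that rule of thumb gives a rough estimate of the payoff from raising the setpoint. The per-degree factor and baseline consumption below are assumptions, and the linear scaling is a simplification of the "at least a few percentage points per degree" guidance.

```python
# Estimate cooling-energy savings from raising the data center setpoint.
# The percent-per-degree factor and the baseline consumption are assumptions;
# treating the savings as linear in temperature is a simplification.

SAVINGS_PER_DEGREE_F = 0.03             # assume ~3% less cooling energy per degree raised

current_setpoint_f = 68
target_setpoint_f = 78
cooling_energy_kwh_per_year = 400_000   # assumed baseline cooling consumption

degrees_raised = target_setpoint_f - current_setpoint_f
savings_fraction = min(degrees_raised * SAVINGS_PER_DEGREE_F, 1.0)
print(f"raising {current_setpoint_f}F -> {target_setpoint_f}F saves roughly "
      f"{savings_fraction:.0%} (~{cooling_energy_kwh_per_year * savings_fraction:,.0f} kWh/yr)")
```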
