The university was able to lease a 1,300-square-foot facility with a 13-inch raised floor, but space was tight. The solution, provided by IBM, came in the form of dual-core Opteron servers and a chilled-water cooling system called Cool Blue. Inside the door of the unit, which is attached to the rear of a server rack, are sealed tubes filled with circulating chilled water that can remove up to 55% of the heat generated by a fully populated rack. "Without those technologies, you start doubling everything from square footage to utility costs and getting a lot less computing power back in exchange," Skolnick says.
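The effect of that rear-door heat exchanger is easy to estimate. As a rough sketch (the rack wattage here is hypothetical, not from the article), removing up to 55% of a rack's heat at the door leaves the room's air conditioning to handle only the remainder:

```python
def residual_heat_kw(rack_power_kw, door_removal_fraction=0.55):
    """Heat (kW) left for room air conditioning after a rear-door
    chilled-water heat exchanger removes a fraction at the rack.
    Assumes essentially all electrical power drawn becomes heat."""
    return rack_power_kw * (1 - door_removal_fraction)

# Hypothetical fully populated 20 kW rack: room units see ~9 kW
print(round(residual_heat_kw(20.0), 1))  # 9.0
```

The practical point is that the room-level cooling plant, floor space, and utility feed can all be sized for the residual load rather than the full rack draw, which is what Skolnick means by not "doubling everything."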
The next advance will be quad-core processors, which Intel has said it will introduce in early 2007; AMD plans to follow by the end of that year.
Other vendors, like Azul Systems, offer systems based on processors that pair low clock speeds with many processing engines. Azul's Vega chip has 24 processing cores.
But "you can't get away from the laws of physics," admits Randy Allen, corporate VP of AMD's server and workstation division. "You give us more power and we can deliver more performance." AMD's low-power Opterons are for customers willing to trade unbridled performance for improved performance per watt, he says.
Cooling and power rank as the No. 1 facilities issue among members of AFCOM, an association of data center professionals. According to a recent survey of 200 AFCOM members, data centers are averaging more than one serious outage per year, with more than half of those outages caused by insufficient in-house power to run the centers. One in five data centers runs at 80% or more of its power capacity.
"The cost of power is going up dramatically, and they're just eating it and accepting it," AFCOM founder Leonard Eckhaus says. "It's like buying gas for your car. You have no choice. You have to pay."
A January conference in Santa Clara, Calif., sponsored by AMD, the Environmental Protection Agency, and Sun, attempted to raise industry awareness of cooling and energy issues. Andrew Fenary, manager of the EPA's Energy Star program, says the agency can help coordinate meetings among server makers, cooling-equipment manufacturers, microprocessor vendors, and data center managers to work on strategies for identifying problems and creating solutions. Fenary expects meetings to be held during the year to focus on technical metrics and develop ways to quantify the efficiency and power consumption of servers. The plan is to create a metric, possibly under the EPA Energy Star banner, that will let buyers more readily identify the efficiency of computer products.
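The metric the EPA group has in mind would boil server efficiency down to something a buyer can compare across products. A minimal sketch of such a performance-per-watt figure follows; the eventual Energy Star metric was still undefined at press time, so the function and the server numbers below are purely illustrative assumptions:

```python
def perf_per_watt(benchmark_score, avg_power_watts):
    """Illustrative efficiency metric: benchmark throughput divided by
    average power draw. A stand-in for whatever metric the EPA-led
    group eventually defines, with made-up numbers."""
    return benchmark_score / avg_power_watts

# Two hypothetical servers: slower but frugal vs. faster but power-hungry
frugal = perf_per_watt(1800, 250)   # 7.2 score units per watt
hungry = perf_per_watt(2400, 400)   # 6.0 score units per watt
print(frugal > hungry)  # True
```

Even this toy version shows why such a number matters: the faster box wins on raw performance but loses on efficiency, which is exactly the trade-off AMD's Allen describes for its low-power Opterons.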
But such metrics won't eliminate the "perverse incentives" that persist in some parts of the industry, says Jonathan Koomey, a staff scientist at Lawrence Berkeley National Laboratory and a consulting professor at Stanford University. For example, some data center managers never even see the huge electricity bills their systems are generating. And, when steps are taken to reduce energy consumption, Koomey says, money saved "can just get sucked up into the corporate nexus and is then used for other general purposes."
Getting the situation under control is even more urgent because the industry is on the verge of an infrastructure build-out that has been on hold for several years, Koomey says. "How are we going to address the expansion coming in the next few years?" he asks. "If you don't pursue some type of energy-conscious strategy, you're betting that the price of energy is going to either stay the same or go down. I wouldn't want to bet my company's future on those odds."
Technologists are becoming acutely aware of the vicious cycle: high-performance servers, deployed in ever greater numbers and at ever higher densities, consume power and generate heat, which in turn must be cooled. The beads of sweat on their foreheads? It's an industry running to keep up with its own power-hungry creation.
--with Thomas Claburn
Illustration by Nick Rotondo
[Sidebar charts: "Chip Speed Vs. Power Demand" and "Uncool Data Centers Need To Chill Out"]