Good news: Data center power use didn't grow nearly as fast as predicted over the past five years. You can thank cloud computing and new data center designs.

Charles Babcock, Editor at Large, Cloud

August 3, 2011


Facebook engineers told Kerby they want to do away entirely with the small internal fans, a fixture of x86 servers since their inception, to save still more energy. But they know they haven't monitored and tested their facility's airflow enough to take that step. Eliminating the fans would take power consumption down another notch.

Kerby noted that Facebook and other cloud data center builders depart from the traditional approach of bringing air-conditioned air up through a raised floor at the base of a server rack, its coolest point. Instead, they shower cool air down on the racks from the top, starting at their warmest point, another gain for simpler cooling.

These data centers also typically sit on a power grid away from a metropolitan center and close to a source of inexpensive, wholesale power. Yahoo built a big data center in Lockport, N.Y., 20 miles from the cheap hydropower of Niagara Falls. Google and Amazon have built near hydropower dams on the Columbia River in eastern Oregon. Cool air and chilly water are also low-cost assets in these locations.

The best gauge of what's happened to data center power consumption is power usage effectiveness, or PUE: the ratio of the total power delivered to the data center to the power actually used in executing computing. A PUE of 2.0 means your data center uses twice as much power as its computing workloads need; most enterprise data centers fall in a range of 1.92 to 2.0. At that level, you're using as much power to keep the lights on, the door card readers working, and the cool air wafting in as you are in driving the computing equipment.
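To make the ratio concrete, here is a minimal sketch of the calculation in Python; the kilowatt figures are illustrative assumptions, not measurements from any real facility.

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

# Illustrative assumption: a facility drawing 2,000 kW overall while its
# servers, storage, and network gear consume 1,000 kW of that total.
print(pue(2000.0, 1000.0))  # 2.0 -- half the power never reaches the computing
```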

Google set off an excellent arms race two years ago when it announced it had pushed its data centers' PUE down to 1.22 and then, in its most modern data center, 1.16. That means, of course, that more of the power delivered to the facility is used in computing and less to keep the lights on and the air cool.

Before Google announced its PUE, the former Sun Microsystems had posted an impressive 1.28 at its Santa Clara, Calif., data center. Yahoo opened its Lockport data center in September 2010 with long, narrow hallways guiding air movement; it had a PUE of 1.08.

Facebook, wanting to announce it had arrived in the big leagues, held a press conference on April 7 of this year to say its Prineville, Ore., facility had a PUE of 1.07.

In effect, all of these new-generation data centers have cut 38%-40% of normal energy consumption out of their operations. In addition to cooling economization, they do a number of other things, including bringing power into the data center and distributing it at high voltage for less power loss.
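The arithmetic behind that 38%-40% figure follows directly from the PUE numbers. A short sketch, assuming the IT load itself stays constant when a workload moves to a more efficient facility:

```python
def facility_savings(old_pue: float, new_pue: float) -> float:
    """Fraction of total energy saved at a fixed IT load.

    Total energy = IT energy * PUE, so savings = 1 - new_pue / old_pue.
    """
    return 1.0 - new_pue / old_pue

# Against the 2.0 PUE typical of enterprise data centers:
for new in (1.22, 1.16, 1.08, 1.07):
    print(f"PUE {new}: {facility_savings(2.0, new):.0%} less total energy")
# Prints roughly 39%, 42%, 46%, and 46% -- about 40% for the earlier
# designs, and better still for the newest facilities.
```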

There's a cautionary note, however, in what is otherwise strong progress in reducing energy consumption.

Jim Trout, CEO of Vantage Data Centers in Santa Clara, Calif., a builder of wholesale data center space, is an expert in data center design. He agrees that facility designs have been a major factor in slowing the growth of electricity consumption. But he warns that, while additional gains are still to come in the new data centers from greater virtualization and power management inside the server components, "the low hanging fruit has already been picked" in the new designs.

Nonetheless, most enterprise data centers have not modernized the way the big Web app and cloud computing vendors have, and they still consume electricity at a PUE of 2.0. For them to match the Google/Amazon/Facebook efficiencies will be very difficult, given their legacy systems, Trout said in an interview. But in some cases, enterprises are evolving a strategy of placing some workloads in an efficient cloud center while gradually modernizing their own data centers, he added.

Given the potential power savings, I think adoption of this hybrid strategy is going to accelerate. Moving work into the most efficient facilities will help slow the growth of electricity consumption. Energy conservation has rarely been advanced as a reason for adopting infrastructure as a service, but maybe humanity's expanding appetite to compute has found a proper destination. Computing in the cloud reduces computing's impact on the earth.


About the Author(s)

Charles Babcock

Editor at Large, Cloud

Charles Babcock is an editor-at-large for InformationWeek and author of Management Strategies for the Cloud Revolution, a McGraw-Hill book. He is the former editor-in-chief of Digital News, former software editor of Computerworld, and former technology editor of Interactive Week. He is a graduate of Syracuse University, where he obtained a bachelor's degree in journalism. He joined InformationWeek in 2003.
