Google has improved power usage effectiveness in its data centers for the third year in a row. The biggest overhead hog: the cost of cooling the equipment.
Running rows and rows of servers, storage appliances, and networking equipment makes a data center's monthly electric bill a whopper. That's why data-center operators are constantly working on ways to improve energy efficiency. A couple of tech companies in Silicon Valley have reported considerable progress on that front, and a national organization's figures show average energy efficiency improving as well, though some data-center operators are having more success than others.
A data center's energy efficiency is measured by its power usage effectiveness (PUE) rating: the ratio of the facility's total energy consumption to the energy consumed by its IT equipment alone, meaning the servers, storage, and networking gear. Everything else, including the lights, cooling equipment, air circulation systems, and other items, is collectively referred to as overhead. If a data center has a PUE of 2.0, that means that for every kilowatt hour of electricity used to generate compute cycles, another kilowatt hour is being consumed by overhead. Ideally, the PUE should be as close to 1.0 as possible.
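The PUE arithmetic above can be sketched in a few lines; the function name and sample figures below are illustrative, not from any operator's published methodology:

```python
def pue(total_facility_kwh, it_equipment_kwh):
    """Power usage effectiveness: total facility energy divided by the
    energy consumed by IT equipment (servers, storage, networking).
    Overhead (cooling, lights, air circulation) is the difference."""
    return total_facility_kwh / it_equipment_kwh

# A facility drawing 2 kWh in total for every 1 kWh of compute:
print(pue(2.0, 1.0))        # 2.0 -- overhead equals 100% of compute energy

# Overhead as a fraction of compute energy is simply PUE minus 1,
# so a PUE of 1.14 means overhead is 14% of compute consumption:
print(pue(1.14, 1.0) - 1)
```

This also explains the survey numbers that follow: an average PUE of 1.83 corresponds to overhead costing 83% as much as the computing load itself.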
According to the Uptime Institute, which tracks these statistics, the average PUE for data centers surveyed in 2011 was 1.83, which means that overhead energy costs were 83% of the cost of electricity for the computing equipment. That was from a global survey of 525 data center operators, 71% of whom were in North America.
Google recently reported a PUE of just 1.14 at the end of 2011, averaged across all its data centers, meaning overhead energy usage was only 14% of computing electricity consumption. "I'm delighted to see that," said Joe Kava, senior director of data center construction and operations at Google, who blogged about the numbers March 26. The 2011 PUE is an improvement over the 1.16 rating of 2010 and the 1.22 rating of 2008, when Google first began tracking energy usage.
Data-center operators have a number of different levers at their disposal to manage energy consumption, Kava said, but the biggest energy hog in the overhead is the cost of cooling the equipment. More than 70% of the energy savings opportunity is in cooling equipment. "The biggest and easiest way you can do that is to turn up the temperature in the data center just like in your house," he said.