Cloud // Infrastructure as a Service
Commentary
8/3/2011 03:24 PM
Charles Babcock

Data Centers May Not Gobble Earth, After All

Good news: Data center power use didn't grow nearly as fast as predicted over the past five years. You can thank cloud computing and new data center designs.



The use of the Internet is exploding, but fortunately, the consumption of electricity isn't increasing at the same pace. That's true in part because, pound for pound, computers in the cloud run more efficiently than those in a traditional data center.

The Environmental Protection Agency predicted in 2007 that the amount of electricity consumed by data centers would double between 2005 and 2010, based on the rate of new construction it was witnessing. Google, Amazon, Facebook, and dozens of other participants in the digital revolution, including co-location service providers and managed service hosts, have built new data centers, as projected. In spite of that, U.S. data center electricity use grew 36% over the five-year period, not 100%.

That's the figure that Jonathan Koomey, consulting professor in civil and environmental engineering at Stanford University, came up with as he re-examined the EPA's projections. He attributed the slowdown primarily to the recession and secondarily to more efficient technologies, such as virtualization, slowing demand for new servers.
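To put the projection and the outcome on the same footing, here's a quick back-of-the-envelope sketch converting those cumulative figures into compound annual growth rates. The framing is mine, not the EPA's or Koomey's:

```python
# Back-of-the-envelope comparison of the EPA projection (+100% over
# 2005-2010) with Koomey's estimate (+36%).
def annual_growth_rate(total_growth: float, years: int) -> float:
    """Convert cumulative growth into a compound annual rate."""
    return (1 + total_growth) ** (1 / years) - 1

projected = annual_growth_rate(1.00, 5)  # EPA: consumption doubles
actual = annual_growth_rate(0.36, 5)     # Koomey: 36% growth

print(f"Projected: {projected:.1%} per year")  # ~14.9%
print(f"Actual:    {actual:.1%} per year")     # ~6.3%
```

In annual terms, demand grew at less than half the projected pace.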

Still, John Markoff, a writer for the New York Times, was impressed with how far off consumption was from the projected 100% increase. "The slowdown in the rate of growth of electricity use is particularly significant because it comes in the midst of the biggest build-out of new data center capacity in the history of the industry," Markoff wrote in his column July 31.

If a massive build-out has occurred, why didn't electricity consumption grow accordingly?

I suspect recession, virtualization, and more efficient chips do not fully account for this fall-off from the EPA's original projection. Neither the EPA's report nor Koomey's second look at electricity consumption gets the complete measure of what's happened in data center design.

A significant share of new data center construction has been by the new companies succeeding on the Web--Zynga, Facebook, Apple, Google, Amazon.com. But they account for a relatively small percentage of the total number of data centers, with enterprise centers far outnumbering this new construction. Google is probably constructing data centers on an annual, if not continual, basis, or leasing space in wholesale data centers. No one knows for sure, but it is believed to have at least 36 at this point. Google, a pioneer of modern data center design, told Koomey that its data center power use, while large, constituted less than 1% of the worldwide power consumed by data centers. Its pattern of efficient data center construction, which other vendors are emulating, could have had a depressing effect on the projected energy consumption growth rate.

Google has played its cards close to the vest on what constitutes advanced data center design. It builds its own servers, then places them in racks with baffles of its own design that manage the air flow through the racks. When possible, it cools ambient air with a water evaporation system instead of running air conditioners, a practice now known as cooling economization. Amazon, Microsoft, and others have followed Google's lead and come up with their own variations.

Facebook also designs its own servers, racks, and data centers. Unlike Google, it published the specifications April 7 at its OpenCompute.org site. Facebook itself showed that efficient data center design can be specific to a facility's location. In Prineville, Ore., Facebook maximized economization by building where it's cool and dry much of the year. The low humidity of the high desert creates a good climate for evaporation, which can knock as much as 35 degrees off a peak summer outside temperature of 85 as ambient air is drawn into the data center, said Brent Kerby, AMD's guru of server power management, who visited Prineville two months ago.
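That drop is the standard direct evaporative cooling relationship at work: an evaporative system can approach, but not beat, the air's wet-bulb temperature, so the drier the air, the bigger the achievable drop. A minimal sketch, with illustrative numbers I've assumed rather than Facebook's published figures:

```python
# Direct evaporative cooling: outlet air approaches the wet-bulb
# temperature. The 50F wet bulb and 90% effectiveness are illustrative
# assumptions for dry high-desert air, not Facebook's published figures.
def evaporative_outlet_temp_f(dry_bulb_f: float, wet_bulb_f: float,
                              effectiveness: float = 0.9) -> float:
    """Outlet temperature (F) of a direct evaporative cooler."""
    return dry_bulb_f - effectiveness * (dry_bulb_f - wet_bulb_f)

# An 85F afternoon with very dry air:
print(evaporative_outlet_temp_f(85, 50))  # 53.5F -- a drop of over 30 degrees
```

In humid climates the wet-bulb temperature sits close to the dry-bulb temperature, and the same hardware buys almost nothing; hence the appeal of the high desert.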

Facebook then uses an air filtration and distribution system to push the air through big overhead fans down into the "cool" aisle of the data center, where it flows across server motherboards that have been removed from any kind of casing. Cool air is propelled by the servers' own fans across the exposed components, picking up heat and exiting into the hot aisle. The hot aisle collects warmed air coming off two adjacent rows of servers. It is then pumped out of the building.

Kerby said walking down the hot aisle in some data centers is an uncomfortable experience. At Prineville, "What most impressed me was the whole data center climate control system. This was the first data center where I have walked down the hot aisle and it wasn't that bad. It was truly amazing to me how they set up their air flow control," he said in an interview. He added that the hot aisle was warm, about 90 degrees, but other hot aisles he's encountered have been hotter and more humid.

Facebook engineers told Kerby they want to do away entirely with the small fans, one of the fixtures of x86 servers since their inception, to save more energy. But they know they haven't monitored and tested their facility airflow enough to take that step. Eliminating the small fans, however, would take power consumption down another notch.
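How much that last notch is worth depends on how much of a server's draw goes to its fans. A hypothetical illustration; the 5% fan share and fleet size are my assumptions, not Facebook figures:

```python
# Hypothetical illustration of the saving from removing server fans.
fleet_it_kw = 1000   # assumed IT load of a server fleet, in kW
fan_share = 0.05     # assumed fraction of server power drawn by fans

saved_kw = fleet_it_kw * fan_share
print(f"{saved_kw:.0f} kW saved if facility airflow does the fans' work")
```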

Kerby noted that Facebook and other cloud data center builders depart from the traditional approach of pushing air conditioned air up through a raised floor at the base of a server rack, its coolest point. Instead, they shower cool air down on the racks from the top--starting at their warmest point, another gain for simple cooling.

These data centers are also typically on a power grid away from a metropolitan center and close to a source of inexpensive, wholesale power. Yahoo built a big data center in Lockport, N.Y., 20 miles from the cheap hydropower of Niagara Falls. Google and Amazon have built near hydropower dams on the Columbia River in eastern Oregon. Cool air and chilly water are also low cost assets in these locations.

The best gauge of what's happened to data center power consumption is PUE, or Power Usage Effectiveness: the amount of power delivered to the data center versus the amount actually used in executing computing. A 2.0 PUE means your data center uses twice as much power as the computing workloads themselves need; a range of 1.92-2.0 typically applies to most enterprise data centers. You're using as much power to keep the lights on, the door card readers working, and the cool air wafting in as you are in driving the computer equipment.
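As a minimal sketch of that arithmetic, with sample wattages invented for illustration:

```python
# PUE = total facility power / power used by the IT equipment itself.
# The sample figures are invented for illustration.
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness."""
    return total_facility_kw / it_equipment_kw

# A facility drawing 2,000 kW to run 1,000 kW of computing gear:
print(pue(2000, 1000))  # 2.0 -- half the power never reaches a server
```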

Google set off an excellent arms race two years ago when it announced it had pushed its data centers' PUE down to 1.22 and then, in its most modern data center, 1.16. That means, of course, that more of the power being delivered to the facility is being used in computing, less to keep the lights on and the air cool.

Prior to Google announcing its PUE, the former Sun Microsystems had an impressive PUE of 1.28 at its Santa Clara, Calif., data center. Yahoo opened its Lockport data center in September 2010 with long narrow hallways guiding air movement; it had a PUE of 1.08.

Facebook, wanting to announce it had arrived in the big leagues, held a press conference April 7 this year to say Prineville had a PUE of 1.07.

In effect, all of these new generation data centers have cut 38%-40% of normal energy consumption from their operations. In addition to cooling economization, they do a number of other things, including bringing power into the data center and distributing it at high voltage for less power loss.
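That range follows directly from the PUE figures: at a constant computing load, total facility power scales with PUE. A quick sketch comparing typical enterprise values with the new designs' values (the old/new pairings are mine, drawn from the numbers above):

```python
# Fractional cut in total facility power at a constant IT load when PUE falls.
def facility_power_saved(old_pue: float, new_pue: float) -> float:
    return 1 - new_pue / old_pue

print(f"{facility_power_saved(1.92, 1.16):.0%}")  # ~40%
print(f"{facility_power_saved(2.00, 1.22):.0%}")  # ~39%
```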

There's a cautionary note, however, in what is otherwise strong progress in reducing energy consumption.

Jim Trout, CEO of Vantage Data Centers in Santa Clara, Calif., a builder of wholesale data center space, is an expert in data center design. He agrees that facility designs have been a major factor in slowing the growth of electricity consumption. But he warns that, while additional gains are still to come in the new data centers from greater virtualization and power management inside the server components, "the low hanging fruit has already been picked" in the new designs.

Nonetheless, most enterprise data centers have not modernized the way the big Web app and cloud computing vendors have and still consume electricity with a PUE of 2.0. For them to match the Google/Amazon/Facebook efficiencies will be very difficult, given their legacy systems, Trout said in an interview. But in some cases, enterprises are evolving a strategy of placing some workloads in an efficient cloud center, while gradually modernizing the data center, he added.

Given the potential power savings, I think adoption of this hybrid strategy is going to accelerate. The process of moving work into the most efficient facilities will help to slow down increased consumption of electricity. Energy conservation has rarely been advanced as a reason for adopting infrastructure as a service, but maybe humanity's expanding appetite to compute has found a proper destination. Computing in the cloud reduces computing's impact on the earth.

