Facebook Challenges Rivals With Green, Open Source Data Center
CEO Zuckerberg unveiled how the firm borrowed key design principles from its predecessors, while at the same time advancing the state of the art.
Facebook said Thursday that it has built servers for its new data center that conserve more energy than those in its previous data centers or the servers typically built by mainstream suppliers.
At a time when the clients accessing the cloud seem to be in a race to be the smallest, there's another contest going on -- to build huge data centers that use the least energy. By publishing its server specs at OpenCompute.org, Facebook showed both that it had borrowed key design principles from its predecessors and that it had advanced the state of the art.
The result may be an ongoing arms race among the biggest companies on the Web -- Facebook, Google, Amazon.com -- to build not only the biggest but also the most efficient data centers on earth.
Facebook's Mark Zuckerberg, for example, said his firm's new Prineville, Ore., data center had achieved a power usage ratio of 1.07. (The ratio, known as power usage effectiveness, divides the total power a facility draws by the power that actually reaches the computing equipment; the closer to 1.0, the less energy is lost to cooling and power distribution.) Google said state of the art is a ratio of 1.2, but it's been able to achieve 1.1 in one of its centers. There's some one-upmanship evident in Facebook's announcement, at a time when Google is explicitly saying it wants to compete.
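As a rough illustration of what that ratio means, the sketch below computes PUE from facility and IT power figures. The numbers are invented to reproduce the 1.07 and 1.2 ratios cited above, not taken from Facebook or Google.

```python
# Rough sketch of how power usage effectiveness (PUE) is calculated.
# Figures below are hypothetical, chosen only to illustrate the ratios
# mentioned in the article.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Total power drawn by the facility divided by power that reaches IT gear."""
    return total_facility_kw / it_equipment_kw

# A hypothetical 10 MW facility whose servers consume 9.35 MW:
print(round(pue(10_000, 9_346), 2))  # ~1.07 -- only ~7% overhead for cooling, distribution, lighting
print(round(pue(10_000, 8_333), 2))  # ~1.20 -- the "state of the art" figure Google cited
```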
Facebook VP of technical operations Jonathan Heiliger said at an event at the company's Palo Alto headquarters Thursday that the new servers are 38% more energy efficient than their predecessors. Part of the gain is a result of a redesign of the data center power system itself. Facebook brings power off the grid at 480 volts, compared with the 120 volts of household current. Electrical energy is lost at each step of the distribution process from power plant to consumer, due to resistance in the lines and the heat inevitably generated as transformers step the voltage down to usable levels.
Facebook found a way to make that step-down process more efficient in its new central Oregon data center.
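The company didn't detail its distribution design, but the physics behind the higher voltage is straightforward: for the same delivered power, raising the voltage lowers the current, and resistive losses fall with the square of the current. The sketch below works through that arithmetic with invented load and wiring figures.

```python
# Back-of-the-envelope sketch of why distributing power at 480 volts instead of
# 120 volts cuts resistive (I^2 * R) losses in the wiring. The load and wire
# resistance below are illustrative assumptions, not Facebook's actual figures.

def line_loss_watts(power_delivered_w: float, volts: float, wire_resistance_ohms: float) -> float:
    """Heat wasted in the wiring for a given delivered load, voltage, and resistance."""
    current = power_delivered_w / volts          # I = P / V
    return current ** 2 * wire_resistance_ohms   # loss = I^2 * R

LOAD_W = 100_000        # hypothetical 100 kW branch circuit
RESISTANCE_OHMS = 0.02  # hypothetical wiring resistance

print(round(line_loss_watts(LOAD_W, 120, RESISTANCE_OHMS)))  # ~13,889 W lost at 120 V
print(round(line_loss_watts(LOAD_W, 480, RESISTANCE_OHMS)))  # ~868 W lost at 480 V -- 16x less, since loss scales with 1/V^2
```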
At the same time, it feeds power to servers equipped with power-sipping motherboards of its own design. Some of the components found on a standard PC motherboard have been stripped off the Facebook design. At its heart is a dual-socket board built around 85-watt AMD Opteron 6100 or Intel Xeon processors. The design supports six disk drives and up to four fans.

When it comes to server cooling, new data centers built by Google, Amazon.com, and Microsoft do not rely on air conditioning to keep a glass house at 68 degrees or less. They instead circulate ambient air over a surface with water seeping over it, or push it through a mist. The energy consumed in evaporating the water cools the air.
Former head of Microsoft's servers and tools unit Bob Muglia said in 2009 that the process cooled the air in its then-under-construction Chicago data center by 15 degrees, and in most settings that was sufficient to keep servers at the right temperature once it was circulated through their enclosures. The new data centers run hotter than their predecessors, sometimes in the mid-90s to 98 degrees, because IT staffers don't need to work among the servers. Facebook has clearly adopted the cooling-by-evaporation idea, but it didn't pioneer it.
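A back-of-the-envelope energy balance shows how evaporation produces a drop of that size. The sketch below uses standard physical constants for water and air, but the flow rates are illustrative assumptions chosen to land near the 15-degree (Fahrenheit) figure Muglia cited.

```python
# Rough energy-balance sketch of evaporative cooling: the heat that vaporizes
# the water is drawn out of the air stream, lowering its temperature. Constants
# are standard physical values; flow rates are illustrative assumptions.

LATENT_HEAT_J_PER_KG = 2.45e6   # heat of vaporization of water near room temperature
AIR_CP_J_PER_KG_K = 1005        # specific heat of air

def temperature_drop_c(water_evaporated_kg_per_s: float, air_flow_kg_per_s: float) -> float:
    """Dry-bulb temperature drop when the heat of evaporation comes from the air."""
    heat_absorbed = water_evaporated_kg_per_s * LATENT_HEAT_J_PER_KG
    return heat_absorbed / (air_flow_kg_per_s * AIR_CP_J_PER_KG_K)

# Evaporating ~0.034 kg of water per second into 10 kg/s of air:
print(round(temperature_drop_c(0.034, 10), 1))  # ~8.3 C, roughly the 15 F drop Muglia described
```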
In 2009 Google showed a video at a developers conference in which a young, skateboard-equipped technician in T-shirt and shorts wheeled up to a server, removed it from the rack, and inserted a new one, replacing the baffle that guided the airflow. This is not your father's data center.
Another factor that reduces electricity consumption in a Facebook data center is that its server design, unlike the typical enterprise data center server, has a single peripheral slot for an extra network interface card or other device, instead of four or even eight. The network cards or host bus adapters it needs are built into the motherboard.
Facebook's Heiliger said the company has taken the cooling principle a step further by recirculating the heated air coming off the servers and using it to heat other parts of the building during cool months.
The motherboard also contains a hardware monitor chip that receives voltage readings, air inflow and outflow temperatures, and fan speed measurements to maintain the server's internal environment. Fan speeds can be adjusted to hold the target temperature.
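The spec doesn't describe the control logic, but a monitor chip of this kind typically closes a simple feedback loop. The sketch below is a hypothetical proportional adjustment, with the target temperature, gain, and RPM limits invented for illustration.

```python
# Hypothetical sketch of the kind of fan-speed adjustment an on-board monitor
# chip performs: read the exhaust temperature and nudge fan speed toward a
# target. The thresholds and gain here are invented, not from Facebook's spec.

TARGET_OUTLET_C = 35.0
MIN_RPM, MAX_RPM = 2_000, 9_000

def next_fan_rpm(current_rpm: float, outlet_temp_c: float, gain: float = 200.0) -> float:
    """Simple proportional step: speed up when the exhaust runs hot, slow down when cool."""
    error = outlet_temp_c - TARGET_OUTLET_C
    rpm = current_rpm + gain * error
    return max(MIN_RPM, min(MAX_RPM, rpm))

print(next_fan_rpm(4_000, 38.2))  # exhaust is hot -> raise fan speed to 4,640 rpm
print(next_fan_rpm(4_000, 33.0))  # exhaust is cool -> slow the fans to 3,600 rpm, saving power
```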
Instead of expensive backup generators and uninterruptible power supplies, six Facebook servers are connected to a 48-volt battery that cuts in if the power supply fails. Google data centers have relied on a similar lead-acid battery reserve scheme for several years. Google has also manufactured servers of its own design for several years, but has never published their specifications.
In publishing its specification documents, Facebook is challenging Google's and Amazon's closed data center designs, claiming the world will benefit from its doing so. At the same time, Zuckerberg said Facebook had to come up with its own server design: "A lot of the stuff that the mass manufacturers were putting out wasn't exactly what we needed." The Facebook design is geared to a social networking application, although the special requirements of that app weren't spelled out.
Dell has been studying the needs of cloud data centers, and how they differ from their predecessors, for at least three years. It formed its Data Center Solutions business unit to cater to search engine, social networking, and Microsoft Azure cloud needs, and has sold thousands of servers to such customers.
Dell announced Thursday that it will produce servers this year for 12 data centers serving its customers' public and private cloud computing, and will follow those up with 10 more next year. The servers in those data centers will be competitive with the best designs on the market, said Forrest Norrod, general manager of server platforms at Dell.