Facebook Challenges With Green, Open Source Data Center
CEO Zuckerberg unveiled how the firm borrowed key design principles from its predecessors, while at the same time advancing the state of the art.
When it comes to server cooling, new data centers built by Google, Amazon.com, and Microsoft do not rely on air conditioning to keep a glass house at 68 degrees Fahrenheit or less. Instead, they circulate ambient air over a surface with water seeping across it, or push it through a mist; the energy absorbed in evaporating the water cools the air.
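The physics of this approach follows the standard direct-evaporative-cooler relation: outlet air temperature falls from the dry-bulb temperature toward the wet-bulb temperature, scaled by the cooler's effectiveness. A minimal sketch, in which the 0.85 effectiveness and the example temperatures are illustrative assumptions rather than figures from any of the companies named:

```python
def evap_outlet_temp_f(dry_bulb_f: float, wet_bulb_f: float,
                       effectiveness: float = 0.85) -> float:
    """Direct evaporative cooling:
    T_out = T_dry - effectiveness * (T_dry - T_wet).

    The effectiveness (0 to 1) expresses how closely the cooler
    drives the outlet air toward the wet-bulb temperature.
    """
    return dry_bulb_f - effectiveness * (dry_bulb_f - wet_bulb_f)

# Example: 95 F intake air with a 75 F wet-bulb temperature
# leaves the cooler at 95 - 0.85 * (95 - 75) = 78 F.
print(evap_outlet_temp_f(95.0, 75.0))
```

Dry climates, where the gap between dry-bulb and wet-bulb temperatures is wide, get the largest temperature drop from this technique, which is one reason such sites are favored for these facilities.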
Bob Muglia, former head of Microsoft's servers and tools unit, said in 2009 that the process cooled the air in its then-under-construction Chicago data center by 15 degrees, and that in most settings this was sufficient to keep servers at the right temperature as the air circulated through their enclosures. The new data centers run hotter than their predecessors, sometimes in the mid-90s to 98 degrees, because IT staffers no longer need to work among the servers. Facebook has clearly adopted the cooling-by-evaporation idea, but it didn't pioneer it.
In 2009 Google showed a video at a developers conference in which a young, skateboard-equipped technician in T-shirt and shorts wheeled up to a server, removed it from the rack, and inserted a new one, replacing a baffle that guided the airflow. This is not your father's data center.
Another factor that reduces electricity consumption in a Facebook data center is its server design: unlike the typical enterprise server, it has a single peripheral slot, where an extra network interface card or other device may be inserted, instead of four or even eight. The network cards or host bus adapters that are needed are built into the motherboard.
Facebook's Heiliger said the company has taken the cooling principle a step further, recirculating the heated air coming off the servers to warm other parts of the building during cool months.
The motherboard also carries a hardware monitor chip that receives voltage readings, air inflow and outflow temperatures, and fan speed measurements to maintain the server's internal environment. Fan speeds can be adjusted to hold a target temperature.
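The feedback loop such a monitor chip runs can be sketched as a simple proportional controller: fan speed rises as the exhaust temperature climbs above the target, clamped to the fan's operating range. The constants and function name below are illustrative assumptions, not values from Facebook's specification:

```python
# Assumed, illustrative constants -- not from the Open Compute spec.
MIN_RPM = 2000   # floor so airflow never stops entirely
MAX_RPM = 9000   # fan's mechanical ceiling
GAIN = 500       # RPM added per degree F above the target

def fan_speed(outflow_temp_f: float, target_f: float = 85.0) -> int:
    """Return a fan speed in RPM proportional to how far the exhaust
    temperature exceeds the target, clamped to [MIN_RPM, MAX_RPM]."""
    error = max(outflow_temp_f - target_f, 0.0)
    rpm = MIN_RPM + GAIN * error
    return int(min(rpm, MAX_RPM))
```

At or below the target the fans idle at the floor speed; real monitor chips typically add hysteresis and ramping so fans don't oscillate around the setpoint, which this sketch omits.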
Instead of expensive backup generators and uninterruptible power supplies, each group of six Facebook servers is connected to a 48-volt battery that cuts in if utility power fails. Google data centers have been organized around a similar lead-acid-battery reserve power scheme for several years. Google has manufactured servers of its own design for several years as well, but has never published their specifications.
In publishing its specification documents, Facebook is challenging Google's and Amazon's closed data center designs, and it claims the world will benefit from its doing so. At the same time, Zuckerberg said Facebook had to come up with its own server design: "A lot of the stuff that the mass manufacturers were putting out wasn't exactly what we needed." The Facebook design is geared to a social networking application, although the special requirements of that app weren't spelled out.
Dell has been studying the needs of cloud data centers, and how they differ from their predecessors, for at least three years. It formed its Data Center Solutions business unit to cater to search engine, social networking, and Microsoft Azure cloud customers, and has sold thousands of servers to them.
Dell announced Thursday it will produce servers for 12 cloud data centers for public and private cloud computing for its customers this year, and follow those up with 10 more next year. The servers in those data centers will be competitive with the best designs on the market, said Forrest Norrod, general manager of server platforms at Dell.