Data Centers Slash Energy Usage

Modern data centers use open and green technology to respond faster to customer needs while cutting electricity use by 40%.

Energy efficiency has become part of the modern data center arms race among companies. Facebook achieves an admirable power usage effectiveness (PUE) of 1.07 in the high desert climate of Prineville, Ore., using no electricity for chillers (air conditioners) and relying instead on the evaporation of water dripped onto screens.
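PUE is simply the ratio of total facility power to the power delivered to IT equipment. A minimal sketch of the arithmetic (the kilowatt figures below are invented to illustrate a 1.07 ratio, not Prineville's actual loads):

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by IT power.

    1.0 is the theoretical ideal, meaning every watt entering the
    building reaches servers, storage, and network gear.
    """
    return total_facility_kw / it_equipment_kw

# Hypothetical load figures: a PUE of 1.07 means just 7% overhead
# goes to cooling, power distribution, and lighting.
print(round(pue(10_700.0, 10_000.0), 2))  # 1.07
```

A conventionally chilled facility might run a PUE of 1.5 or higher, which is why eliminating chillers entirely moves the needle so much.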

More flexible hardware
The Open Compute Project, founded by Facebook, scored a major advance in 2014 with the release of a reference design for an open-source top-of-rack switch. Facebook has implemented the switch in its fourth and most recently built data center, in Altoona, Iowa.

Top-of-rack (ToR) switches are an important element of the modern data center, deployed at the top of each server rack to provide intra-data-center switching. Broadcom, Intel, Mellanox, and Accton have all submitted ToR designs. Facebook calls its ToR the Wedge, derived from the "Group Hug" motherboard designed in the Open Compute Project. The Wedge switch can run chips from Intel, AMD, or ARM. Its network operating system, FBOSS, is derived from Linux, giving Facebook two separate open source components that it can modify to suit its needs.

"With FBOSS, Facebook can monitor the network hardware at a deep level to keep its systems running at peak performance at all times," wrote Tom Hollingsworth, who formerly worked for a VAR and now blogs with Gestalt IT Media, in InformationWeek on July 16. Wedge and FBOSS can be configured to match the needs of a specific application in the data center: if an application running on a rack needs 24 ports for its connected servers, the Wedge and FBOSS combination can be configured to provide them, with the full hardware resource devoted to that purpose.
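The article doesn't show FBOSS's actual configuration format, but the idea of dedicating switch ports to one rack application can be sketched in purely hypothetical Python (the function, port names, and application name here are invented for illustration, not FBOSS APIs):

```python
def allocate_ports(app_name: str, servers: list[str],
                   ports_available: int = 48) -> dict[str, str]:
    """Assign one switch port per connected server for a single rack
    application; fail loudly if the ToR switch runs out of ports."""
    if len(servers) > ports_available:
        raise ValueError(f"{app_name} needs {len(servers)} ports, "
                         f"but the switch has only {ports_available}")
    # Map port names (eth1, eth2, ...) to the servers behind them.
    return {f"eth{n}": server for n, server in enumerate(servers, start=1)}

# 24 servers for one application -> 24 dedicated ports on the ToR switch.
port_map = allocate_ports("web-cache", [f"server-{i:02d}" for i in range(24)])
print(len(port_map))  # 24
```

The point of an open NOS like FBOSS is that this kind of policy lives in software the operator controls, rather than in a vendor's fixed firmware.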

Open Compute is also addressing smaller obstacles to a software-defined data center. Canonical and Cumulus Networks collaborated on the Open Network Install Environment (ONIE) to provide an open source path for laying down a network operating system. By using ONIE code, switches can be booted onto a network without proprietary approaches. The commonly used Preboot eXecution Environment (PXE), unlike ONIE, carries a license fee for each server's network interface card added to the network. "By removing this cost we have an opportunity to yet again drive differentiation and cost out of the hardware we procure," wrote bare metal advocates on the OpenCompute.org site.
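In practice, ONIE typically discovers its network-OS installer over DHCP and fetches it over HTTP. A sketch of what serving an installer might look like with a dnsmasq configuration fragment (the addresses, lease range, and image URL are placeholders, not details from the article):

```
# Hypothetical dnsmasq fragment: hand out DHCP leases and advertise an
# installer URL via DHCP option 114 (default-url), which ONIE honors.
dhcp-range=192.0.2.50,192.0.2.150,12h
dhcp-option-force=114,"http://192.0.2.10/onie-installer"
```

If no URL is offered, ONIE also falls back to probing conventionally named installer files, so no per-NIC licensed boot stack is required.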

Chips for the cloud
Chips designed for cloud data center use are another new development in modern data center design.

On Nov. 13 at Amazon Web Services' annual Re:Invent show, CTO Werner Vogels showed off an Intel Xeon chip, a custom version of the Haswell processor, that will be used to power AWS's largest C4 virtual machines. It uses the v3 cores of the E5-2666 processor, running at a 2.9 GHz base clock with turbo speeds up to 3.5 GHz. It's a design for compute-intensive applications, intended to support virtual machines with up to 36 virtual CPUs and 60 GB of RAM.

"These (C4) instances are designed to deliver the highest level of processor performance on EC2. If you've got the workload, we've got the instance!" blogged AWS chief evangelist Jeff Barr. Thus the cloud, originally based on thousands of copies of standardized parts, has come around to using parts customized for its own purposes. It shows how the emphasis of computing has shifted toward the cloud as the genesis of designs instead of following the lead of powerful x86 desktops and servers.

With cloud buyers promising to be a thriving segment of the market as PC growth slows, look for more custom chips to be produced for different suppliers. The possibility is scarcely lost on Intel, which sees more of the chip market resting in the hands of the cloud suppliers each year.

Last April, Gordon McKean, Google's senior director of hardware platforms and chairman of the OpenPower Foundation, posted a photo of a Google motherboard based on IBM's Power8 chip. Google's willingness to chair IBM's initiative to open up the Power architecture signals interest in Power's capabilities, but it's unknown to what extent Google uses Power in its data centers.

Amazon officials have talked publicly of an ARM motherboard for future servers. It's not known whether Google and Amazon might implement an alternative architecture in their infrastructures, which are still believed to be uniformly x86-based. It's possible they will, although there are obstacles that seem to make such a move costly. One is software incompatibility. It's also possible both are simply keeping some negotiating power with Intel before they sit down for their next big chip contract.

Andrew Feldman, a corporate VP at AMD, said the energy-sipping ARM design is the architecture of the future. At the Open Compute Summit in January, he said: "By 2019, ARM will command 25% of the server market," and custom ARM CPUs "will be the norm for mega-datacenters," such as those used by Facebook, Microsoft, Google, and Amazon.

The momentum in component design has shifted toward the needs of large data centers, and some of the most ambitious enterprises are following in the footsteps of the big Web companies by customizing the server, the top-of-rack switch, and the data center design itself. Reduced power usage, smaller operational staffs, and lower costs are among the main advantages of adopting these changes.
