Data Centers Slash Energy Usage
Modern data centers are using open and green technology to respond faster to customer needs while cutting electricity use by as much as 40%.
We've become accustomed to watching the rapid evolution of the components that go into networking, computing, and storage. Those component advances, and much more, made the data center one of the fastest-evolving areas of technology in 2014.
Power supply and distribution, cooling, and new cloud-oriented server design for data centers have all contributed to the advances. Google, Amazon, Facebook, and Microsoft have previously been acknowledged as innovators in new, cloud-oriented data center design. But last year, conventional enterprises joined in the innovation implementation. From Fidelity Investments' discrete, one-megawatt-room Centercore data center design to eBay's off-the-grid, self-reliant approach, data centers are now taking forms that are giant steps ahead of their predecessors.
In September, Fidelity opened its second Centercore implementation after introducing a 500-kilowatt proof-of-concept in Raleigh, N.C. The modules, or steel rooms, can be attached horizontally or stacked vertically, unlike their shipping-container predecessors. Adding a just-in-time "core" unit expands the data center by exactly the amount needed, eliminating the need to overbuild years in advance.
Fidelity's Centercore design balances compute, networking, and storage in proportions that meet Fidelity's needs, and that's a key element of the new design: It aims to leave no space or available electrical power unused. VP of data centers Eric Wells explained that in older data centers, "we found a lot of stranded power and IT capacity, where the infrastructure couldn't take full advantage of the resources available to it because of a crowding together of the wrong mix of elements." More efficient use of combined resources will lead to an expected 40% savings in electricity, even though the new data center will rely on chillers when necessary, an energy-hungry element eschewed in Facebook's most recent facilities.
[Want more on Facebook's Open Compute Project? See Open Source Cloud Hardware Grows Up Fast.]
The Centercore units can be snapped together like Lego blocks, each with its own power distribution, fire suppression, and security. Sliding doors can open one room to another, or seal it off to the outside.
EBay, on the other hand, concentrated on 100% uptime, survivability, and independence from the power grid when it built its new data center in South Jordan, Utah. Since 2013, it has generated electricity on site using fuel cells powered by natural gas. The boom in natural gas from fracking shale formations helped make the facility the first to be fully powered by fuel cells. One advantage: Unlike grid electricity, natural gas deliveries can be locked in at a set price for the next 15 years, according to eBay VP of global foundation services Dean Nelson.
In the South Jordan facility, the backup power supply is not a bank of on-site diesel generators or a basement full of lead-acid batteries. It's the Utah power grid itself, which the facility otherwise leaves untapped.
At its new data center in Maiden, N.C., Apple also relies on a 5-megawatt fuel cell system for part of its electrical supply. Both Apple's and eBay's fuel cell systems were designed by Bloom Energy. Apple also uses a 100-acre solar farm at the site, plus regional wind, solar, and biogas generation options.
Better power distribution
Power distribution inside data centers used to follow the commercial standard of 110 or 220 volts. By distributing power at 400 volts to local transformers, which step it down to 220 or 230 volts, data center operators save about 2% of the power that would otherwise be lost in transmission. Modern wholesale data centers, such as those built by Vantage in Santa Clara, Calif., use such a distribution system, with Facebook and Apple on record as doing so as well. All the large Web companies have in all probability used this approach for several years.
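The physics behind that savings figure can be sanity-checked with a rough sketch. Resistive loss in a feeder is I²R, and for a fixed load, current falls as voltage rises, so loss drops with the square of the voltage ratio. The load, voltage, and resistance values below are illustrative assumptions, not measured figures from any operator:

```python
def resistive_loss_fraction(power_kw: float, voltage_v: float,
                            line_resistance_ohm: float) -> float:
    """I^2 * R feeder loss as a fraction of delivered power.

    Simplified single-conductor model, purely for illustration.
    """
    current_a = power_kw * 1000 / voltage_v   # higher voltage -> lower current
    loss_w = current_a ** 2 * line_resistance_ohm
    return loss_w / (power_kw * 1000)

# Same 50 kW load and feeder resistance at two distribution voltages:
low_v = resistive_loss_fraction(50, 220, 0.02)
high_v = resistive_loss_fraction(50, 400, 0.02)
print(f"{low_v:.1%} vs {high_v:.1%}")  # prints: 2.1% vs 0.6%
```

With these assumed numbers, moving from 220 V to 400 V distribution cuts feeder loss from roughly 2% to under 1% of delivered power, in the same ballpark as the savings the operators cite.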
Apple, Google, Facebook, and Microsoft have all pledged to either produce their own energy for their data centers or use green energy sources to reduce their corporate carbon footprints. Facebook's new data center in Luleå, Sweden, relies on renewable hydropower in the northern part of the country. Efficient power usage, as expressed by the power usage effectiveness (PUE) measure, and reliance on renewable, low-impact sources are part of the modern data center arms race. Facebook achieves an admirable PUE of 1.07 in the high desert climate of Prineville, Ore., by using no electricity for chillers (air conditioners), relying instead on the evaporation of water dripped onto screens.
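PUE is simply total facility power divided by the power that reaches IT equipment, so a perfect score is 1.0 and everything above it is overhead (cooling, lighting, distribution loss). A minimal sketch of the calculation; the wattage figures are made up to land on Facebook's reported 1.07, not actual Prineville data:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT equipment power."""
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Illustrative: a 10 MW IT load with 0.7 MW of cooling, lighting, and
# distribution overhead works out to the 1.07 figure cited above.
print(round(pue(10_700, 10_000), 2))  # prints: 1.07
```

For comparison, a conventional enterprise data center with chillers often runs closer to a PUE of 2.0, meaning a full watt of overhead for every watt of computing.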
More flexible hardware
The Open Compute Project founded by Facebook scored a major advance in 2014 with the release of a reference model for an open-source top-of-rack switch. Facebook has implemented the switch in its fourth and most recently built data center, in Altoona, Iowa.
Top-of-rack switches are an important element of the modern data center, providing switching within the data center from the top of each rack. Broadcom, Intel, Mellanox, and Accton have all submitted ToR designs. Facebook calls its ToR switch the Wedge, derived from the "Group Hug" motherboard designed in the Open Compute Project. The Wedge can run chips from Intel, AMD, or ARM. Its network operating system, FBOSS, is derived from Linux, giving Facebook two separate open source components that it can modify to suit its needs.
"With FBOSS, Facebook can monitor the network hardware at a deep level to keep its systems running at peak performance at all times," wrote Tom Hollingsworth, a former VAR and current blogger with Gestalt IT Media, in InformationWeek on July 16. Wedge and FBOSS can be configured to match the needs of a specific application in the data center. If an application running on a rack needs 24 ports for its connected servers, the Wedge and FBOSS combo can be configured to provide it, with the full hardware resource devoted to that purpose.
Open Compute also is addressing smaller obstacles to a software-defined data center. Canonical and Cumulus Networks collaborated on the Open Network Install Environment (ONIE) to provide an open source path for laying down a network operating system. By using ONIE code, servers can be booted into a network without using proprietary approaches. The commonly used Preboot eXecution Environment (PXE) approach, unlike ONIE, carries a license fee for each server network interface card added to the network. "By removing this cost we have an opportunity to yet again drive differentiation and cost out of the hardware we procure," wrote bare metal advocates on the OpenCompute.org site.
Chips for the cloud
Chips designed for cloud data center use are another new development in modern data center design.
On Nov. 13 at Amazon Web Services' annual Re:Invent show, CTO Werner Vogels showed off a custom Intel Xeon chip, a version of the Haswell-generation E5-2666 v3, that will power AWS's largest C4 virtual machines. Its cores run at a 2.9 GHz base clock, reaching up to 3.5 GHz. The design targets compute-intensive applications and supports virtual machines with up to 36 virtual CPUs and 60 GB of RAM.
"These (C4) instances are designed to deliver the highest level of processor performance on EC2. If you've got the workload, we've got the instance!" blogged AWS chief evangelist Jeff Barr. Thus the cloud, originally based on thousands of copies of standardized parts, has come around to using parts customized for its own purposes. It shows how the emphasis of computing has shifted toward the cloud as the genesis of designs instead of following the lead of powerful x86 desktops and servers.
With cloud buyers promising to be a thriving segment of the market as PC growth slows, look for more custom chips to be produced for different suppliers. The possibility is scarcely lost on Intel, which sees more of the chip market resting in the hands of the cloud suppliers each year.
Last April, Google's senior director of hardware platforms Gordon McKean, chairman of the OpenPower Foundation, posted a photo of a Google motherboard based on IBM's Power8 chip. Google's willingness to chair IBM's initiative to make the Power architecture more open is an expression of interest in Power's capabilities, but it's unknown to what extent Google uses Power in its data centers.
Amazon officials have talked publicly of an ARM motherboard for future servers. It's not known whether Google and Amazon might implement an alternative architecture in their infrastructures, which are still believed to be uniformly x86-based. It's possible they will, although there are obstacles that seem to make such a move costly. One is software incompatibility. It's also possible both are simply keeping some negotiating power with Intel before they sit down for their next big chip contract.
Andrew Feldman, corporate VP of AMD, said the energy-sipping ARM design is the architecture of the future. At the Open Compute Summit in January, he said: "By 2019, ARM will command 25% of the server market," and custom ARM CPUs "will be the norm for mega-datacenters," such as those used by Facebook, Microsoft, Google, and Amazon.
The momentum in component design has shifted toward the needs of large data centers, and some of the most ambitious enterprises are following in the footsteps of the big Web companies, customizing the server, the top-of-rack switch, and the data center design itself. Reduced power usage, smaller operational staffs, and lower costs are among the main advantages of adopting these changes.