July 6, 2010
2 Min Read
Continuing its broad, longstanding effort to encourage energy-efficient computing, Google researchers have proposed a new approach to data center network design for large clusters (10,000+ servers) that could lead to significant energy and cost savings.
In "Energy Proportional Datacenter Networks," a paper published last month by Google researchers Dennis Abts, Peter Klausler, Hong Liu, Michael R. Marty, and Philip M. Wells, the authors suggest several ways to build data center networks that use power in proportion to the volume of data being transmitted, rather than drawing high power whether utilization is high or low.
"By operating the network links at a data rate in proportion to the offered traffic intensity, an energy proportional data center network can further improve on the energy savings offered by the flattened butterfly topology," the paper states. "Modern switch chips capable of congestion sensing and adaptive routing already have the essential ingredients to make energy-proportional communication viable for large-scale clusters."
This approach, the authors claim, "can reduce operating expenses for a large-scale cluster by up to $3M over a four-year lifetime."
Key to the researchers' vision is the use of a flattened butterfly network topology rather than the folded-Clos topology. The flattened butterfly configuration, the researchers claim, results in lower power at full utilization and is better suited for techniques that aim to increase the dynamic range of network power -- the ratio of power drawn at full utilization to power drawn while idle.
Comparing a cluster with a flattened butterfly network to a similarly configured folded-Clos network, with an assumed electricity rate of $0.07 per kilowatt-hour and an average Power Usage Effectiveness (PUE) rating of 1.6, the researchers state that the flattened butterfly topology alone should result in $1.6 million of energy savings over the four-year lifetime of the cluster.
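As a rough sanity check on that figure, the only inputs needed are the ones the article states: the electricity rate, the PUE, and the four-year lifetime. The sketch below works backward from the $1.6 million claim to the average power reduction it implies; that implied wattage is my own back-of-envelope inference, not a number from the paper.

```python
# Back-of-envelope: what average power reduction would yield the
# claimed $1.6M savings over four years at the stated rate and PUE?
RATE_PER_KWH = 0.07    # $/kWh, from the article
PUE = 1.6              # facility overhead factor, from the article
HOURS = 4 * 365 * 24   # four-year lifetime in hours (leap days ignored)

# Lifetime cost of one kW of IT load, including PUE overhead:
cost_per_kw = RATE_PER_KWH * PUE * HOURS  # dollars per kW over four years

# Average IT power reduction implied by $1.6M of savings:
implied_kw = 1_600_000 / cost_per_kw
print(f"Implied average power reduction: {implied_kw:.0f} kW")
```

At these inputs, each kilowatt of sustained IT load costs roughly $3,900 over four years, so the claimed savings correspond to shaving about 400 kW of average draw from the cluster's network.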
Further energy savings are projected through dynamically tuning network links to match required performance.
The authors suggest there's an opportunity for makers of network switching chips to deliver products capable of energy-proportional switching. Presently, a network link configured for 2.5 Gb/s consumes 42% of the power of a link configured for 40 Gb/s. Properly optimized, the energy usage should be more like 6.25% when running at 2.5 Gb/s.
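The 6.25% target follows directly from the idea that power should scale linearly with data rate: 2.5 Gb/s is one-sixteenth of 40 Gb/s. A short sketch of that arithmetic, with the "gap" factor being my own derived comparison rather than a figure from the paper:

```python
# Energy-proportional target: link power should scale with data rate.
FULL_RATE_GBPS = 40.0
LOW_RATE_GBPS = 2.5

# Ideal power at 2.5 Gb/s, as a fraction of power at 40 Gb/s:
ideal_fraction = LOW_RATE_GBPS / FULL_RATE_GBPS  # 0.0625, i.e. 6.25%

# Today's links, per the article, draw 42% of full power at 2.5 Gb/s:
actual_fraction = 0.42

# How far current hardware sits from proportionality at this rate:
gap = actual_fraction / ideal_fraction  # actual draw vs. ideal draw
print(f"ideal {ideal_fraction:.2%}, actual {actual_fraction:.0%}, gap {gap:.1f}x")
```

In other words, a link dialed down to 2.5 Gb/s today burns nearly seven times the power that a truly energy-proportional link would.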
About the Author(s)
Editor at Large, Enterprise Mobility
Thomas Claburn has been writing about business and technology since 1996, for publications such as New Architect, PC Computing, InformationWeek, Salon, Wired, and Ziff Davis Smart Business. Before that, he worked in film and television, having earned a not particularly useful master's degree in film production. He wrote the original treatment for 3DO's Killing Time, a short story that appeared in On Spec, and the screenplay for an independent film called The Hanged Man, which he would later direct. He's the author of a science fiction novel, Reflecting Fires, and a sadly neglected blog, Lot 49. His iPhone game, Blocfall, is available through the iTunes App Store. His wife is a talented jazz singer; he does not sing, which is for the best.