Microsoft is working on a project to produce a programmable interface for Open Compute switches, coming later this year.

Charles Babcock, Editor at Large, Cloud

March 13, 2015

6 Min Read
(Image: Facebook)


As one of the dominant companies in software, Microsoft is finding itself working with a bunch of new hardware partners, such as ASIC supplier Broadcom and switch-maker Accton, as it surges ahead with contributions to the Open Compute Project.

The Open Compute Project aims to spur development of open-source hardware using open designs that any hardware maker can use to build data center gear such as servers and switches. Initiated by major data center users such as Facebook and Wall Street banks, Open Compute has drawn involvement from some of the largest tech vendors, including most recently Cisco.

Microsoft is trying to perfect a switch abstraction interface (SAI) that would sit atop open source switches. Facebook has contributed two examples of open source switches to the OCP: its top-of-rack switch, called Wedge, and its data center spine switch, called 6-Pack. Other switching vendors, such as Juniper, have contributed switch designs of their own.

Microsoft is leading an SAI project inside Open Compute and several elements of SAI are already in place. A prototype API is available for testing.

Given Microsoft's skill in software development and programming interfaces, the company's heavy involvement and notable progress on this project represent a potentially unsettling development for established switch builders.

Switching gear still tends to be strongly proprietary, with vendors locking down the firmware and switch operating system inside, converting what could be visible and comprehensible into an impenetrable black box, as far as customers are concerned.

[Want to learn more about momentum at the Open Compute Project? See Open Compute: Apple, Cisco Join, While HP Expands.]

With an open API sitting on top of the switch's firmware, which drives the ASIC chips that typically go into a switch and control its functions, the switch becomes accessible to programmers seeking to get it to do different things. If the switch is scrutinizing packets with a high degree of thoroughness to keep the network secure, it could be reprogrammed through the switch abstraction interface to let high-priority traffic through more quickly.
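To make the idea concrete, here is a minimal sketch of what programming a switch through such an abstraction layer could look like. The SwitchClient class, its methods, and the rule attributes are invented for illustration; they are not the actual SAI calls.

```python
# Hypothetical sketch of programming a switch through an open abstraction
# layer. The SwitchClient class and its methods are invented for
# illustration; they are not the actual SAI API.

class SwitchClient:
    """Toy stand-in for a vendor-neutral switch abstraction interface."""

    def __init__(self, mgmt_address):
        self.mgmt_address = mgmt_address
        self.acl_rules = []

    def add_acl_rule(self, match, action, priority):
        # In a real SAI-backed stack this would translate into ASIC table
        # entries; here we simply record the intent.
        self.acl_rules.append({"match": match, "action": action,
                               "priority": priority})


# Keep deep inspection as the default, but let latency-sensitive traffic
# bypass it via a higher-priority rule.
switch = SwitchClient("10.0.0.1")
switch.add_acl_rule(match={"dscp": 46}, action="forward_fast_path", priority=100)
switch.add_acl_rule(match={"any": True}, action="deep_inspect", priority=10)
```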

Broadcom and Mellanox have agreed to produce ASICs (application-specific integrated circuits) that will support Open Compute's switch abstraction interface. The interface lets switch builders produce hardware built around those ASICs, and it's a candidate for an OpenFlow-style network, where the switch may do one task at the start of the workday and another at the end.
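That "different task at different times of day" idea can be sketched as a simple policy schedule a controller might evaluate before pushing rules down through SAI or OpenFlow. The policy names and the business-hours window below are assumptions made for illustration.

```python
from datetime import datetime

# Illustrative time-of-day policy selection for a programmable switch. The
# policy names and the 8-to-6 business window are assumptions; a real
# controller would push OpenFlow rules or SAI attribute changes instead of
# returning a string.

BUSINESS_HOURS_POLICY = "prioritize_voice_and_video"
OFF_HOURS_POLICY = "prioritize_backup_and_replication"

def select_policy(now):
    """Pick a forwarding policy based on the hour of day."""
    return BUSINESS_HOURS_POLICY if 8 <= now.hour < 18 else OFF_HOURS_POLICY

print(select_policy(datetime.now()))
```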

Right now, the SAI project merely aspires to make it possible to produce a programmable, plain-vanilla switch. Both Accton and Dell are working on switches containing SAI-supporting ASICs, and the hardware should be available later this year, said Kushagra Vaid, Microsoft's general manager of server engineering for cloud and enterprise.

"SAI is right in the middle of where Open Compute networking is today," said Kushagra Vaid, Microsoft's general manager of server engineering, cloud, and enterprise, in an interview with InformationWeek. Once it's in place, more software development will start producing software-defined networking on top of it, following the principles of the Open Flow protocol or other approaches, he said. The goal now is to get a working switch abstraction interface in place.

But once it is in place, network managers and equipment suppliers will find that "OCP has a lot of momentum," he said. Plain-vanilla switching hardware will proliferate, and the disruption that many network observers have been predicting will start to occur.

Dell and Cumulus Networks are already producing operating systems for such switches, and Facebook has produced one called FBoss for its top-of-rack switch.

Microsoft Server And Battery Efforts

Microsoft has been active on the server hardware front as well, though that's less surprising than its reach into networking innovation. A year ago, it contributed its version of a cloud server component, the Open Cloud Server, in a chassis that occupies 12U of a server rack. The module can be configured for dense computing or dense storage, depending on the needs of the cloud where it's being installed. Vaid pointed out that Microsoft's Azure cloud needed such a design to handle the different operations it hosts, from Bing search to Office 365 to Azure public cloud virtual machine hosting. Microsoft contributed a second, blade-based module to the project last November.

During an address at the Open Compute Summit, held in San Jose this week, Vaid pointed out how Microsoft Azure data centers have innovated in the field of providing uninterruptible power. Large data centers frequently have two rooms full of lead-acid batteries that are kept fully charged and available in case of grid power failure. The batteries can power the data center for 10 to 15 minutes, enough time to get diesel generators running to provide longer-lasting substitute power.
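For a sense of the scale involved, here is a rough ride-through calculation. The 1 MW load is an assumed, illustrative figure rather than a detail of any Microsoft facility; the 15-minute bridge is the runtime described above.

```python
# Rough estimate of the battery energy needed to carry a data center until
# its generators come online. The 1 MW load is an assumed figure for
# illustration; the 15-minute bridge is the runtime cited in the article.

it_load_kw = 1_000.0      # assumed critical IT load
bridge_minutes = 15       # time to get diesel generators running

energy_needed_kwh = it_load_kw * (bridge_minutes / 60)
print(f"Battery energy to bridge {bridge_minutes} minutes at "
      f"{it_load_kw:.0f} kW: {energy_needed_kwh:.0f} kWh")
```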

But it's not commonly known how much electricity that precautionary measure consumes. As power is brought into the data center, some of it is converted from alternating current to direct current to charge the batteries, then converted back to AC. Anyone who's ever handled a PC power supply knows both stages of the conversion generate heat, which means some of the energy is bleeding out of the system. In addition, the monitoring and controls for the uninterruptible power supply system consume electricity of their own.
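A back-of-the-envelope calculation shows how the double conversion bleeds energy. The 95% stage efficiencies are assumptions chosen for illustration, not figures reported by Microsoft.

```python
# Back-of-the-envelope estimate of energy lost in a double-conversion UPS
# path. The 95% stage efficiencies are illustrative assumptions, not
# numbers reported by Microsoft.

facility_power_kw = 1_000.0      # power entering the UPS path
rectifier_efficiency = 0.95      # AC -> DC stage (charging side)
inverter_efficiency = 0.95       # DC -> AC stage (delivery side)

delivered_kw = facility_power_kw * rectifier_efficiency * inverter_efficiency
lost_kw = facility_power_kw - delivered_kw

print(f"Delivered to the IT load: {delivered_kw:.0f} kW")
print(f"Lost as heat in conversion: {lost_kw:.0f} kW "
      f"({lost_kw / facility_power_kw:.1%} of the input)")
```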

Vaid called Shaun Harris, director of engineering, on stage to illustrate how Microsoft took a new approach to this power problem. Harris was carrying a cordless handyman's drill with a lithium battery pack at the base of its handle. The cylindrical cells have been adapted into a similar pack that is built into each Azure cloud server, moving the UPS out of a central room and onto the server itself. Eighteen lithium batteries go into each pack.

Google was an early innovator in the field, but hasn't disclosed all the details of how it builds servers.

Recharging lead-acid batteries siphoned off 8% of the power coming into the data center, Vaid said. By switching to lithium battery packs on the servers, Microsoft improved its PUE rating by 15%, a large gain in the battle to make data centers more efficient. PUE stands for power usage effectiveness: a measure of 1.0 would mean that all the power brought into the data center is used to compute. The typical enterprise data center consumes almost twice as much power as the equipment doing the actual computing needs, which translates to a PUE of 1.8 or 1.9. The most efficient cloud data centers, such as those operated by Facebook and Microsoft, lower their PUEs to 1.06 or 1.07.
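Since PUE is simply the ratio of total facility power to the power that reaches the IT equipment, the gap between a PUE of 1.8 and one of 1.06 can be made concrete with a short calculation; the 1 MW IT load below is an assumed example figure.

```python
# PUE = total facility power / IT equipment power. The 1 MW IT load is an
# assumed example used to compare the two PUE levels cited in the article.

it_load_mw = 1.0

for pue in (1.8, 1.06):
    total_mw = it_load_mw * pue
    overhead_mw = total_mw - it_load_mw
    print(f"PUE {pue}: {total_mw:.2f} MW drawn from the grid, "
          f"{overhead_mw:.2f} MW spent on cooling, power conversion, and UPS")
```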


About the Author(s)

Charles Babcock

Editor at Large, Cloud

Charles Babcock is an editor-at-large for InformationWeek and author of Management Strategies for the Cloud Revolution, a McGraw-Hill book. He is the former editor-in-chief of Digital News, former software editor of Computerworld, and former technology editor of Interactive Week. He is a graduate of Syracuse University, where he obtained a bachelor's degree in journalism. He joined the publication in 2003.

