Microsoft Makes Azure Server Design Open-Source
Open-source hardware movement wins a surprise supporter: Microsoft joins the Open Compute Project and shares its server design ideas.
The open-source hardware movement got a new and unlikely convert Tuesday in the form of Microsoft, the company that once so vehemently resisted the admission of Linux into the datacenter. Microsoft's membership was announced at the fifth Open Compute Summit, which convened in San Jose, Calif.
Microsoft has a lot to contribute to an open-source hardware organization, said Bill Laing, corporate VP of the company's Server and Cloud Division. "Microsoft is pleased and proud to be here today. We've been inspired by the example of Facebook and all the contributors to Open Compute," Laing told about 3,300 attendees at the San Jose Convention Center.
Microsoft has invested more than $1 billion in building a global presence of Azure cloud datacenters and is running a million servers, he said. Unlike Facebook, however, Microsoft isn't running Azure only from "mega-datacenters." Rather, it operates a variety of facilities, from a building with capacity for 300,000 servers outside Chicago and a purpose-built cloud center in Quincy, Wash., to much smaller facilities in other parts of the world, including some co-location sites, Laing said in an interview after his speech.
With these different types of locations in mind, Microsoft had to design a server that would meet multiple demands in a variety of facilities. In some cases the workload is the Bing search engine; in others, Office 365, Xbox online gaming, or end-user customers running their own workloads. The Microsoft design, soon to be submitted to OCP, looks nothing like the stripped-down motherboard Facebook uses to equip its datacenters.
Laing told attendees Microsoft was submitting the design specifications for a 12U chassis that combines server blades and disk blades; four of the 12U units fit into a standard datacenter rack. The design allows Microsoft to order units that are either compute intensive or storage intensive, depending on the needs of a particular location.
"We think it fits very well with the overall strategy of OCP," Laing told the group. He said Microsoft will release all its design, documentation, and deployment information on the unit.
"The benefits compared to traditional servers include a 40% reduction in cost around its simplicity, a 15% gain in power efficiency, and a 50% improvement in deployment and service times."
Laing joined Microsoft as a datacenter architect in 1999 and took no part in the jibes and denunciations of Linux that came from the top of the company a few years later. Nevertheless, it sounded odd to hear a high-ranking Microsoft official say the company was willing to do for open-source hardware what it had once done with Windows Server -- "drive the transformation of the enterprise datacenter."
The Open Compute movement also got help from another unlikely source: a new server rack design, contributed not by one of the many hardware manufacturers that make up OCP's membership, but by a major Boston financial services company, Fidelity Investments.
Fidelity, an Open Compute enthusiast, has already installed servers based on Open Compute designs; they make up about 33% of the servers in its datacenters. The servers are produced by a new breed of suppliers known as original design manufacturers, or ODMs.
On Tuesday, Fidelity submitted its Open Bridge Rack design to hold OCP servers. It converts a standard enterprise server rack into one that's compliant with Open Compute's 21-inch Open Rack spec. That specification emerged over the last two years for servers going into Facebook's ultra-modern datacenters. Unlike standard 19-inch enterprise datacenter racks, Open Rack requires a 21-inch opening in the front to allow extra-wide device trays.
Facebook's datacenters need the broader rack front to increase air flow over the dense components packed into its racks; Facebook uses ambient air rather than chillers and air conditioning to cool equipment. As a result, its Prineville, Ore., datacenter complex has one of the best power usage effectiveness (PUE) ratings in the industry, at 1.06-1.08. PUE is the ratio of total facility power to the power consumed by IT equipment, so a rating near 1.0 means most of the power coming into the building actually drives computing gear rather than lighting, air conditioning, and other auxiliary systems. The typical enterprise datacenter uses as much power on those auxiliary functions as on computing, resulting in a PUE of 2.0.
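To make the PUE arithmetic concrete, here is a minimal sketch. The definition (total facility power divided by IT equipment power) is the industry-standard one, but the kilowatt figures below are illustrative, not reported numbers:

```python
# A minimal sketch of the PUE arithmetic. Only the 1.06 and 2.0 ratios
# come from the article; the kilowatt figures are hypothetical.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT power."""
    return total_facility_kw / it_equipment_kw

# Facebook-style facility: nearly all incoming power reaches the servers.
print(round(pue(total_facility_kw=10_600, it_equipment_kw=10_000), 2))  # 1.06

# Typical enterprise facility: overhead roughly equals the computing load.
print(round(pue(total_facility_kw=20_000, it_equipment_kw=10_000), 2))  # 2.0
```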
At the same time, the 21-inch Open Rack represents one of the barriers to wider adoption of open-source hardware concepts. What's good for a purpose-built cloud datacenter won't fit into more heterogeneous enterprise datacenters.
Brian Obermesser, director of datacenter architecture for Fidelity, presented the Open Bridge Rack to OCP's Open Rack working group and explained why Fidelity had produced it. "Our desire was not to replace racks in our datacenters on a regular basis," he said. In some cases, Fidelity needs 19-inch racks, and in others, it wants to go with the wider design.
At last January's OCP hackathon, Fidelity developers proposed a cross-over design, but straightforward attempts to implement one made the rack bulkier, heavier, and more difficult to manufacture, contrary to the goals of open-source hardware, which seeks plainer, simpler, and cheaper designs. Fidelity's design team eventually found it could produce a rack that served both needs by mounting the rack's upright supports on "dual-sided, adaptable rails" that let it function as either a 19- or 21-inch model, Obermesser said.
The Open Bridge Rack can also handle power delivered at more than 400 volts, as Facebook does in its datacenters, or at 208 volts, the level at which power is delivered in Fidelity's "legacy facilities." Most enterprise datacenters use 208 volts.
Obermesser said the Fidelity design was received with enthusiasm by other financial services companies as they learned about it. Like them, Fidelity "already has racks on our floor. We don't want to throw them away."
Even Facebook officials have said it's possible to get carried away with rack design and stray too far from the tried-and-true model in use in most enterprise datacenters. Facebook started out with a new rack design on steroids, built to hold 5,000 pounds of cables, wiring, and devices. The typical loaded rack can be handled by three or four movers in a datacenter; Facebook called out 10 workers to maneuver its prototype rack off the loading dock in Prineville and down a gently sloping ramp inside the datacenter. In transit, the rack proved to have a mind of its own and accelerated down the ramp despite the restraining efforts of the 10-man crew. No harm came from the incident, but there's still a dent in the wall at the bottom of the ramp.