

Commentary
Alexander Wolfe
2/1/2010 11:33 AM

Server Den: Inside HP's Converged Infrastructure

Gary Thome, chief architect of HP's Infrastructure Software and Blades group, talks power and cooling like you've never heard it before. Plus, why he thinks Hewlett-Packard's data-center play tops Cisco.

Our quest to learn about different vendors' approaches to Infrastructure 2.0, and to get beyond the hype, takes us this week to Hewlett-Packard, which has bundled its server, storage, and networking play under the "converged infrastructure" umbrella. In this column, I'll focus on my chat with Gary Thome, who is chief architect of HP's Infrastructure Software and Blades group.

First, some context: HP's intention to make its converged infrastructure the centerpiece of its enterprise push was underscored on January 13, when HP CEO Mark Hurd and Microsoft chief Steve Ballmer held a joint press conference. The three-year deal the two companies announced, around what they call an "infrastructure-to-applications model," translates as: we're going to drive customers to Microsoft software and HP enterprise infrastructure.

This may be the most astute move HP has yet taken to blunt the high profile Cisco has achieved with its Unified Computing System, a competing Infrastructure 2.0 play that similarly combines servers and networking.

Cisco was secondary--though certainly not avoided--in my discussion with Thome. I primarily wanted to hear about the hardware and software guts behind HP's converged infrastructure.

Gary told me he was trained as an electrical engineer, and that quickly became apparent during our talk. (As one EE to another, I can recognize these things.) The definitive tell was that my marketing questions were met mostly with the kinds of talking-point responses one learns in media-relations training, but Thome got genuinely passionate when we began talking power and cooling.

Now, power and cooling are generally boring subjects to hear about, but Thome piqued my interest because he made a clear case that these areas--and the techniques HP is applying therein--are differentiators that can pay big dividends in the data center.

Electricity has long been an unpleasant line item on facilities managers' budgets, but the angst caused by hefty power bills burst into public view in 2006, when AMD rented billboards in New York's Times Square and alongside Route 101 in Silicon Valley. The publicity stunt was intended to imprint the scrappy semiconductor maker's stamp on the energy issue. (AMD's argument was that you could lower your data center's electric bill by using Opteron-based servers.)

Putting aside AMD's skin in this game, there's clearly a point to be made. According to perhaps the most authoritative estimate around--from Jonathan Koomey of Lawrence Berkeley National Laboratory, working from IDC-compiled numbers--server electricity use doubled between 2000 and 2005. (Those are the most recent figures.) On the bright side, there's some evidence that the shift to cloud computing is pushing overall consumption down.

Managing power and cooling is a big deal, because as data centers grow, so do electric bills. The standard figure tossed around is that it can cost $25 million to add a megawatt of capacity to a data center. (That figure comes from an Uptime Institute paper.) With server racks running at 1 kW to 3 kW apiece, you're quickly talking real money if you're not careful.
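To make that concrete, here's a quick back-of-the-envelope sketch using only the two figures above; the rack count and per-rack draw in the example are hypothetical, not numbers from HP or the Uptime Institute:

```python
# Back-of-the-envelope capacity cost, using the $25M-per-megawatt figure cited above.
# The rack count and per-rack draw are hypothetical inputs for illustration only.

COST_PER_MEGAWATT = 25_000_000  # dollars per megawatt of added data-center capacity

def capacity_cost(num_racks: int, kw_per_rack: float) -> float:
    """Rough capital cost to add enough capacity to power the given racks."""
    megawatts = num_racks * kw_per_rack / 1000.0
    return megawatts * COST_PER_MEGAWATT

# 200 racks drawing 2 kW each is 0.4 MW -- roughly $10 million of capacity.
print(f"${capacity_cost(200, 2.0):,.0f}")
```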

But that's a macro issue. On the micro side, as far as what HP is doing to engineer its boxes, here's what Thome had to say: "We've built into BladeSystem the ability to throttle pretty much every resource. We can throttle CPUs, voltage-regulator modules, memory, fans, power supplies, all the way down to trying to keep the power consumed as low as possible at any given time."
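To give a feel for what that kind of per-resource throttling involves, here's a minimal sketch of a chassis-level power-capping loop. This illustrates the general technique, not HP's BladeSystem firmware; the component names, the cap, and the proportional-shedding policy are all assumptions of mine:

```python
# Illustrative sketch of an enclosure-level power-capping loop.
# NOT HP's BladeSystem implementation: names, thresholds, and policy are hypothetical.

from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    draw_watts: float   # current measured draw
    min_watts: float    # floor the resource can be throttled down to

POWER_CAP_WATTS = 5000.0  # hypothetical cap for the whole enclosure

def throttle(resources: list[Resource]) -> None:
    """If total draw exceeds the cap, throttle each resource toward its floor,
    shedding the excess in proportion to each resource's available headroom."""
    total = sum(r.draw_watts for r in resources)
    if total <= POWER_CAP_WATTS:
        return
    excess = total - POWER_CAP_WATTS
    headroom = sum(r.draw_watts - r.min_watts for r in resources)
    for r in resources:
        share = (r.draw_watts - r.min_watts) / headroom if headroom else 0.0
        r.draw_watts -= excess * share

enclosure = [
    Resource("cpus", 3200, 1600),
    Resource("memory", 900, 600),
    Resource("fans", 700, 300),
    Resource("power_supplies", 600, 450),
]
throttle(enclosure)
print({r.name: round(r.draw_watts) for r in enclosure})  # total now at the 5 kW cap
```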
