Microsoft Takes Supercomputing To The Cloud

Buried beneath the bland verbiage announcing Microsoft's Technical Computing Initiative on Monday is some really exciting stuff.

Alexander Wolfe, Contributor

May 22, 2010


Buried beneath the bland verbiage announcing Microsoft's Technical Computing Initiative on Monday is some really exciting stuff. As Bill Hilf, Redmond's general manager of technical computing, explained it to me, Microsoft is bringing burst- and cluster-computing capability to its Windows Azure platform. The upshot is that anyone will be able to access HPC in the cloud. HPC stands for High-Performance Computing. That's the politically correct acronym for what we used to call supercomputing. Microsoft itself has long offered Windows HPC Server as its operating system in support of highly parallel and cluster-computing systems.

The new initiative doesn't focus on Windows HPC Server, per se, which was what I'd been expecting to hear when Microsoft called to corral me for a phone call about the announcement. Instead, it's about enabling users to access compute cycles -- lots of them, as in, HPC-class performance -- via its Azure cloud computing service.

As Microsoft laid it out in an e-mail, there are three specific areas of focus:

Cloud: Bringing technical computing power to scientists, engineers and analysts through cloud computing to help ensure processing resources are available whenever they are needed -- reliably, consistently and quickly. Supercomputing work may emerge as a "killer app" for the cloud.

Easier, consistent parallel programming: Delivering new tools that will help simplify parallel development from the desktop to the cluster to the cloud.

Powerful new tools: Developing powerful, easy-to-use technical computing tools that will help significantly speed discovery. This includes working with customers and industry partners on innovative solutions that will bring our technical computing vision to life.

Trust me that this is indeed powerful stuff. As Hilf told me in a brief interview: "We've been doing HPC Server and selling infrastructure and tools into supercomputing, but there's really a much broader opportunity. What we're trying to do is democratize supercomputing, to take a capability that's been available to a fraction of users to the broader scientific computing."

In some sense, what this will do is open up what can be characterized as "supercomputing light" to a very broad group of users. There will be two main classes of customers who take advantage of this HPC-class access. The first will be those who need to augment their available capacity with access to additional, on-demand "burst" compute capacity.

The second group, according to Hilf, "is the broad base of users further down the pyramid. People who will never have a cluster, but may want to have the capability exposed to them in the desktop."
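For readers who want a concrete picture of what "burst" capacity means in practice, here is a minimal sketch in Python of the scheduling decision involved: run jobs on the local cluster while free cores remain, and spill the overflow to rented cloud nodes. The names here (Job, schedule, fake_cloud_provisioner) are purely illustrative stand-ins, not Microsoft's actual HPC Server or Azure interfaces.

```python
# Illustrative sketch only: hypothetical names, not Microsoft's actual APIs.
# The point is the "burst" pattern Hilf describes -- use the local cluster
# until it is saturated, then send excess work to on-demand cloud capacity.

from dataclasses import dataclass

@dataclass
class Job:
    name: str
    cores_needed: int

def schedule(jobs, local_cores_free, request_cloud_nodes):
    """Place each job locally if cores are available; otherwise burst to cloud."""
    for job in jobs:
        if job.cores_needed <= local_cores_free:
            local_cores_free -= job.cores_needed
            print(f"{job.name}: running on local cluster")
        else:
            # Not enough local capacity left: rent cloud capacity for the overflow.
            request_cloud_nodes(job)
            print(f"{job.name}: bursting to cloud ({job.cores_needed} cores)")

def fake_cloud_provisioner(job):
    # Stand-in for a real provisioning call to a cloud service such as Azure.
    pass

if __name__ == "__main__":
    queue = [Job("fluid-sim", 64), Job("render-pass", 512), Job("monte-carlo", 2048)]
    schedule(queue, local_cores_free=256, request_cloud_nodes=fake_cloud_provisioner)
```

A real service would of course handle provisioning, data movement and billing behind the scenes; the sketch only shows why on-demand capacity appeals to that first group of customers.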

OK, so when you deconstruct this stuff, you have to ask yourself where one draws the line between true HPC and just needing a bunch of additional capacity. If you look at it that way, it's not a stretch to say that perhaps many of the users of this service won't be traditional HPC customers, but rather (as Hilf admitted) users on the lower rungs who need a little extra oomph.

OTOH, as Hilf put it: "We have a lot of traditional HPC customers who are looking at the cloud as a cost savings."

Which makes perfect sense. Whether this will make such traditional high-end users more likely to postpone the purchase of a new 4P server or cluster in favor of additional cloud capacity is another issue entirely, and one that will be interesting to follow in the months to come.


About the Author

Alexander Wolfe

Contributor

Alexander Wolfe is a former editor for InformationWeek.

