Amazon Launches High Performance Computing Service

The Cluster Compute Instances can be used to form 32-core, high performance computing clusters able to communicate at ten times the speed of standard EC2 instances.

Amazon's most powerful cloud server yet, the Cluster Compute Instance, became available in EC2 on Tuesday. It is designed for high performance computing and can be grouped with other Cluster servers via high-speed networking.

Cluster Compute Instances became generally available July 13, said Peter DeSantis, Amazon Web Services general manager, in an interview.

DeSantis said Cluster Instances will be interconnected with 10-Gb Ethernet; nodes in a cluster will be able to communicate at ten times the speed of standard EC2 instances.

In addition, DeSantis said the Cluster instances are racked together to maximize physical proximity and minimize the distance of any communications between nodes. In the past, Elastic Compute Cloud users have had no control over where two servers that they might activate would be located; now they can direct that Cluster instances be launched to "a placement group" that ensures physical proximity, said DeSantis.

DeSantis said EC2 is now used for workloads ranging from genomic sequence analysis to financial modeling and automotive design. The Cluster Compute Instances are designed to support high performance computing tasks, including parallel processing workloads. "These customers have told us that many of their largest, most complex workloads required additional network performance," he said.

The Cluster server will be Amazon's most expensive. It will be priced at $1.60 per hour, compared with $0.085 for a Small Linux server, $0.34 for a Large Linux server, and $0.68 for an Extra Large Linux server.

At the same time, substantially more resources are devoted to the Cluster Instance, which will also run Linux. The CPU of a Small Linux server consists of a single EC2 compute unit, measured as a 1-GHz Xeon or Opteron processor as of 2007. The CPU of a Cluster Instance is the equivalent of 33.5 such processors. The memory of a Small EC2 server is limited to 1.7 GB; a Cluster Instance has 23 GB. A Small server has 160 GB of local storage versus 1,690 GB on a Cluster Instance.
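Taken together, those figures mean the Cluster Instance is actually cheaper per unit of compute than the entry-level server. A quick back-of-the-envelope calculation, using only the prices and compute-unit counts quoted above:

```python
# Price per EC2 compute unit (ECU), using the figures quoted in the article.
small_price, small_ecu = 0.085, 1        # Small Linux server: $0.085/hr, 1 ECU
cluster_price, cluster_ecu = 1.60, 33.5  # Cluster Compute Instance: $1.60/hr, 33.5 ECU

small_per_ecu = small_price / small_ecu        # $0.085 per ECU-hour
cluster_per_ecu = cluster_price / cluster_ecu  # about $0.048 per ECU-hour

print(f"Small: ${small_per_ecu:.3f}/ECU-hr, Cluster: ${cluster_per_ecu:.3f}/ECU-hr")
```

So although the Cluster Instance carries the highest sticker price, each unit of compute costs roughly 44% less than on a Small server.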

DeSantis said Amazon Web Services wanted to start out with Cluster Instances running Linux "to guarantee performance." At a future date it will add additional operating systems, which in the past has typically meant Windows Server as the second offering. Up to eight Cluster Instances may be grouped together, running on current-generation, four-core Intel Nehalem CPUs in two-way servers, for a total of 32 cores in the cluster. "Any EC2 customer can make use of Cluster instances," DeSantis noted; in other words, high performance computing is available to anyone who self-provisions a cluster and pays the fees. Amazon spokesmen said larger clusters may be built using Cluster Instances, but the largest that can be automatically self-provisioned by an end user is eight instances.
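At the quoted rate, the hourly cost of the largest self-provisioned configuration follows directly from the figures above:

```python
# Hourly cost of the largest cluster an EC2 user can self-provision:
# eight Cluster Compute Instances at $1.60 per hour each.
instances = 8
price_per_hour = 1.60
cluster_cost = instances * price_per_hour  # $12.80 per hour

print(f"8-instance cluster: ${cluster_cost:.2f}/hour")  # → 8-instance cluster: $12.80/hour
```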

DeSantis said the Cluster Instance has been optimized to efficiently use AWS' Elastic Block Storage for storing the results of a computational run. The Cluster instances work with all other standard AWS services as well, such as S3 long-term storage and the SimpleDB database service.

Amazon's announcement cited the Lawrence Berkeley National Laboratory as a primary facility supporting scientific research sponsored by the U.S. Department of Energy. Keith Jackson, a computer scientist at the lab, said he and other researchers had collaborated with Amazon Web Services in test driving the Cluster instances. "In our series of benchmark tests, we found our HPC applications ran 8.5 times faster on Cluster Compute Instances than the previous EC2 instance types," he said.

The announcement also quoted computer science professor David Patterson of the University of California at Berkeley as saying the Cluster instance "fills an important need among scientific computing professionals" and makes EC2 "more viable for technical computing." Patterson is known for his pioneering work on RAID storage and RISC computing.
