QLogic Sees Supercomputers In The Cloud
The real growth market for high-performance computing is the cloud, not enterprise data centers, according to QLogic, which introduced software to make HPC clusters faster, more scalable, and more reliable.
QLogic says InfiniBand Fabric Suite (IFS) 7.0 delivers those gains when combined with its existing TrueScale InfiniBand I/O. The company has already announced one customer for the system, the National Nuclear Security Administration, whose Sierra supercomputer will link 20,000 processors with TrueScale. Planned to be running by September, Sierra is designed to simulate nuclear weapons tests and will be deployed at Lawrence Livermore, Sandia, and Los Alamos National Labs.
But according to QLogic, supercomputers aren't just for the likes of Los Alamos anymore. "The enterprise market is three times bigger, but HPC is where the growth is," said Joe Yaworski, director of product and solutions marketing at QLogic, in an interview. "If you look at what's happening in the enterprise space, it's consolidation, moving applications into the cloud." The cloud providers that host these applications have much higher computing requirements than a typical enterprise, meaning they increasingly turn to HPC rather than typical virtual servers. That means a diminished role for the enterprise data center and an enhanced one for HPC.
Many enterprises also are building their own supercomputers or renting time from HPC services, QLogic said, because the constant increase in performance makes computer simulation increasingly cost-effective compared with building things in the real world. The most vivid demonstration is the rise of CGI in moviemaking, but the same shift is taking place everywhere. "If there's a single technology that cuts across all industries, it is high performance computing," said Yaworski. Whereas supercomputers were once justifiable mainly to replace physical experiments in nuclear physics or protein folding, they're now used in applications from soap powder packaging design to streamlining potato chip manufacturing. In the former case, they simulate the strength of different boxes when dropped under varying conditions; in the latter, they optimize the flow of chips through a production line.
To help bring HPC to the masses, QLogic has introduced a variety of new features in IFS 7.0, many of which will be familiar to IT managers accustomed to Internet routing. Quality of service looks like the most important: it lets IT prioritize particular workloads, which matters as HPC moves from single-purpose clusters to systems that handle many virtual machines. Others include congestion control and automated fault detection, both important because HPC clusters require tight synchronization between processors. "A single slow node can slow down an entire cluster, because each node waits to get data from all other nodes," said Yaworski. "It's the weakest link." To avoid this, the software can automatically diagnose problems such as poor PCI connections and incorrectly installed processors, as well as outright hardware failures.
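Yaworski's "weakest link" point can be illustrated with a toy model. In a tightly synchronized cluster, each collective step (for example, an MPI allreduce) finishes only when the slowest node does, so step time is the maximum of the per-node times, not the average. The node counts and timings below are invented purely for illustration:

```python
# Toy model of why one slow node drags down a whole synchronized cluster.
# At each collective step, every node must wait for the slowest participant,
# so the step time equals max(per-node compute times).

def step_time(node_times):
    """Time for one synchronized step: gated by the slowest node."""
    return max(node_times)

# 16 healthy nodes, each taking ~1.0 time unit per step...
healthy = [1.0] * 16
# ...and the same cluster with one degraded node (say, a flaky PCI link)
# running 3x slower.
degraded = [1.0] * 15 + [3.0]

steps = 1000
print(step_time(healthy) * steps)   # total runtime, all nodes healthy
print(step_time(degraded) * steps)  # one bad node triples the total runtime
```

Even though only 1 of 16 nodes is slow, the cluster runs at that node's speed, which is why automated detection of degraded links and misconfigured hardware matters so much in practice.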
QLogic has open-sourced its HPC software, meaning other vendors are free to use it, though the company said none have done so. Open source is a requirement for large HPC customers, who want to be able to tweak the software themselves rather than rely on an outside vendor.