But the company is more concerned with providing the power to pursue answers to "the grand challenges" than with raw speed, said Colin Parris, general manager of IBM Power Systems. Grand challenges include questions such as how to predict earthquakes in time to take precautions.
"We actually do the supercomputing capability because we're doing the grand challenges," Parris said. "We don't set out to win one of these; if what we do results in our client gaining the top spot, that's perfect but that wasn't our intention."
The IBM Sequoia BlueGene/Q supercomputer, installed at the Department of Energy's Lawrence Livermore National Laboratory, runs at 16.32 petaflops, using 1.6 million compute cores spread across 96 racks, each roughly the size of a large refrigerator, Parris said.
To grasp how fast that is, he said, "If you have to understand a drug interaction on the heart, on a one-petaflop computer it would take two years for one simulation; for a 10-petaflop it would drop to two days. A 16-petaflop can do it now in a few hours."
By comparison, the second-ranked computer on the list, Fujitsu's "K Computer" at the RIKEN Advanced Institute for Computational Science in Kobe, Japan, runs at 10.51 petaflops, and No. 3, the Mira supercomputer--another IBM BlueGene/Q system, located at Argonne National Laboratory in Illinois--runs at 8.15 petaflops.
In addition to Sequoia at No. 1 and Mira at No. 3, two other supercomputers owned by the federal government made the top 20: Jaguar, at Oak Ridge National Laboratory in Tennessee, was sixth, and Cielo, jointly operated by Sandia and Los Alamos national laboratories, ranked fifteenth.
Perhaps more significantly, the Sequoia supercomputer's water cooling system is almost 2.5 times more energy efficient than air cooling, and will save millions of dollars in energy costs.
"The issue is that most supercomputers are air-cooled, and they can only be so efficient," Parris said. "You have to force the air between the processors themselves and the air itself has to be cold. You have to have the space, so the supercomputers are getting bigger and bigger, and you have to get very cold air into these very tight spaces."
With water as the coolant extracting heat from the processors, less space is needed between components and the need for air conditioning eases, he said.
The energy savings are a major benefit for Lawrence Livermore.
"It's been a 10-year journey. This represents the culmination of it; it's probably my last machine, so it's significant to me personally," said Michel McCoy, program director for advanced simulation and computing at Lawrence Livermore. "I'm also the computer center director--looking at the power bills going up year and after year, it was pretty easy to say we're going to be spending our whole budget on power unless we do something about that."
One of the projects Sequoia will be used for at Lawrence Livermore is to help maintain the safety and security of the U.S. national stockpile of nuclear weapons, McCoy said.
There are "kind of two ways to use the machine. One is for very, very large weapons engineering [and] scientific calculations, such as large molecular dynamics," McCoy said. Then sometimes "those results are used to calibrate or inform another class of calculation, integrated design calculations, [which] are many packages working together, a million or more lines of code, with exquisite synchronization."
Sequoia will run uncertainty quantification suites of perhaps 20 to 40 simultaneous simulations that differ very slightly from one another, looking for the variances those differences create. Then it will run a much more demanding 2-D or 3-D calculation against which to compare them, confirming the validity of the calculations at the lower resolution, McCoy said.
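The idea behind such an uncertainty quantification suite can be illustrated with a minimal, hypothetical sketch: run an ensemble of simulations whose inputs are perturbed very slightly, then measure the spread in the outputs. The toy model, parameter names, and member count below are illustrative assumptions, not Sequoia's actual codes.

```python
import random
import statistics

def simulate(perturbation, seed):
    """Hypothetical stand-in for one member of a UQ suite: a toy
    model whose output depends on a slightly perturbed input."""
    rng = random.Random(seed)
    base = 1.0 + perturbation        # slightly perturbed model parameter
    noise = rng.gauss(0.0, 0.01)     # internal model variability
    return base ** 2 + noise         # toy "quantity of interest"

def run_uq_suite(n_members=30, spread=0.05, seed=42):
    """Run n_members simulations with very slightly different inputs
    and report the variance those differences create in the output."""
    rng = random.Random(seed)
    results = []
    for i in range(n_members):
        perturbation = rng.uniform(-spread, spread)
        results.append(simulate(perturbation, seed=seed + i))
    return statistics.mean(results), statistics.variance(results)

mean, var = run_uq_suite()
print(f"ensemble mean={mean:.4f}, variance={var:.6f}")
```

On a machine like Sequoia the point is that all 20 to 40 members can run at once, so the ensemble costs little more wall-clock time than a single simulation.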
As a resource, Sequoia will be shared among Lawrence Livermore, Los Alamos, and Sandia national laboratories. Representatives of the three facilities will meet every six months to decide which projects have priority to use its capabilities. "There are two criteria: the project's importance and the ability to use the resource effectively," McCoy said.
The TOP500 list of supercomputers is compiled twice a year by researchers at the University of Mannheim, Lawrence Berkeley National Laboratory, and the University of Tennessee, Knoxville.