Hardware & Infrastructure

Power Play

A Japanese supercomputer ignites debate over how the government should seed research for the fastest machines--and whether any of it helps businesses

But it still doesn't come close to the Earth Simulator, so a slew of efforts are under way to reach that system's stratospheric performance. In November, the Energy Department disclosed a $290 million deal with IBM to build two supercomputers at Lawrence Livermore National Laboratory. ASCI Purple, a 100-teraflop machine based on IBM's next-generation Power 5 processor, and Blue Gene/L, slated to generate 360 teraflops, are scheduled for delivery late next year. Earlier this month, IBM signed a nine-year, $200 million contract with the Commerce Department's National Weather Service to host a 7.3-teraflop system that could expand to crunch 100 trillion operations a second by the end of the decade.

Even vector systems are hot again. Cray last year delivered its government-funded X1 vector-processing system; Oak Ridge National Lab is an early customer. IBM is proposing a system, code-named Blue Planet, for delivery in 2005 based on its virtual vector architecture, code-named Viva, that could run at a peak speed of 160 teraflops--and sustain perhaps a quarter of that.

In government, the watchword is speeding "time to insight"--the lag between launching a job and getting an answer. "Our applications challenge all computers on which we try to run them," says Bill Reed, outgoing director of the office of advanced simulation and computing at the National Nuclear Security Administration, which manages the ASCI program. NNSA's apps, which predict the explosive force of nuclear weapons, can harness anywhere from a few percent to about 30% of the power of the computers that host them. Emerging designs such as Blue Gene/L and Cray's X1 could improve that efficiency.
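The gap between a machine's headline speed and what applications actually harness can be illustrated with simple arithmetic. The peak figures and efficiency ranges below come from numbers quoted in this article; the helper function itself is just an illustration, not any lab's actual methodology:

```python
def sustained_teraflops(peak_tf, efficiency):
    """Delivered throughput, given peak speed (teraflops) and the
    fraction of peak an application actually harnesses."""
    return peak_tf * efficiency

# ASCI Purple: 100-teraflop peak; NNSA codes harness a few percent
# to about 30% of the host machine's power.
low = sustained_teraflops(100, 0.03)    # roughly 3 teraflops delivered
high = sustained_teraflops(100, 0.30)   # roughly 30 teraflops delivered

# IBM's proposed Blue Planet: 160-teraflop peak, sustaining
# "perhaps a quarter of that."
blue_planet = sustained_teraflops(160, 0.25)  # roughly 40 teraflops
```

The spread between `low` and `high` is the "time to insight" problem in miniature: on the same hardware, one code sees a 3-teraflop machine and another sees a 30-teraflop machine.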

"We're working toward petaflop computing and see no way to get there without resorting to tens of thousands of processors," says Reed, who'll retire this week. "We've become identified with the commodity off-the-shelf design philosophy. [But] there may be other options for us in the future."

Yet that commodity approach has a lot of advocates in the private sector, where some supercomputing managers are skeptical that superadvanced, federally funded R&D jibes with their goals. "The government installations are under enormous pressure to demonstrate their relevance and value to industry," says Pete Bradley, associate fellow for high-intensity computing at Pratt & Whitney, the $7.6 billion-a-year aircraft-engine division of United Technologies Corp. Pratt & Whitney has worked with NASA and Argonne National Laboratory on high-performance computing projects and has used Argonne-developed code. But the national conversation emphasizes speeds and feeds, not return on investment, he says. Meanwhile, Pratt has moved "pretty much everything" in supercomputing to Intel Xeon-based PCs running Linux. "I would like to see the debate in terms of what problems can be solved, versus the internal measurements of the computer," Bradley says.

Chrysler Group's Picklo says that in the '90s, an hour of computing time on a Cray machine cost about $100, versus $10 on a RISC system. Now, with Intel clusters, "we're flirting with $1 an hour," he says. "It almost seems like business and government no longer have the same objectives."


Projects such as the Earth Simulator may have trouble finding commercial uses, says Turek, VP of deep computing at IBM.

Not everyone on the vendor side is convinced, either. The Earth Simulator was designed for specialized applications, requiring much tuning for performance, says Dave Turek, VP of deep computing at IBM. "The commercialization of this will run into the kinds of issues that prevailed for the last 10 years," Turek says. "The problem proponents of vector architecture face is while they may find applications, how do you craft a business around that technology? It was the government back in the mid-'90s that said no more oddball, one-off things."

In May, Gordon Bell and Jim Gray, researchers at Microsoft, argued that federal money spent on big iron at national supercomputing centers would be better allocated to scientific research teams rewriting apps for inexpensive clusters. Approaches such as grid computing can federate these clusters into even more powerful systems, the two have said. Meanwhile, commodity king Dell Computer has won high-performance computing contracts at Boeing, Pratt & Whitney, and Stanford University. The push for power comes at a time when HP, IBM, and Sun are putting more emphasis on networking, integration, and technical services than on building the next fastest box.

"There are people in the government who dearly love their vector machines, and feel that the sudden focus on [massively parallel systems] is diluting vector," Pratt & Whitney's Bradley says. "I don't know if I believe that. This is a politically challenging as well as a technically challenging debate." One that will take many years--and dollars--to play out.

Illustration by Steven Lyons
Photo of Picklo by Bob Stefko
Photo of Turek by Kyoko Hamada
