Hardware & Infrastructure | News | 6/20/2003 11:20 AM

Power Play

A Japanese supercomputer ignites debate over how the government should seed research for the fastest machines--and whether any of it helps businesses

Chrysler Group's recent hits--the Neon, Sebring, and PT Cruiser--are models of affordable style. Sure, they lack the world-class engineering of the Mercedes made by parent company DaimlerChrysler AG. But they've got some snap, do the job, and no one balks at the price. Same goes for Chrysler's Auburn Hills, Mich., data center.

The automaker's most demanding applications--for simulating crashes, road noise, and aerodynamics--run on high-performance computers made from off-the-shelf RISC chips and clusters of Pentium- and Itanium-based PCs. They're maybe half the speed of the traditional Cray supercomputers they replaced but cost pennies on the dollar. "We're doing our part" to help Chrysler cut costs, says John Picklo, high-performance computing manager.

Picklo figures quality-assurance measures have cut down on the number of last-minute simulations the company runs just before a product launch. So far, so good. But performance shortfalls could loom. "We're not seeing any technical barriers so far with clusters," he says. But Chrysler's crash simulations could benefit from more CPUs, and RISC machines may run into bottlenecks fetching data from memory as the chips get faster. "I'm hearing rumblings," Picklo says. "It's a theory--as these processors get faster, we may not be able to feed them."
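
Picklo's worry is, at bottom, a memory-bandwidth ceiling: a processor can sustain only as many operations per second as memory can supply operands for. A rough back-of-the-envelope sketch makes the point; the figures below are hypothetical, not measurements from Chrysler's machines.

```c
/* Illustrative estimate of the "can we feed the processor" question.
 * All numbers are hypothetical examples, not Chrysler data. */
#include <stdio.h>

int main(void)
{
    /* A streaming kernel such as a[i] = b[i] + s * c[i] reads two doubles and
     * writes one (24 bytes) for every 2 floating-point operations. */
    double bytes_per_flop   = 24.0 / 2.0;   /* 12 bytes moved per flop             */
    double peak_gflops      = 4.0;          /* hypothetical CPU peak, GFLOP/s      */
    double memory_bw_gbytes = 6.0;          /* hypothetical memory bandwidth, GB/s */

    double needed_bw          = peak_gflops * bytes_per_flop;      /* GB/s to run at peak   */
    double sustainable_gflops = memory_bw_gbytes / bytes_per_flop; /* what memory can feed  */

    printf("Bandwidth needed to run at peak: %.0f GB/s\n", needed_bw);
    printf("Flop rate memory can feed: %.2f GFLOP/s (%.1f%% of peak)\n",
           sustainable_gflops, 100.0 * sustainable_gflops / peak_gflops);
    return 0;
}
```

With those assumed numbers, memory feeds the chip only a small fraction of its peak rate--exactly the bottleneck Picklo is hearing rumblings about. A faster processor alone doesn't help if it sits idle waiting for data.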

[Photo: John Picklo, high-performance computing manager for Chrysler. Off-the-shelf chips help the automaker cut costs, Picklo says. Photo by Bob Stefko.]

In the United States, supercomputers--the largest and most expensive systems developed to run big science, defense, financial-analysis, and engineering programs--have for the last decade or so been assembled from processors and interconnections originally designed for business computing. "Massively parallel" RISC-based systems and, more recently, giant clusters of off-the-shelf PCs have made supercomputing more affordable, particularly for private-sector use.

But the low-cost approach might be hitting a wall. The launch of the Earth Simulator--a Japanese supercomputer that eschews generic parts and blows away anything else in production--has the U.S. government and IT vendors racing to find new approaches. The performance of the Japanese machine, coupled with dissatisfaction with the efficiency and programmability of commodity systems, has sparked a new debate over whether the United States is on the right track with its supercomputing strategy. Many business users consider commodity clusters the only practical approach. But government users worry that the United States is falling behind. "Over the last decade, declining markets, inequitable trade practices, and limited [Defense Department] support have severely weakened the United States' industrial base for high-end supercomputing," says an April report to Congress calling for more federal funding, prepared by the National Security Agency, the Defense Advanced Research Projects Agency, the Energy Department's National Nuclear Security Administration, and NASA.

Systems under development, supported by federal money and designed for use in government computing labs, won't see daylight for several years. By the end of the decade, though, they could yield answers to some of the trickiest bottlenecks that constrain business users of high-performance systems. "It's an imperative that we maintain computational parity with the rest of the world," says Clark Masters, an executive VP at Sun Microsystems. "The government needs to encourage industry to take more risks and support more research."

A new class of supercomputers on the drawing boards at Cray, Hewlett-Packard, IBM, Silicon Graphics, and Sun will put less emphasis on theoretical peak processing speeds and focus instead on designs that yield real-world productivity--such as running applications at a higher percentage of a computer's top speed and simplifying the complex programming required by massively parallel machines. "These aren't evolutionary systems, but quite radically different from what they have today," says Jack Dongarra, a computer-science professor at the University of Tennessee, who maintains a list of the world's 500 fastest computers. There's good reason to watch what the government is up to in supercomputing: If a new approach succeeds, it could follow innovations such as the World Wide Web, intrusion detection, and massively parallel computing, which started as government projects, then permeated business. "There's always a trickle-down effect from high-performance computing," Dongarra says.
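
A bit of arithmetic shows why the emphasis is shifting from peak speed to delivered performance. The comparison below uses entirely hypothetical machines and efficiency figures; it only illustrates how a system with a lower theoretical peak can still do more useful work if applications run closer to that peak.

```c
/* Sustained vs. peak performance: the metric behind "running applications at
 * a higher percentage of a computer's top speed." Both systems and their
 * efficiency figures are made up for illustration. */
#include <stdio.h>

struct machine {
    const char *name;
    double peak_gflops;  /* theoretical top speed               */
    double efficiency;   /* fraction of peak real codes achieve */
};

int main(void)
{
    struct machine systems[2] = {
        { "commodity cluster (hypothetical)",    2000.0, 0.10 },
        { "custom vector system (hypothetical)", 1000.0, 0.40 },
    };

    for (int i = 0; i < 2; i++)
        printf("%-38s peak %5.0f GFLOP/s, sustained %5.0f GFLOP/s\n",
               systems[i].name, systems[i].peak_gflops,
               systems[i].peak_gflops * systems[i].efficiency);
    return 0;
}
```

By these made-up numbers, the slower-on-paper machine delivers twice the sustained performance--the kind of trade-off the "high-productivity" designs are meant to exploit.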

By the end of the month, Darpa is expected to grant a subgroup of those five vendors about $45 million to develop prototype systems for its High Productivity Computing Systems initiative. Last year, the Defense Department's research and development arm awarded Cray, HP, IBM, SGI, and Sun about $3 million each to conceptualize a generation of "high-productivity" supercomputers for national security and industrial applications that would be available by the end of the decade. The engineers are just starting to get their hands dirty.

Among the contenders: HP Labs is working on a system, code-named Godiva, that uses a new "processor-in-memory" architecture. The approach embeds a RISC chip in main memory, shortening the distance data must travel. Programming-language extensions and new compilers will let applications run in parallel while requiring programmers to know less about how to break applications into relatively independent chunks, says Ed Turkel, a high-performance technical computing manager at HP. The company considers the technology applicable to the large relational-database problems business users face.
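
HP hasn't detailed those language extensions, so the sketch below uses OpenMP purely as a familiar stand-in to illustrate the general idea: rather than hand-partitioning the data across processors, the programmer adds a single directive and lets the compiler and runtime decide how to split the work.

```c
/* Illustration of "less knowledge required about how to break down apps."
 * OpenMP is used only as a generic stand-in; this is not HP's Godiva
 * technology, whose extensions the article does not describe.
 * Compile with, e.g., cc -fopenmp example.c */
#include <stdio.h>
#include <omp.h>

#define N 1000000
static double a[N], b[N], c[N];

/* Manual decomposition: the programmer carves the index space into chunks
 * and assigns each chunk to a worker explicitly. */
static void add_manual(void)
{
    #pragma omp parallel
    {
        int nthreads = omp_get_num_threads();
        int tid      = omp_get_thread_num();
        int chunk    = (N + nthreads - 1) / nthreads;
        int lo       = tid * chunk;
        int hi       = lo + chunk < N ? lo + chunk : N;
        for (int i = lo; i < hi; i++)
            a[i] = b[i] + c[i];
    }
}

/* Directive style: one annotation, and the compiler and runtime decide how
 * to split the loop across processors. */
static void add_directive(void)
{
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        a[i] = b[i] + c[i];
}

int main(void)
{
    for (int i = 0; i < N; i++) { b[i] = i; c[i] = 2.0 * i; }
    add_manual();
    add_directive();
    printf("a[N-1] = %f\n", a[N - 1]);
    return 0;
}
```

The manual version encodes the decomposition logic that massively parallel machines push onto programmers; the directive version shifts that burden to the system, which is the productivity gain HP is after.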
