Hardware & Infrastructure | News | 6/20/2003 11:20 AM

Power Play

A Japanese supercomputer ignites debate over how the government should seed research for the fastest machines--and whether any of it helps businesses

Sun's proposal applies concepts from its "throughput computing" plan to build Sparc chips capable of executing multiple instruction threads in parallel, and from its Orion project to develop a pretested software stack that can improve system efficiency and simplify programming. "We think we have some aces up our sleeve," says Shahin Khan, VP of high-performance and technical computing. And SGI is developing an architecture that shares memory among its processors like a business-oriented multiprocessing system, but also handles instructions designed for clusters, in which processors don't share memory.

Separately, IBM is researching a "virtual vector architecture" with two national labs that could let government users and business customers in auto manufacturing, aerospace, and oil exploration run classic vector supercomputing applications on its mainstream Power processors. Next year, IBM will start discussing a business computer based on chip technology being developed for "ASCI Purple," a system it's assembling for Lawrence Livermore National Laboratory.

That's just the beginning. The White House; half a dozen federal agencies, including the NSA and National Science Foundation; and nearly 20 of the nation's government computer labs are involved in a flurry of debate, procurements, and proposals for hundreds of millions of dollars in new funding to reinvigorate a high-performance computing industry that's seen as moribund (see story, p. 36). "Government and industry have focused on developing and manufacturing high-end supercomputers based upon commodity components," says the April NSA report to Congress. That's made systems more affordable, but the needs of critical national-security apps "are neither met nor addressed by the commercial sector." The report proposes as much as $390 million a year in new funding.

The Earth Simulator announcement in April of last year threw fuel on the fire. The project, a $350 million, 5,120-processor supercomputer funded by the Japanese government, built by NEC Corp., and stationed in Yokohama, Japan, was designed for a single purpose: to construct a "virtual earth" that models the planet's entire climate, including global temperature, weather, and earthquake patterns. Operating at a top speed of 35.8 trillion floating-point operations per second, the system at its launch dwarfed the next-fastest computer, an IBM machine running at 7.2 teraflops. It was as powerful as the next 12 supercomputers in the world combined. "That was a shot across the bow and a wake-up call to Washington," says Mike Vildibill, director of government programs at Sun.

Worse, the Earth Simulator--housed in a special building replete with earthquake shock absorbers and an anti-lightning system--can sustain 65% of its theoretical peak speed over time, compared with perhaps 5% sustained performance for systems built under the Accelerated Strategic Computing Initiative, a 7-year-old Energy Department program to develop systems capable of safeguarding the nation's nuclear weapons stockpile. That's partly because the Earth Simulator uses parallel vector processing, an expensive, specialized approach that moves data rapidly through a system but has been out of favor in the United States since the late 1980s.
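
The gap behind those percentages is easy to underestimate. As a rough back-of-the-envelope illustration only, using the peak and efficiency figures quoted in this article rather than measured benchmarks, a few lines of Python show how much sustained efficiency can outweigh headline peak speed:

    # Rough comparison of sustained throughput implied by the efficiency
    # figures cited above; illustrative only, since real sustained rates
    # depend heavily on the workload.
    earth_simulator_peak_tflops = 35.8    # Earth Simulator theoretical peak
    earth_simulator_sustained_frac = 0.65

    asci_class_peak_tflops = 7.7          # one Los Alamos Alpha system (see below)
    asci_class_sustained_frac = 0.05      # the "perhaps 5%" figure quoted above

    es_sustained = earth_simulator_peak_tflops * earth_simulator_sustained_frac
    asci_sustained = asci_class_peak_tflops * asci_class_sustained_frac

    print(f"Earth Simulator sustained: ~{es_sustained:.1f} teraflops")    # ~23.3
    print(f"ASCI-class sustained:      ~{asci_sustained:.2f} teraflops")  # ~0.4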

Responding to the Earth Simulator isn't just "a national pride issue," says Dan Reed, director of the National Center for Supercomputing Applications at the University of Illinois and appointee to the President's Information Technology Advisory Committee. It's needed to tackle emerging computing problems in areas such as cosmology, artificial life, nanotechnology, and materials science.

How did the United States get to this point? As defense budgets waned in the '90s, the government shifted money from underwriting vector supercomputers toward high-performance systems built using widely available commercial parts. Supercomputers built with RISC and Intel chips are cheaper to buy and maintain, but they can't simultaneously calculate long lists of numbers, called vectors, or move them through the system in big chunks.
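
To make the distinction concrete: a vector machine applies one operation to a whole list of numbers at once, while a commodity chip of that era largely worked element by element. A minimal sketch, using Python and NumPy purely as a software analogy for the hardware (the array sizes and the multiply-add are arbitrary, illustrative choices):

    # Illustrative only: element-at-a-time (scalar) work versus whole-array
    # (vector) work. NumPy stands in for the hardware; a real vector
    # processor streams such operations through dedicated pipelines.
    import numpy as np

    a = np.arange(100_000, dtype=np.float64)
    b = np.arange(100_000, dtype=np.float64)

    # Scalar style: one multiply-add per loop iteration.
    c_scalar = np.empty_like(a)
    for i in range(len(a)):
        c_scalar[i] = 2.0 * a[i] + b[i]

    # Vector style: the same multiply-add expressed over the entire list
    # of numbers, the kind of operation a vector machine moves through the
    # system in big chunks.
    c_vector = 2.0 * a + b

    assert np.allclose(c_scalar, c_vector)  # same answer, very different path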

What clusters of PCs do well is handle calculations broken into pieces that run on individual processors. That's been a boon for applications such as seismic data processing, gene sequencing, and some engineering simulations. Amerada Hess, the $11.9 billion-a-year oil and gas exploration company, processes its seismic imaging data on hundreds of cheap PC servers. All IT in the oil business is considered overhead, says Vic Forsyth, manager of geophysical and exploration systems. "Every penny you put into a computer is one less penny you put into the ground," he says.
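
The pattern behind that kind of workload is easy to sketch: split the data into independent pieces, farm them out, and never make the workers talk to each other. The sketch below, a hypothetical illustration rather than anything Amerada Hess runs, uses Python's multiprocessing module on one machine as a stand-in for cluster nodes, with process_trace as a placeholder for a real seismic-imaging kernel:

    # Divide-and-conquer pattern a PC cluster handles well: independent
    # chunks, no communication between workers.
    from multiprocessing import Pool

    def process_trace(trace):
        # Hypothetical placeholder for a real seismic-processing computation.
        return sum(x * x for x in trace)

    if __name__ == "__main__":
        # 64 independent chunks of synthetic data.
        traces = [[float(i + j) for i in range(1_000)] for j in range(64)]
        with Pool(processes=8) as pool:
            results = pool.map(process_trace, traces)  # each piece runs on its own
        print(f"processed {len(results)} traces")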

But for other uses--modeling proteins to understand how drugs interact with the body, solving financial equations, and running scientific programs with long decimal strings--clusters don't always make the grade. They're also hard to program, because scientists have to break down their code for systems in which no single processor can see all the memory.
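
That programming burden shows up as explicit message passing. A minimal sketch, assuming the mpi4py package and exactly two processes (launched with something like mpiexec -n 2 python example.py), shows the bookkeeping a shared-memory machine would make unnecessary:

    # Each MPI process owns only its own slice of the data. To use a value
    # that lives in another process's memory, it must be sent and received
    # explicitly; that restructuring is the extra work described above.
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    local_data = [float(rank * 100 + i) for i in range(4)]  # this process's slice

    if rank == 0:
        comm.send(local_data[-1], dest=1)   # hand my boundary value to rank 1
        neighbor = comm.recv(source=1)      # fetch rank 1's boundary value
        print(f"rank 0 received {neighbor}")
    elif rank == 1:
        neighbor = comm.recv(source=0)
        comm.send(local_data[0], dest=0)
        print(f"rank 1 received {neighbor}")
    # On a shared-memory system, 'neighbor' would be an ordinary memory read.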

On the government's agenda: constructing supercomputers that keep the advantages of off-the-shelf parts, with fewer of the headaches. Jack Dongarra's latest biannual Top 500 list--to be unveiled at this week's International Supercomputer Conference in Heidelberg, Germany--includes a new No. 2: two HP Alpha systems at Los Alamos National Laboratory, each running at 7.7 teraflops, that have been combined into one computer.
