7/14/2006 09:45 AM

In Depth: Supercomputers Get A Speed Boost From Specialized Chips

Computer engineers are increasingly using hardware accelerators to break through the limitations of general-purpose microprocessors.

Potential Problems
Faster transistors produce more heat, making it prohibitively expensive to cool them and in some cases limiting performance gains. And as wires get thinner, they offer more electrical resistance, making it harder to communicate across a chip's surface in a single clock cycle. In response, the industry has turned to multicore chips, which address those problems by essentially adding another CPU to the chip without ramping up clock speed. That's restored performance gains and touched off a new battle between Intel and AMD.
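A standard rule of thumb from chip design helps quantify that trade-off (a sketch using the textbook CMOS power model, not figures from the article): dynamic power scales with switched capacitance, the square of supply voltage, and clock frequency.

```latex
% Textbook CMOS dynamic-power approximation (illustrative, not from the article):
%   \alpha = activity factor, C = switched capacitance,
%   V = supply voltage, f = clock frequency
P_{\text{dynamic}} \approx \alpha \, C \, V^{2} f
```

Because sustaining a higher clock frequency generally requires a higher supply voltage, power rises far faster than linearly with clock speed. A second core at the same frequency, by contrast, roughly doubles throughput for roughly double the power, which is why the industry changed course.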

But there are potential drawbacks with multicore chips. Since more processors contend for the same pool of memory--two cores on a chip today, four by next year, and perhaps hundreds within a decade--each has to wait longer for data, causing bottlenecks and diminished performance. What's more, the multicore approach turns PCs and servers into parallel computers, which are notoriously hard to program (see the sketch below). And many programs aren't optimized for multicore chips.
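To see why parallel programs are so error-prone, consider this minimal sketch (an illustration, not code from any vendor mentioned here): two threads increment a shared counter without synchronization, and their updates silently collide.

```c
/* A minimal data-race sketch: two threads increment a shared counter
 * with no synchronization, so updates can be lost.
 * Compile with: cc -pthread race.c */
#include <pthread.h>
#include <stdio.h>

static long counter = 0;  /* shared state, deliberately unprotected */

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000000; i++)
        counter++;  /* read-modify-write: a data race across threads */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* Expected 2000000; a race typically prints something smaller. */
    printf("counter = %ld\n", counter);
    return 0;
}
```

Run repeatedly, the program rarely prints the expected 2,000,000. The fix requires a mutex or atomic operations, and that kind of reasoning must be applied everywhere threads share data, which is why retrofitting existing software for multicore chips is slow going.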

As Intel's and AMD's road maps call for more and more cores on a chip, those challenges will only be aggravated. "Multicore chips aren't a panacea," Turek says. Microsoft has cited the eventual need to program PCs like parallel computers as one reason it's investing millions in high-performance computing. If the industry can find ways to use computational accelerators to add performance to systems without adding heat, parallelism, or power requirements, users could head off some of those problems.

The approach isn't for everyone. Specialized chips can speed up scientific apps "an awful lot," says Luiz Barroso, a distinguished engineer at Google. But mass-market chips have closed the parallel processing gap with the advent of multicore technology, and they save lots of programming time. "It ends up being a better solution overall," he says. "It's very important for us to innovate very fast." Google has watched the development of specialized processors and FPGAs, but those approaches work better for apps with predictable patterns and heavy floating-point math than for Web searches.

Tokyo Tech's supercomputer is the most prominent hardware-accelerated system that's meant for a range of workloads. IBM is applying the technique to a new line of business servers coming this summer that use its new Cell processor, which includes special silicon for floating-point math and also powers Sony's upcoming PlayStation 3 video game console. Sun is working with a research group at the University of California at Berkeley to prototype systems using hundreds of thousands of FPGAs to simulate the behavior of the Internet and design more resilient systems, says Greg Papadopoulos, executive VP of R&D. And MD-Grape 3, a system at Japan's Riken physics and genomics research institute that complements Intel Xeon chips with special accelerators, could reach a petaflop on a specialized molecular dynamics application as soon as next year.

Among the chip companies, AMD last month gave what's been the strongest vote of support to hardware acceleration when it began licensing its Coherent HyperTransport technology, which provides an interface between chips in an Opteron system. The licensing program, called Torrenza, will let other vendors create accelerators that can share memory with Opteron chips. Intel's road map doesn't call for the ability to link co-processors to its chips, a spokesman says, and the company plans to stick with the multicore approach. Intel boards in the '80s and early '90s contained an extra socket for co-processors, but advances in CPU technology led the company to discontinue that approach when it shipped the Pentium chip.

Cray plans to use the Torrenza approach in future systems that combine the best aspects of its XD1 systems, which the company plans to phase out, and its XT3 supercomputer. The XT3 is the centerpiece of a $200 million contract signed last month with Oak Ridge National Lab in Tennessee to build a system that could reach a petaflop by 2008. Cray is also designing a co-processor code-named Scorpio for a system that's contending for a $250 million grant from the Defense Advanced Research Projects Agency, expected to be awarded this month.
