PC chips such as Intel's Pentium, and most server processors on the market, use an approach called instruction-level parallelism, which yields fast response times for individual tasks like running a single program. But as computing becomes more distributed, businesses face a growing need to run multiple tasks requested by thousands of users as fast as possible in aggregate. One approach to the challenge is putting multiple computing "cores" on a single piece of silicon, which can nearly double performance but also uses up lots of transistors.
Intel said at its Developer Forum 2003 that a new 32-bit Xeon chip for servers, code-named Tulsa and due in two to three years, will be its first with two cores. A dual-core Itanium 2 chip, code-named Montecito and due in 2005, will be the company's first dual-core 64-bit chip and its first built with a 90-nanometer process. A chip code-named Tanglewood, due after 2005, could include as many as 16 cores. The chip will also likely be able to run more than one "thread," or set of instructions, simultaneously, Illuminata analyst Gordon Haff says. It's being developed with former Digital Equipment Corp. engineers and technology that Intel acquired from Compaq in 2001.
Thread-level parallelism helps keep chips busy processing data instead of sitting idle waiting for it to arrive. It's also a more efficient use of on-chip real estate than building multiple cores. "The movement is toward more thread-friendly programming," Haff says. Intel already offers a technology called Hyper-Threading in its Pentium and Xeon chips.
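The latency-hiding idea behind thread-level parallelism can be illustrated in software terms: while one thread stalls waiting on data, another makes progress. The chips described here do this in hardware, but a minimal Python sketch (with `time.sleep` standing in for a data stall) shows the same effect on wall-clock time:

```python
import threading
import time

def fetch(results, i):
    # Simulate a thread stalled waiting for data to arrive
    # (standing in for a memory or I/O wait).
    time.sleep(0.2)
    results[i] = i * i

def run_sequential(n):
    # One thread of execution: every stall is dead time.
    results = [None] * n
    for i in range(n):
        fetch(results, i)
    return results

def run_threaded(n):
    # Multiple threads: while one waits, the others run,
    # so the stalls overlap instead of adding up.
    results = [None] * n
    threads = [threading.Thread(target=fetch, args=(results, i))
               for i in range(n)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

if __name__ == "__main__":
    start = time.time()
    run_sequential(4)
    seq = time.time() - start

    start = time.time()
    run_threaded(4)
    par = time.time() - start

    print(f"sequential: {seq:.2f}s, threaded: {par:.2f}s")
```

Run with four tasks, the sequential version takes roughly the sum of the waits while the threaded version takes roughly one wait. This is only an analogy: hardware multithreading, such as Hyper-Threading, switches among threads within a single core with no operating-system involvement.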
Sun Microsystems and IBM are also emphasizing multithreading in upcoming designs. IBM's Power5 chip, due next year, will feature two cores, each capable of running two threads. Sun's plans call for a low-power dual-core chip next year and an eight-core, multithreaded chip, code-named Niagara, by early 2006.
Greg Papadopoulos, Sun's chief technology officer, says pushing this kind of "throughput computing" is one of his top three priorities at the company, along with developing a software stack and set of management technologies called N1. "There's a big Internet shift away from designing processors for systems that would support a few hundred users, toward making networking software run more quickly when there's lots of thread-level parallelism," he says. "Hardware systems take a long time to respond to software systems."