Although the language may ease the programming job, it will likely slow system performance if implemented in software alone. Thus, chip makers are exploring ways to add instructions that track memory read and write operations in order to avoid collisions in complex transactions.
"If this is going to work, the machinery for atomic transactions needs to be built as a fundamental piece of the system," said Levin.
"We know of very large hardware companies interested in this work," Birrell added.
Microsoft hopes to aid the hardware effort by trying out different transactional-memory structures on simulated processors. The company, along with Xilinx Inc., is funding a new generation of the Berkeley Emulation Engine used in the university's Research Accelerator for Multiple Processors project. The emulator, which will use a Virtex-5 FPGA, will be built by a Taiwanese board maker and is due to become available by the end of the year.
At that point, Microsoft plans to try out a variety of potential memory architectures to evaluate how well they handle atomic transactions created with its new language.
"When we have some results, we will certainly work with AMD, Intel and anyone in this space to make sure we have a systems solution. It's a multiyear project," said Levin.
The Berkeley emulator could also be used to evaluate parallelism in graphics architectures. Noted graphics researcher Kurt Akeley, who is returning to Silicon Valley from a posting at Microsoft Research's Beijing office, may take up that work, Levin said.
Keeping it simple
However, Levin was sour on the idea of mixing large numbers of different kinds of cores on a processor at this stage. AMD plans to use graphics, x86 and other cores in its Fusion processors starting in 2009, although it has not specified just how many cores it will integrate (see story, page 18).
"We don't even know how to solve the problem of many homogeneous cores yet, so we should not try to bite off the problem of heterogeneous cores," Levin said.
Levin said he did graduate work in the 1970s on parallel programming using 16 PDP-11 systems of varying models.
"Not all of them used the same instruction sets, so it was interesting trying to write a scheduler. We still have this problem today," said Levin, who eventually rose to manage a West Coast R&D lab for Digital Equipment Corp. "In many ways, we haven't advanced a lot in parallel software since the 1980s in terms of the paradigms we offer the programmer, and overall they have not worked terribly well."
Microsoft has a number of other projects in parallel programming spanning work at labs in Cambridge, Mass.; Bengaluru, India; and Beijing. Some of them focus on how to validate and test complex parallel programs.
"We've been doing a lot of fundamental research in software analysis in concurrent programs," Rashid said. "One of the problems of concurrency is that it's even harder to understand what you've done when you are finished. That makes programs more difficult to test."