Behind The Curtain At Intel Research

The chip vendor rolls out research prototypes in chips, mobile computing, and tera-scale computing.

Antone Gonsalves, Contributor

June 21, 2007


Three words sum up the dozens of research projects at the core of Intel's future microprocessors and chipsets: power, efficiency, and mobility.

The chipmaker on Wednesday showed off researchers and prototypes at the company's Santa Clara, Calif., headquarters to try to dazzle reporters and analysts with what Intel hopes will be the next big thing.

And by big, Intel means teraflop chip performance as the baseline: the ability to crunch a trillion floating-point calculations per second. Among the technologies on display was what Intel called the "first tera-scale silicon prototype."

A prototype of Intel's 80-core chip.

The 80-core processor, 13 millimeters by 22 millimeters, was equivalent in computing power to a teraflop supercomputer that 10 years ago would have filled a room 40 feet by 10 feet, Paolo Aseron, a hardware engineer at Intel's microprocessor lab in Hillsboro, Ore., said. "This is a proof of concept, numbers-crunching monster."

The chip was built using a 65-nanometer manufacturing process, and each core has 5 Kbytes of cache and two floating-point units. Compared with Intel's quad-core processors today, the prototype has 40 times the processing power, Aseron said.

Tera-scale computing is the future for Intel chips and platforms. The company currently has more than 100 R&D projects worldwide dedicated to addressing hardware and software challenges associated with systems that would be based on processors with dozens of cores.

Justin Rattner, chief technology officer for Intel, told attendees the company's first tera-scale processor, codenamed Larrabee, would deliver "well in excess" of a teraflop of performance. The processor is set for release in 2010 but could show up in 2009, he said.

To help software developers deal with tera-scale systems, Intel has developed a programming model called Ct, which extends the C and C++ programming languages. In essence, the model handles the complexity of parallelization: spreading the workload of a task across multiple processor cores to produce results faster.

Ct makes it possible for developers to program as if they were writing applications for one core, Mohan Rajagopalan, a research scientist at Intel's Santa Clara lab, said. The code is then optimized for multiple cores at compile time and again at runtime.
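The division of labor Rajagopalan described can be sketched in plain C++; the function names and thread-based splitting below are illustrative stand-ins, not Ct syntax. The developer writes the single-core loop, while the second function shows roughly the bookkeeping a parallelizing compiler and runtime take on.

```cpp
#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

// What the developer writes: a single-core view of the work.
void scale_serial(std::vector<float>& data, float factor) {
    for (float& x : data) x *= factor;
}

// Roughly what a parallelizing compiler and runtime must arrange on the
// developer's behalf: the same loop split across hardware threads.
void scale_parallel(std::vector<float>& data, float factor, unsigned cores) {
    if (cores == 0) cores = 1;                 // guard against a zero thread count
    std::vector<std::thread> workers;
    std::size_t chunk = data.size() / cores + 1;
    for (unsigned c = 0; c < cores; ++c) {
        std::size_t begin = c * chunk;
        std::size_t end = std::min(data.size(), begin + chunk);
        workers.emplace_back([&data, factor, begin, end] {
            for (std::size_t i = begin; i < end; ++i) data[i] *= factor;
        });
    }
    for (auto& w : workers) w.join();          // wait for every chunk to finish
}
```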

Intel plans to release a preview of Ct to the open-source community in the near future, Rajagopalan said. "We're still working out the legal issues in making the whole project open source."

In demonstrating possible uses for tera-scale computing, Intel chose video editing and computer game development. The first involved the use of software smart enough to detect patterns within a 90-minute video of a professional soccer game, and extract some of the highlights.

To do that, Intel researchers had to create a model that enables the computer to learn to recognize important plays, much like a spam filter can learn to separate spam from legitimate e-mail. "We can train the computer to detect the highlights based on the model," Xiaofeng Tong, researcher at the Intel China Research Center in Beijing, said.

The demonstration involved highlight-extracting software running on a computer powered by an Intel dual-core chip. The next step, which wasn't demonstrated, would add activity analysis, so the system would know the difference between a foul and a goal. Such a system would need an eight-core processor capable of 100 gigaflops. To perform activity analysis on every play, the Intel model would have to run on a 64-core processor, Tong said.
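Conceptually, the extraction step can be sketched in C++ as follows; the per-segment scoring, which is what Intel's trained model provides, is treated here as already done, and the names are hypothetical.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// One candidate stretch of the match, with a score assigned by a trained model.
struct Segment { double start_sec; double end_sec; double score; };

// Keep only the highest-scoring segments as the extracted highlights.
std::vector<Segment> top_highlights(std::vector<Segment> segments, std::size_t count) {
    std::sort(segments.begin(), segments.end(),
              [](const Segment& a, const Segment& b) { return a.score > b.score; });
    if (segments.size() > count) segments.resize(count);
    return segments;
}
```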

In computer game production, Daniel Pohl, a German developer of videogame technology, demonstrated a "ray-tracing" algorithm that renders the animation artists build with 3-D tools into a 2-D format. This is necessary because computer screens can only display 2-D images.

Today, most artist-created imagery is rendered with rasterization, a method that takes a 3-D scene described in a vector graphics format and converts it into pixels for output on a computer screen. Ray tracing produces a higher-quality image because its algorithms are better at rendering light and shadow, Pohl said. "There are no algorithms that do it right in rasterization."
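The heart of a ray tracer is a visibility test: for the ray behind each screen pixel, find what it hits in the 3-D scene. A minimal C++ sketch of that test against a single sphere, not Pohl's Quake 4 renderer, looks like this:

```cpp
struct Vec3 { float x, y, z; };

static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Does a ray (origin o, direction d) hit a sphere centered at c with radius r?
// A renderer asks a question like this for the ray behind every screen pixel,
// then shades the pixel from whatever surface the ray strikes first.
bool hit_sphere(Vec3 o, Vec3 d, Vec3 c, float r) {
    Vec3 oc = {o.x - c.x, o.y - c.y, o.z - c.z};
    float a = dot(d, d);
    float b = 2.0f * dot(oc, d);
    float k = dot(oc, oc) - r * r;
    return b * b - 4.0f * a * k >= 0.0f;  // real roots mean the ray intersects
}
```

Because each pixel's ray can be traced independently of the others, the workload spreads naturally across many cores, which is part of why ray tracing is a showcase for tera-scale hardware.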

To show the higher-quality images, Pohl used a version of Raven Software's computer game "Quake 4" that he had rewritten. Of course, more detail requires more horsepower, so the game ran on four Intel quad-core processors.

With tera-scale computing comes the need for power efficiency. For some time, Intel has been developing chips that deliver more performance while consuming the same amount of energy as previous generations.

To help continue that trend in tera-scale computing, Intel is developing "adaptive circuits" within a processor that would determine the minimum amount of performance required for a task. "We have a brain in the chip," Bryan Casper, principal engineer for Intel's Circuit Research Lab in Hillsboro, said. All power associated with a task is turned down to a "just-enough" level.

A prototype of the technology was demonstrated in a PCI Express card with a chip that consumed one-tenth the power of a card with today's chip technology, or 2.7 milliwatts versus 20 to 30 milliwatts. Reducing power consumption is critical, given that using today's technology to power a PCI Express card with a bandwidth of a terabit per second would require 100 watts of energy, Casper said.
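The "just-enough" decision can be pictured as a simple selection over operating modes. The C++ sketch below is conceptual only; the real mechanism is adaptive circuitry in the silicon, and the structure, names, and any numbers a caller might supply are illustrative, not Intel's figures.

```cpp
#include <vector>

// One link operating mode: a throughput level and the power it costs.
struct LinkMode { float gbps; float milliwatts; };

// Pick the cheapest mode that still meets current demand. Modes are assumed
// non-empty and sorted from lowest power to highest; if nothing is sufficient,
// fall back to the fastest mode.
LinkMode just_enough(const std::vector<LinkMode>& modes, float demand_gbps) {
    for (const LinkMode& m : modes)
        if (m.gbps >= demand_gbps) return m;
    return modes.back();
}
```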

Outside of supercomputing, Intel is also looking for greater energy efficiency in mobile devices in order to extend battery life. One area where it is looking to cut power consumption is in wireless communications.

Researchers showed a prototype of a Wi-Fi card with firmware that automatically turned off the power when the card was not in use. The technology also knew when to power up to receive or transmit data packets. Such cards use from 50% to 70% less power than standard wireless cards, researchers said.
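The behavior the researchers described, gating the radio's power after a stretch of inactivity and waking it for traffic, amounts to a timer. It is sketched here in C++ with invented names rather than Intel's actual firmware interface.

```cpp
#include <chrono>

// Hypothetical firmware-style logic: cut power to the radio after a stretch of
// inactivity and wake it when a packet needs to be sent or received.
class RadioPowerGate {
public:
    using Clock = std::chrono::steady_clock;

    explicit RadioPowerGate(std::chrono::milliseconds idle_limit)
        : idle_limit_(idle_limit), last_activity_(Clock::now()), powered_(true) {}

    // Called when a packet is sent or received: wake the radio, reset the timer.
    void on_traffic() {
        last_activity_ = Clock::now();
        powered_ = true;
    }

    // Called periodically by the firmware's main loop.
    void tick() {
        if (powered_ && Clock::now() - last_activity_ > idle_limit_)
            powered_ = false;  // no recent traffic, so gate the radio's power
    }

    bool powered() const { return powered_; }

private:
    std::chrono::milliseconds idle_limit_;
    Clock::time_point last_activity_;
    bool powered_;
};
```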

Intel is also developing technology for server chipsets that would work in conjunction with products from power supply, storage, and software management vendors. Together with that third-party technology, the chipsets would let users cap the power use of individual servers and track each server's thermal output, so servers could be dispersed to avoid "hot spots" that require additional cooling, Milan Milenkovic, principal engineer at Intel's Systems Technology Lab in Hillsboro, said.

Computing with smaller devices, called ultra-mobile PCs, is an area that Intel is betting heavily on. One way to shrink devices without compromising computing power is to consolidate components.

One area where that is being done by Intel is in the number of antennas needed to support multiple wireless standards. A device that supported Wi-Fi, WiMax, a 3G cellular network, and Bluetooth, for example, would require eight antennas, Ross Hodgin, a technology marketing engineer for Intel, said.

To consolidate as many antennas as possible into one, Intel is developing a switching device that would change the antenna's radiation pattern depending on which wireless standard was needed. The technology would be made available to device manufacturers. "From the end user perspective, they get flexibility, a smaller form factor, and reduced cost," Hodgin said.

Another consolidation effort is in placing multiple wireless standards on one card, as opposed to having separate cards for each. To make that possible, Intel is developing technology called "media access control," which would ensure that the device isn't trying to transmit and receive multiple wireless standards at the same time, Mathys Walma, senior engineer at Intel's Hillsboro lab, said.
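At its simplest, that coordination is an arbiter that lets only one standard drive the shared hardware at a time. The C++ sketch below is a hypothetical illustration of the idea, not Intel's actual media access control design.

```cpp
#include <mutex>
#include <string>

// A shared arbiter: only one wireless standard may use the combined card at a time.
class RadioArbiter {
public:
    // Claim the shared front end for one standard ("wifi", "wimax", "bluetooth", ...).
    bool acquire(const std::string& standard) {
        std::lock_guard<std::mutex> lock(mutex_);
        if (!active_.empty() && active_ != standard)
            return false;  // another standard is mid-transfer; caller must wait
        active_ = standard;
        return true;
    }

    // Release the front end once the transfer completes.
    void release(const std::string& standard) {
        std::lock_guard<std::mutex> lock(mutex_);
        if (active_ == standard) active_.clear();
    }

private:
    std::mutex mutex_;
    std::string active_;
};
```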

Intel is also looking to build wireless cards with components that are configurable, so the same components could be used to support multiple wireless standards, which would mean smaller cards.

One area of exploratory research that interests Intel is "sensing and modeling everyday behavior in real-world environments," Beverly Harrison, senior research scientist at Intel's Seattle lab, said. What that means is the ability of a computer to perform a task based on cues it receives from people, such as the waving of a hand, the nod of the head, or someday even a facial expression.

Harrison demonstrated a pager-size device that could be hooked on a belt, and sense whether the wearer was bicycling, running, standing, using a Stairmaster exercise machine, or walking. In the demo, the device's choice was displayed on a screen in the form of percentages, since power walking, for example, could be considered half walking and half running.
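Reporting percentages rather than a single label simply means normalizing the model's per-activity scores. A small C++ sketch of that step follows, using a common softmax normalization that isn't necessarily what Intel's device used.

```cpp
#include <cmath>
#include <map>
#include <string>

// Turn raw per-activity scores from a classifier into percentages that sum to
// 100, so "power walking" can show up as partly walking and partly running.
std::map<std::string, double> to_percentages(const std::map<std::string, double>& scores) {
    std::map<std::string, double> out;
    double total = 0.0;
    for (const auto& kv : scores) {
        out[kv.first] = std::exp(kv.second);  // softmax numerator per activity
        total += out[kv.first];
    }
    for (auto& kv : out) kv.second = 100.0 * kv.second / total;
    return out;
}
```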

The idea behind the research is to make computing as unobtrusive as possible in everyday life. "Maybe it's not quite invisible, but it's less observable," Harrison said.

Rattner told attendees that Intel has had nearly 1,000 researchers working in 15 locations worldwide for the last two years. During that time, the company has put in place a process called "path finding" that puts researchers and product development people together for as long as a year and a half to find the best use for newly developed technology in products.

Before, the gap between researchers and product developers was "a bridge too far," Rattner said. Today, there are about 350 people working on such projects, 60% product developers and 40% researchers. "They're beginning to view it as a key improvement," he said.
