Intel Plans For Graphic Computing Without A GPU

Intel thinks ever more powerful CPUs can handle graphics processing, eliminating the need to use GPUs from companies like Nvidia and ATI.

Antone Gonsalves, Contributor

June 12, 2008

3 Min Read

If Intel has its way, tomorrow's computers will not need a graphics processor. Instead, the machine's CPUs will handle the job of rendering eye-popping graphics.

On Wednesday, Intel used its annual labs open house, held this year at the Computer History Museum in Mountain View, Calif., to push a rendering technique known as "ray tracing" as the eventual replacement for raster graphics, the rectangular grid of colored pixels that makes up computer graphics today.

For high-performance graphics rendering in PC games or applications used by creative professionals, today's computers use a separate graphics processing unit, or GPU, from companies like Nvidia and ATI, which is owned by Intel rival Advanced Micro Devices. Intel does not make separate graphics cards today. Instead, the chipmaker supports cards from Nvidia and ATI.

Using a separate GPU means the computer's general-purpose CPU can perform other important tasks, so PC performance doesn't suffer. But the way Intel sees it, this approach has several flaws. "We believe that new architectures will deliver vastly better visual experiences," said Justin Rattner, Intel's chief technology officer.

First, a raster graphics image, also called a bitmap, cannot produce images of the same quality as ray tracing, which simulates the paths that light rays take as they bounce around within a scene. The latter is capable of reproducing a wide variety of optical effects, such as reflections off objects, depth, and refraction, the bending of light as it passes through transparent materials. Simulating this phenomenon makes it possible to display objects as they would look underwater or through a bubble.
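The operations described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not Intel's implementation: it shows the two geometric calculations a ray tracer repeats for every pixel, testing whether a ray hits a sphere and bouncing (reflecting) the ray off the surface it struck.

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance along the ray to the nearest sphere hit, or None.

    Solves |origin + t*direction - center|^2 = radius^2 for t, a quadratic
    in t. `direction` is assumed to be a unit vector.
    """
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None  # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

def reflect(direction, normal):
    """Bounce a ray off a surface: r = d - 2(d . n)n."""
    d_dot_n = sum(d * n for d, n in zip(direction, normal))
    return tuple(d - 2.0 * d_dot_n * n for d, n in zip(direction, normal))

# A ray fired from the origin straight down the z-axis at a unit sphere
# centered 5 units away hits the near surface at distance 4.
hit = ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0)
```

A full renderer repeats this hit test against every object in the scene, then recursively traces the reflected and refracted rays to build up the final pixel color.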

While ray tracing requires more processing power, that's becoming less of a problem with the growing use of multi-core processors. Also, because the rendering can be done by CPUs, developers can use the same tools and languages they use to build other applications, according to Intel researchers. "It's more flexible, and flexibility is very important in development," Intel researcher Alain Dominguez said.

In using today's GPUs, developers have to learn how to use separate software development kits, compilers, and languages, Dominguez said.

While much of a ray-tracing application can be written in C/C++ with the compilers many software developers already use, a separate "shading language" and "shader compiler" are needed to display some special effects, Intel researcher Daniel Pohl said. However, the language is close to C/C++, so the learning curve is small for many developers.

In addition, ray tracing is highly scalable: rendering speed grows with however many cores are available. Using 16 cores, for example, renders graphics much faster than using one.
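The reason ray tracing scales so cleanly is that each pixel's rays are computed independently, with no shared state, so a frame partitions naturally across cores. A hedged sketch of that decomposition (the `trace_pixel` body is a placeholder for real per-pixel ray work, and a thread pool stands in for the native threads or processes a real renderer would use):

```python
from concurrent.futures import ThreadPoolExecutor

def trace_pixel(xy):
    # Placeholder for the real per-pixel work: fire a ray, find hits,
    # bounce reflections. Each pixel depends only on its own coordinates.
    x, y = xy
    return (x * 31 + y * 17) % 256  # dummy shade value

def render(width, height, workers):
    # Because pixels are independent tasks, the frame splits across
    # however many workers (cores) are available; 16 cores can work on
    # 16 pixels at once.
    pixels = [(x, y) for y in range(height) for x in range(width)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(trace_pixel, pixels))
```

The output is identical regardless of worker count, which is the property that lets the same code speed up as core counts rise.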

Nevertheless, Intel researchers are still working through a number of problems with ray tracing. First is its high demand for computing power. The technique is also best suited for applications where the image can be rendered slowly, such as still images and film and television special effects. It will therefore be a while before the technique is ready for today's fast-paced PC games.
