Intel's latest chip architecture addresses some major impediments to building high-performing, multi-CPU systems.

Art Wittmann

March 26, 2009


Chip junkies (silicon, not potato) have been hearing about Nehalem for two years. It's Intel's latest microarchitecture, and it's one of a number of developments causing servers to morph toward blades. The recurring themes behind the pressure on server architectures are bandwidth and efficiency. These changes are sweeping enough that they should have you rethinking what you want out of servers and how you deploy applications going forward.

Over the past two weeks, Cisco made waves with its unified computing announcements, which call for a single 10-Gb pipe into the server. This week it's Intel's turn. Intel has gotten into a rhythm of alternating announcements of smaller transistor gates (now down to 32 nanometers) and new architectures. Recent architectural announcements have been all about getting more than one processor core onto a chip, an important design change by any measure. But as Intel has been packing more cores onto the chip, it hasn't been changing other system features. Most notably, it has kept its "front-side bus" architecture longer than it should have.

In its existing architecture, an external memory controller acts as a bridge between main memory and processor cores. For New Yorkers and San Franciscans, the term "bridge" is especially relevant: just as traffic over the Bay Bridge or through the Holland Tunnel is always backed up, so it is with Intel's existing architecture. This is one area where Advanced Micro Devices' Opteron had a design superior to Intel's previous Xeons. With Nehalem, Intel follows the Opteron's lead, moving the memory controller onto the chip and adding a high-speed interconnect (QuickPath) by which the processor communicates with other processor chips and I/O controllers.

The result is a lot more bandwidth overall, and a far superior design for multisocket servers. Expect to see servers routinely packing four quad-core chips and a lot of memory.
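One practical consequence: with each socket controlling its own bank of memory, these multisocket machines are NUMA systems, where memory attached to the local socket is quicker to reach than memory hanging off a neighboring socket. Here's a minimal sketch of what NUMA-aware code looks like on Linux using the libnuma library (the 64-MB size and node 0 are illustrative choices, not anything Intel prescribes):

```c
#include <numa.h>     /* libnuma; link with -lnuma */
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* Bail out gracefully on non-NUMA hardware or older kernels. */
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA is not available on this system\n");
        return 1;
    }

    printf("System reports %d NUMA node(s)\n", numa_max_node() + 1);

    /* Allocate 64 MB served by the memory controller on node 0's
       processor package, rather than wherever the OS happens to put it. */
    size_t len = 64 * 1024 * 1024;
    void *buf = numa_alloc_onnode(len, 0);
    if (buf == NULL)
        return 1;

    memset(buf, 0, len);   /* touch the pages so they're actually placed */
    numa_free(buf, len);
    return 0;
}
```

The allocation call itself isn't the point; the point is that where memory lives now matters, and operating systems and hypervisors increasingly schedule work with that topology in mind.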

Intel has gotten very practical about its chip enhancements. The ones I've mentioned here, and some I haven't, are intended to make Intel's chips a better platform for virtualization. Intel also has added instructions that will speed up text-heavy work such as HTML and XML processing. Others coming in future chips will enable faster encryption on the chip.
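For the curious, those text-processing additions are Nehalem's SSE4.2 string instructions. Here's a rough sketch of how a parser might use one of them, via the _mm_cmpistri compiler intrinsic, to scan for markup characters 16 bytes at a time (illustrative only, not production parser code: it assumes the buffer is padded so 16-byte reads past the end are safe, and you'll need an SSE4.2-capable compiler flag such as -msse4.2):

```c
#include <nmmintrin.h>   /* SSE4.2 intrinsics; compile with -msse4.2 */
#include <stdio.h>
#include <string.h>

/* Return the index of the first character in 'text' that appears in
   the 16-byte, zero-padded character set 'set', or -1 if none. */
static int find_first_of(const char *text, size_t len, const char *set)
{
    __m128i needles = _mm_loadu_si128((const __m128i *)set);

    for (size_t i = 0; i < len; i += 16) {
        __m128i chunk = _mm_loadu_si128((const __m128i *)(text + i));
        /* PCMPISTRI: compare every byte of 'chunk' against the set in
           'needles'; returns index of first match, or 16 if no match. */
        int idx = _mm_cmpistri(needles, chunk,
                               _SIDD_UBYTE_OPS | _SIDD_CMP_EQUAL_ANY);
        if (idx < 16)
            return (int)(i + idx);
    }
    return -1;
}

int main(void)
{
    const char doc[32] = "some text <tag>more</tag>";
    const char set[16] = "<>&";   /* zero-padded; zeros end the set */

    printf("first markup char at index %d\n",
           find_first_of(doc, strlen(doc), set));
    return 0;
}
```

A scalar loop does the same job one byte at a time; the new instruction checks a 16-byte chunk against the whole character set in a single operation, which is exactly the kind of inner loop XML parsing lives in.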

See our virtualization report:
Unlocking Virtualization: Facing IT, Business Realities


A third factor that will change server design is the use of solid-state drives (SSDs). While application data and virtual machine images belong on SANs, there's still a need for local storage on servers. Spinning disk drives, which are slow and power-hungry, are now giving way to SSDs. While SSDs are still more expensive than spinning disks on a price-per-byte basis, that gap is closing. The result will be higher-performing servers with no moving parts (except for the fans; these servers will run hot).

Cisco got its news out early by referring to future Nehalem chips. Now expect Dell, IBM, Hewlett-Packard, and others to follow. If there were any doubts that virtualization will change the face of the data center, Intel is doing its best to dispel them.

Art Wittmann is director of InformationWeek Analytics. Write to him at [email protected].
