Intel's new E5-2600 chips set performance records. Here's how they do it.
Aside from that other little announcement out of Cupertino, one of the worst-kept secrets in the tech world was the impending release of Intel's successor to its vaunted Xeon 5600 series processors--the chip that's currently the power plant for the cloud. The new E5's Sandy Bridge microarchitecture is well known, having debuted on Intel's desktop and laptop lines over a year ago; its successor, the Ivy Bridge generation, will show up in consumer devices in a couple of months. Formerly known as Sandy Bridge-EP, the new E5-2600 series chips further cement Intel's preeminence atop the processor landscape. Here are a few reasons why.
1. More Power, Scotty. Not only does the E5 series increase the top Xeon core count by a third, from 6 to 8 per chip, it nearly doubles the L3 cache from 12 MB to 20 MB at only a slight loss in clock speed; the 8-core part tops out at 2.9 GHz. Intel claims these changes combine to yield an 80% performance improvement, and it has a slew of benchmarks to back up the claim. While we regard vendor benchmarks with a skepticism that would make Mark Twain proud, it's an impressive list: 13 records for two-socket systems and two overall records for system-level energy efficiency (workload per watt).
2. Keeps Well Ahead Of AMD. Although AMD introduced its 8-core Bulldozer-based server chips (the Opteron 4200 and 6200 series) last fall, the benchmarks show why Intel was in no hurry to rush out its latest generation. Not only does Intel's hyper-threaded design mean that a typical virtualized workload has twice as many instruction threads available (16 versus 8), but Intel's microarchitecture still seems to get more work done per clock cycle. Early comparative benchmarks show an 8-core E5 delivering roughly 30% to 50% better virtualization and database performance than an 8-core Bulldozer system.
3. Better Power Management. The E5 series has perhaps the most sophisticated on-chip power management and turbo mode of any CPU to date. Heretofore, a processor's turbo mode worked by looking at the instantaneous workload on each core and, much like the old GM V8-6-4 engine, shutting down unused cores while boosting the clock rate on those still running. However, the speed boost was limited so as to stay within the chip's thermal design power (TDP), since dynamic power consumption rises roughly in proportion to clock speed. In contrast, the E5 can exceed TDP for up to 10 seconds, recognizing that the thermal mass of the chip substrate and heat sink will keep active components from immediately reaching critical temperatures. It's a bit like running only the hot water when first filling a bathtub: if you shut it off quickly enough, the tub never exceeds your target temperature. In sum, Intel claims its power management features improve performance per watt by more than 50% and allow a maximum boost of 900 MHz, up from the 266 MHz maximum in the previous Xeon 5690 chips.
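The burst-above-TDP idea can be sketched as a simple energy-budget controller. This is a toy model, not Intel's actual algorithm, and every constant here is invented for illustration:

```python
# Toy model (not Intel's implementation): allow power draw above TDP
# only while a thermal "energy budget"--the bathtub headroom--remains.
TDP_WATTS = 135.0        # sustained power limit (from the article)
BUDGET_MAX_J = 250.0     # assumed headroom from thermal mass, in joules

def allowed_power(budget_j, requested_w):
    """Grant a request above TDP only while headroom remains."""
    if requested_w <= TDP_WATTS:
        return requested_w
    return requested_w if budget_j > 0 else TDP_WATTS

def step(budget_j, drawn_w, dt=1.0):
    """Budget drains when running above TDP, refills (capped) below it."""
    budget_j -= (drawn_w - TDP_WATTS) * dt
    return min(budget_j, BUDGET_MAX_J)

budget = BUDGET_MAX_J
for t in range(15):
    power = allowed_power(budget, 160.0)   # workload requests 160 W
    budget = step(budget, power)
    print(t, power, round(budget, 1))      # 160 W for 10 s, then 135 W
```

With these made-up numbers the chip sustains 160 W for ten seconds, then falls back to its 135 W TDP once the headroom is spent, which is the behavior the bathtub analogy describes.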
4. It's Still Plenty Hot. At 135W per device, the 8-core E5 sets a new standard in absolute power consumption, topping the previous 6-core Xeons and top-end Bulldozer-based Opterons by 5W and 10W respectively. Add support for 24 power-hungry 1600-MHz DIMMs (up from 18 at 1333 MHz) and you're looking at some hot systems. An Australian researcher who tested the new processors in a 2U system said he couldn't imagine how you could feasibly cool a rack full of them in high-density blade servers. Look for server manufacturers to mitigate single-system power demands with Intel's new Node Manager power management technology, which can control power usage across systems and enforce rack-level energy and thermal policies.
5. It's Not Just About CPU Horsepower. Unless you're crunching data for the Large Hadron Collider, you're probably less concerned with SPECmarks and raw processor throughput than with how many VMs you can host on a server. And if you are running CPU-intensive simulations, you may be more interested in the E7 chip line, which was also introduced this week. Designed to support even more memory and systems with as many as 256 sockets, the E7 can be used to build mainframe-like systems and might send shudders down the spines of current Itanium users. For the rest of us, the goal is running dozens of enterprise applications on a single box, and that means keeping those CPUs fed with data, which in turn means a lot of network I/O. Here, the E5 series doesn't disappoint.
It's the first server processor to integrate a PCI Express 3.0 controller directly onto the die (an advance similar to the last-generation Nehalem's move of the DRAM memory controller on-chip). Intel claims the onboard PCI controller reduces latency by 30% and can triple overall throughput. The chip also includes improved I/O technology that moves data directly between the processor cache and the PCI interface without touching system memory. In conjunction with the E5 rollout, Intel introduced a single-chip LAN-on-motherboard (LOM) 10-Gbps Ethernet controller using a low-cost, low-power copper interconnect (10GBASE-T). This means your next server will be able to saturate a 10-Gigabit link, an interface that will come for free thanks to the LOM silicon.
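A quick per-lane calculation shows why the PCIe 3.0 jump matters for 10GbE. The encoding rates below are from the PCIe specifications; treating them as the only overhead is an idealization:

```python
import math

# Idealized per-lane rates: PCIe 2.0 signals at 5 GT/s with 8b/10b
# encoding, PCIe 3.0 at 8 GT/s with 128b/130b encoding. Real links
# lose a bit more to packet and protocol overhead.
PCIE2_GBPS_PER_LANE = 5.0 * (8 / 10)      # 4.0 Gbps
PCIE3_GBPS_PER_LANE = 8.0 * (128 / 130)   # ~7.88 Gbps

def lanes_needed(link_gbps, per_lane_gbps):
    """Minimum lane count to carry a given link rate."""
    return math.ceil(link_gbps / per_lane_gbps)

print(lanes_needed(10.0, PCIE2_GBPS_PER_LANE))  # 3 lanes for 10GbE on PCIe 2.0
print(lanes_needed(10.0, PCIE3_GBPS_PER_LANE))  # 2 lanes on PCIe 3.0
```

Per-lane throughput roughly doubles between generations; Intel's "triple overall throughput" claim presumably also reflects having more lanes hanging directly off each socket, a detail this sketch doesn't model.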
As a package, Intel's latest Xeon chips sport an array of features clearly designed for the era of massive virtualization, 10-Gigabit Ethernet consolidation, and sophisticated power management. If you're building a private cloud, where a rack full of servers handles workloads that just a few years ago took an entire data center, the Xeon E5 series is the perfect foundation. If you're just starting down the virtualization path and private cloud is still just a buzzword, these new chips may be overkill.