State Of Servers: Beyond The Processor

The value proposition is shifting from the processor to critical peripheral decisions.

Server technology is at a crossroads, as x86 performance enhancements threaten the high-end dominance of RISC/Itanium systems. Commodity servers, mostly in blade form factors, are surging toward ubiquity. The result is that the server value proposition is shifting from the processor to peripheral considerations, such as the ability to effectively implement virtualization, system manageability, and power and cooling efficiency.

Indeed, after acquisition cost, the ability to deploy, manage, and redeploy virtual processors (built from physical or logical cores) on the fly is the single most important factor in server technology today. The uptake of these supporting technologies has been slowed to some degree by constrained IT budgets, set in late 2009, before the green shoots of recovery sprouted. Nonetheless, system consolidation and increased use of virtualization will continue to drive server updates, as IT organizations extract maximum usage from available resources. That's the top-level finding from the 579 business technology professionals who responded to our InformationWeek Analytics State of Server Technology Survey.

"The move to virtual servers for high availability and redundancy has become our goal," says one survey respondent. "Not only does this reduce the hardware footprint, but it also reduces power and cooling needs. We are also looking at solid state drives. This will improve efficiency, reduce power consumption, and increase storage availability."

IT organizations are still buying physical blade, rack, and tower servers, of course. However, in our survey, just 25% of respondents say they plan to increase their overall number of servers in 2010-2011; 31% say they will hold server count constant, and 44% plan to decrease their overall count.

Companies that are upgrading are taking a measured approach. Virtualization adds a nuance: Boosting capacity is no longer just about buying more boxes.

The number of servers at E.ON U.S. continues to grow at 10% or more per year, says Ray Palazzo, an e-business engineer at the energy services company. But E.ON U.S. divides that count into physical servers, which are steadily declining in number, and virtual servers, which are increasing at a faster rate.

Meantime, with the continued consolidation on 64-bit platforms, the company needs more processing and memory on each physical server it buys. The company's average VMware host server now houses 72 GB of RAM and eight or more physical processor cores. And more widespread SAN usage has "virtually eliminated" the need for large internal storage capacity on servers, Palazzo says.

Budgetwise, our respondents display a "hold the line" mentality. Only 11% plan to ramp up server spending significantly in 2010. The majority, 65%, are looking at either flat or slightly increased server budgets.

Virtualization To The Fore

No server feature is in higher demand than virtualization. Cisco, Dell, Hewlett-Packard, and IBM offer customers both VMware and Microsoft's Hyper-V as virtual environment foundations, and also support Citrix's XenServer. The major virtualization platforms are also available on Sun servers, though following its acquisition of Sun, Oracle has positioned Oracle VM as its preferred solution.

Covenant Retirement Communities has a pretty typical setup. It's starting to use VMware's ESXi to virtualize one machine per blade, "to create more of a utility grid of blades, where the single application can be easily moved to another blade for maintenance or repair," says Robert Wall, director of IT operations. "We already do this with large host blades running 20 to 30 VMs, but applying ESXi to single-function blades is new for us," Wall says.

As Wall and his peers throughout the IT community are coming to realize, VMs proliferate like rabbits, increasing the requirement for robust server management tools to discover, provision, and redeploy logical processors. "You end up realizing that virtualization was supposed to solve problems, but now, for every physical resource I used to have--which was by itself a challenge to manage--I've got 10 to 40 times as many virtual resources," says Paul Prince, CTO of Dell's enterprise product group. "The management challenge becomes a hugely growing problem."

That 65% of survey respondents have implemented or plan to implement virtualization for critical applications underscores how critical management of these environments will become. Yet an inherent difficulty in managing virtualized processors is that they're not discrete or isolated; you have to discover their location on the network as well as track usage and location dynamically. Virtualization management is thus elevated from a server issue to a network issue.
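The tracking problem can be sketched with a toy inventory model. All names here are illustrative, not any vendor's actual API; a real shop would pull this data from its virtualization platform's management interface rather than maintain it by hand:

```python
# Toy model of the VM-tracking problem: virtual machines aren't tied to
# one box, so an inventory must be updated as VMs migrate between hosts.
# Class and host names are hypothetical, for illustration only.

class VMInventory:
    def __init__(self):
        self._host_of = {}  # VM name -> current physical host

    def register(self, vm, host):
        """Record a newly provisioned VM on a given host."""
        self._host_of[vm] = host

    def migrate(self, vm, new_host):
        """A live migration changes the VM's network location; management
        tools must observe this to keep monitoring and policy accurate."""
        old = self._host_of.get(vm)
        self._host_of[vm] = new_host
        return old, new_host

    def vms_on(self, host):
        """List VMs currently running on one physical host."""
        return sorted(vm for vm, h in self._host_of.items() if h == host)

inv = VMInventory()
inv.register("erp-app", "blade-01")
inv.register("mail", "blade-01")
inv.migrate("erp-app", "blade-02")   # e.g., moved for blade maintenance
print(inv.vms_on("blade-01"))        # prints ['mail']
print(inv.vms_on("blade-02"))        # prints ['erp-app']
```

The point of the sketch is the last three lines: the answer to "where is this workload?" changes at runtime, which is exactly what makes virtualization management a network problem as well as a server problem.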

"It's a question of how we manage this new environment, where the network and the server are intimately related," says Paul Congdon, CTO of HP ProCurve Networking. "We're definitely pushing the envelope with the toolsets we have today. Visibility into what's going on is really difficult to obtain. Each application may have different requirements, and then, for example, storage and clustering-type applications can have different bandwidth and latency requirements. So now we have more constraints on how we try to meet the needs of those applications."

Many tools are available to let administrators provision servers, storage, and networking fabric in today's data center. But while the server vendors let customers use any tools they want, each vendor pushes its own solution.

HP bases its management and provisioning on technology from Opsware, a company founded by Marc Andreessen of Netscape fame and acquired by HP for $1.6 billion in 2007. Dell promotes Dell Management Console, a provisioning tool based on Symantec's Altiris, and it also lets customers integrate tools from ISVs under the rubric of Dell OpenManage. IBM automates the provisioning process with its Tivoli platform and various WebSphere offerings. Oracle offers Enterprise Manager Ops Center. Regardless of vendor or tool, the objective is the same: dynamic provisioning.

How The Vendors Stack Up

We asked survey respondents to rank the various vendors' value propositions. Dell tops our ranking of x86 server vendors, followed by HP, IBM, Oracle/Sun, and market newcomer Cisco. (Oracle, which finalized its acquisition of Sun in January 2010, continues to sell Sun's boxes, referring to them as "Oracle's Sun servers.") For non-x86 (RISC/Itanium) servers, the ranking order was IBM, HP, and Oracle/Sun.

In terms of particular servers, Dell's PowerEdge servers are the most popular, with HP's ProLiant second. However, looking just at the 269 respondents from companies with 1,000-plus employees, HP's ProLiant comes out on top, followed by Dell's PowerEdge, though the vendor rankings don't change: It's still Dell, HP, IBM, Oracle/Sun, and Cisco for x86; and IBM, HP, and Oracle/Sun for RISC/Itanium.

Predictably, use of Oracle/Sun Sparc Enterprise servers is highest among big companies--33% of companies with 1,000-plus employees use them, compared with 23% among all companies surveyed. Usage of IBM Power systems breaks out to exactly the same percentages. That finding dovetails with the expectation that RISC servers are the bailiwick of big businesses, while Dell's popularity skews toward smaller enterprises.

The bad news for server vendors is that once you get beyond the question of "RISC or x86?" there's not much top-level distinction in the minds of most buyers. That's why vendors focus on nuanced differentiators, such as power and cooling (see story, p. 24). It's also why HP, IBM, Dell, and Oracle/Sun tend to emphasize their broader data center strategies. Meantime, many buyers see differences among vendors in terms of sales and service, not technology.

New Competitors: Cisco And Oracle

It's the relative sameness of the dominant vendors' strategies that created an opportunity for Cisco, which moved into the server market in 2009 with its Unified Computing System (UCS), combining processing, storage, and networking into one physical unit. Cisco's UCS partners include EMC in storage, VMware for virtualization tools, NetApp for data management, and BMC for system administration and configuration. UCS servers are available as blade or rack systems.

Before Cisco's market entry, most server vendors were focused on physical integration, maintains Soni Jiandani, VP of Cisco's server access and virtualization technology group. "There wasn't really a lot of innovation until we started to deploy things like the Unified Fabric, the high-memory expander technology, and the ability to drive virtualization at the I/O level with our virtualized interface card," she says.

While this is typical marketing talk, Cisco's combination of best-of-breed technologies into a single server package was brilliant. It has forced competitors to emphasize how they, too, offer a broad range of tools, even if those tools aren't bundled into one box easily described in a marketing brochure.

But Cisco comes at the server market from its roots in networking, where it's accustomed to commanding premium prices (and profit margins). There will likely be both a learning curve and downward pricing pressure for Cisco, as it competes with conventional server vendors more adept at managing to thinner margins.

And Cisco has a whole lot of brand building to do in servers. Nearly one-quarter (23%) of our survey respondents weren't even aware of Cisco's server market entry, and an additional third (36%) say they aren't considering UCS a purchase option. Another third (32%) plan to learn more about UCS, but only 9% are considering buying it.

Stratogent, an IT hosting company, is taking UCS for a spin. "As an IT service provider, we like to test and possibly deploy new concepts, since customers look to us for guidance, and we would like to verify if at least some of the promises are true," says CEO Chetan Patwardhan.

The pluses Patwardhan sees in Cisco's all-in-one approach: virtualization, instant provisioning, and dynamic configuration of "every single function" between network and storage. "Given the architectural advances in chip and bus design," he says, "one can imagine simply daisy-chaining the required number of boxes."

The potential UCS minuses Patwardhan has heard from Stratogent's integration and operations people: "The inability to pick best-of-breed components and possible waste of purchased UCS resources, since not everything in a footprint could be optimally used," he says. Also, he adds, "no vendor that has similar solutions in any field has been able to guarantee seamless functionality between components, so there is no real cost/time savings in either integration or operations."

Oracle's server strategy requires some unraveling. That's understandable--having just completed its acquisition of Sun in January, Oracle is still a hardware newbie.

Oracle is positioning its UltraSparc servers at the high end as an application platform tuned to run Oracle's software. On the x86 front, it's focusing on 2P (and higher) x86 blades and racks.

Tuning the high-end boxes as Oracle-on-UltraSparc platforms makes sense, though this approach muddies the message that UltraSparc is also an ideal environment for non-Oracle software. On the x86 front, Oracle's plans are little different from those of the company's competitors--aiming more or less at the 2P market.

"Oracle didn't acquire Sun to pick up a few assets and have a little thing on the side," says John Fowler, Oracle's executive VP of systems and former executive VP of Sun's systems group and CTO for its software organization. "This is a centerpiece to the strategy to grow the profitability and revenue of the company. Oracle has more than 300,000 customers that it talks about publicly, and this is multiples of what Sun had. So the business strategy here is to take [Sun's server] products to a larger customer base, do the engineering innovation to make the products work together better with the Oracle stack, and therefore generate a better proposition to gain market share."

Fowler says major upgrades to UltraSparc are coming later this year, which will increase core count and performance. As for the company's Intel/AMD-based systems, he says: "What we're really focused on in the x86 platform is to build enterprise systems that are typically used in clusters."

I/O, Memory, And More

Server design is "the optimization of three different axes"--CPU speed, memory, and I/O performance, says Nigel Dessau, chief marketing officer at chipmaker AMD. "It's the way you balance those that really defines the performance of the server," he says.

Clock speed used to be the limiting factor in server performance, Dessau notes, but today it's I/O, whether you're referring to processor-to-memory communications or external data transfers. Processor-to-memory communication has been facilitated by AMD's integrated memory controller in Opteron, and Intel's QuickPath Interconnect channel in Xeon.

An additional consideration is that some server vendors implement their own enhancements to facilitate "scaled memory." Here, for example, IBM has designed and fabricated an ASIC that lets its new IBM eX5 servers access an extended amount of memory without additional latency. (Cisco has developed its own ASIC to scale the memory in its UCS.)

"We had the first x86 server that was scalable to 16 processors," says Rod Adkins, senior VP of IBM's systems and technology group. "We were the first to do hot-swap memory. With eX5, we're now taking memory scalability to a new dimension." In terms of expandability, this feature ups the maximum system memory from 256 GB for previous-generation servers to 1,536 GB for the eX5 line--a sixfold increase.

While that's more capacity than most customers spec for their boxes, the sweet spot for installed memory has risen, driven by the increased use of virtualization and the rising requirements and number of apps executed on each system. HP has seen customers move from a small number of dual in-line memory modules--four or eight DIMMs--to eight to 16 DIMMs, says Gary Thome, chief architect of HP's infrastructure software and blades group. "That's been a transition driven largely by the multicores and the virtualization going along with it," he says.

HP's ProLiant BL490c is an example of the new generation of servers that fit more memory in small form factors, Thome says. The system can support up to 18 DIMMs and 288 GB of RAM on a half-height blade, and 16 such blades can be installed in a single enclosure.

In our survey, 65% of respondents report buying their x86 servers with greater than 8 GB of memory, and 27% are opting for configurations greater than 32 GB.

Networking speeds, too, are on the rise. While 1-Gb Ethernet remains the norm, both vendors and customers are migrating or planning to migrate to 10-Gb Ethernet on the motherboard. (Eleven percent of survey respondents use 10-Gb Ethernet, and 47% of those using 1-Gb Ethernet are considering migrating to 10-Gb Ethernet.) The management task will become increasingly challenging as networks are stressed by mobile workers, logging on and off at will while demanding full access to corporate resources.

For server vendors, the key will be fielding products with the right combination of memory, on-motherboard networking, and power and cooling--at the right price--to serve what are effectively commodity requirements, all while not appearing to be commodities.
