But could this architecture be getting another chance to win over enterprise data center architects?
Just maybe. Much has happened since blades first entered IT pros' collective consciousness--not least, the advent of x86 server virtualization. Meanwhile, vendors have addressed many of the problems that plagued earlier systems, with help from the latest Intel and AMD processor platforms in two- and four-socket configurations, plus extended memory architectures. Now with the recent entrance of networking giant Cisco, activity in the blade market is really heating up, and enterprise IT managers have new reasons to check out the form factor.
"Blade server technology really benefited from complementary technologies such as virtualization," says John Marchese, VP of sales engineering at integrator BlueWater Advanced Technology Experts. "Until recently, the actual blade architecture has remained somewhat constant--CPU, memory, NIC, HBA. But technologies such as Fibre Channel over Ethernet, blade server management, converged adapters, and virtual I/O are revolutionizing blade architectures."
These unified systems comprise the blades themselves; the chassis, which houses the blades and provides integrated power and connections as well as interchassis links; communications blades; power and cooling, including the innovative liquid-cooled designs we discuss in our Practical Analysis column; and internal or external storage.
In August, InformationWeek Analytics surveyed 432 business technology professionals on their data center convergence plans. When asked about their organizations' interest in new, single-manufacturer blade systems, such as those from Cisco, HP, and IBM, to support a private cloud strategy, just 13% expressed no interest. Full results of that survey will be out later this month, but one interesting data point for high-end server vendors is that just 3% say they're not at all interested in a unified storage and data network.
Most of those not moving now are waiting for money to free up.
Speaking of blade vendors, HP held the No. 1 position in the blade server market in the first quarter of 2010, according to IDC, with a 56% revenue share. IBM had 24%. Other vendors that should be considered for any blade server system short list include Dell, Oracle (via its acquisition of Sun), and, as of last year with the introduction of its Unified Computing System, Cisco.
To determine blades' place in your enterprise, it's important to first look at what's driving server purchasing. In our March InformationWeek Analytics 2010 State of Server Technology Survey, the majority of respondents--60%--said consolidation via increased use of virtualization was one of the factors driving their server hardware purchase or upgrade plans. Forty-eight percent of survey respondents said hardware consolidation was a factor, while 46% cited a need for more processing capacity.
Of the various server form factors at respondents' organizations, blades are used extensively by 36% and not at all by another 36%. 2U servers are used extensively by 55% of respondents and in a limited fashion by 28%. IDC pegs blades at about 15% of total server sales.
For Robert Wall, director of IT operations at Covenant Retirement Communities in Chicago, blades are a fit for almost all of his organization's server needs. "I continue to value the better remote management, greater power efficiency, fewer data and power connections, and smaller space requirements of blade servers versus rack servers," Wall says. "And that ability to slide one out, take it to another data center, and slide it back in is a beautiful thing."
One of the knocks against the earlier generation of blade servers was that they were underpowered, especially for the compute-intensive applications buyers wanted to run. Wall, for one, is confident enough in the capabilities of the current blade crop that he's betting almost the whole server farm on them.
"We use blades for almost every need," he says. "We only have eight total rack servers surrounded by 64 blades, and I am impatiently waiting to retire six of those eight rack servers. The remaining two rack servers have direct-attached SCSI tape libraries. Otherwise, I would eliminate all rack servers."
Covenant Retirement Communities started with HP p-Class blades about five years ago. At peak, Wall says, the organization had five p-Class enclosures running about 60 p-Class blades. Today, it's phasing out the p-Class blades in favor of c-Class devices, which offer better power/cooling capacity, fabrics, and fabric wire speed.
Platform For Virtualization
Covenant's blade servers host, among other things, the company's VMware ESXi 4.1 virtualization infrastructure. At company headquarters, a c3000 enclosure with four older c-Class blades runs a domain controller, a file/print server, and various test and development servers on ESXi, Wall says. The p-Class blades tend to have one processor, 8 GB of RAM, two hard drives in RAID 1, and two NICs. Most of Covenant's c-Class blades have two processors, 16 GB to 32 GB of RAM, and four NICs. "We are moving to an ESXi 4.1 grid concept, where most of our blades will boot into ESXi from a USB drive in the blade," Wall says. "The goal is to get rid of the cost, power, heat, and failure of hard disk drives."
In our August InformationWeek Analytics 2010 Virtualization Management Survey, 45% of respondents said they expect to have more than half their production servers virtualized by the end of 2011, and 24% pile on 10 or more production VMs per host--7% run more than 40 per host. Given that level of popularity, not to mention load, it's no wonder market observers insist that virtualization has become a main driver not only for blades in the data center but also for the relatively recent jump in blade innovation.
"As virtualization came in, it drove a couple of needs that blades had to grow up to accommodate," notes Mike Roberts, a Dell product manager responsible for the company's blade server line. "First was the memory capacity, and the other part of it was really I/O connectivity--network connectivity and the ability to connect to back-end storage."
That's not to say that older blade systems can't be upgraded to meet evolving needs. Indeed, flexibility is one of the most attractive features of the blade architecture for Covenant Retirement's Wall. "One major advantage of blades that is often overlooked is the easy ability to pull a blade, pop the cover, add or subtract hardware, replace the cover, and slide it back in," he says. "We also like the easy ability to send three new blades to the Denver data center, have 'remote hands' slide them in, and have remote hands pull three older blades and box them up to send back to us. We refurbish them and place them in our Chicago data center. This allows us to keep the Denver data center blade inventory fresh and vital and brings the older, out-of-warranty blades back to Chicago where we can easily handle them."
New Kid On The Block
A core network upgrade led Pitt Ohio Express back to blades--and to blade newcomer Cisco--after a less-than-satisfying experience with an earlier generation of the servers from another vendor. Pitt Ohio Express, a transportation company with 21 remote facilities scattered from Chicago to New Jersey to the Carolinas, was looking to move forward with its virtualization efforts, which required more horsepower and more compute resources.

Pitt Ohio was mainly an HP shop, and the company initially planned to add ProLiant rack-mount servers to meet increased demand. However, while working with Cisco on a 10 Gigabit Ethernet deployment and an implementation of the vendor's Nexus switch, Pitt Ohio's IT team became interested in Cisco's UCS platform and its blade components. "When [Cisco] started to explain about the UCS, we started to do more research on the blade systems," says Jules Thomas, an IT systems engineer with Pitt Ohio. "And since we decided to look at UCS, we decided to look at HP's blade environment, too." As Thomas compared HP's blades with UCS, the integration of UCS with the Nexus system proved a major draw.
Pitt Ohio's IT team made the move from HP's to Cisco's server platform. The company uses two UCS B-Series 5108 blade server chassis and six UCS B200 M1 blade servers, with each blade hosting more than 20 VMs. Mike Cisek, director of infrastructure and operations at Pitt Ohio Express, says UCS has centralized and simplified management while reducing cabling connections and the associated interface costs. But for Pitt Ohio, it's UCS as a whole--not the blade form factor itself--that's the appeal.
"If the UCS didn't exist, we would still not have chosen to go with HP blades or HP servers," Cisek says. "Even with the last-generation change of [HP] incorporating Cisco networking into the blade chassis, it just didn't make sense from a cost standpoint."
Blades Sharpen Disaster Recovery
The National Institute of Arthritis and Musculoskeletal and Skin Diseases was an early adopter of blade servers, and CIO Robert Rosen says the agency is looking to acquire new blade systems for its disaster recovery site. NIAMS uses IBM pSeries servers, and Rosen is considering pSeries blades for the colocated site to leverage work already done on the platform.
Rosen says one often-overlooked stumbling block with blade servers, especially in secondary data centers, is the heat they generate because of their density. "You've got really the hottest pieces of the computer jammed into these chassis," he says. "You've got the CPUs, the things that are going to generate the most heat, and you pack them pretty densely."
Ken Miller, data center architect with independent grid operator Midwest ISO, agrees. "Blade servers can be an excellent way to reduce footprint," he says. "However, careful planning is critical. It is very easy to drive more power consumption, heat generation, and airflow on a per-server basis with blades than with traditional rack servers."
While blades can provide better compute-per-watt efficiency, Miller says, they're environmentally demanding, and "overstuffing" is always a risk. "A small telecom closet could easily be overwhelmed," he says. "Blade servers were designed for the modern data center. If you're dealing with environmental extremes, you may want to stick with rackmounts."
Dell's Roberts says issues such as heat have to be considered within the larger context. "If you're comparing apples to apples--16 rack-mount servers compared with 16 blades--the blades need less space," he says. "They're going to consume between 20% and 30% less power, and they're going to need about 30% less airflow to cool because of the efficiency in the design."
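Roberts' percentages lend themselves to a back-of-the-envelope comparison. The sketch below is illustrative only: the 400-W per-server draw is a hypothetical round number, not a vendor specification, and only the 20%-30% savings range comes from his claim.

```python
# Rough power comparison of 16 rack-mount servers vs. 16 blades,
# applying the 20%-30% savings cited above.
# WATTS_PER_RACK_SERVER is a hypothetical round number for illustration,
# not a measured or vendor-published figure.

RACK_SERVERS = 16
WATTS_PER_RACK_SERVER = 400  # assumed average draw under load

rack_total_watts = RACK_SERVERS * WATTS_PER_RACK_SERVER

# Bound the equivalent blade deployment using the claimed savings range.
blade_low = rack_total_watts * (1 - 0.30)   # best case: 30% savings
blade_high = rack_total_watts * (1 - 0.20)  # worst case: 20% savings

print(f"16 rack servers: {rack_total_watts} W")
print(f"16 blades:       {blade_low:.0f}-{blade_high:.0f} W")
print(f"Savings:         {rack_total_watts - blade_high:.0f}-"
      f"{rack_total_watts - blade_low:.0f} W")
```

Under those assumptions, 16 rack servers draw 6,400 W while the same count of blades lands between roughly 4,480 W and 5,120 W, a savings of 1,280 W to 1,920 W before even counting the reduced airflow requirement.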
Blades To The Rescue
Blades aren't just for big enterprises. While it may be a stretch to say these servers rescued Amver's data centers, the model has been a boon for the global ship reporting system. Sponsored by the U.S. Coast Guard, Amver, short for Automated Mutual-Assistance Vessel Rescue System, is a voluntary, computer-based ship reporting system used worldwide by search-and-rescue authorities to arrange assistance for people in distress at sea. More than 19,000 vessels from nations around the world are enrolled in the Amver network.
Given the life-and-death nature of that mission, uptime is critical. The group decided to replace its servers with blades during a technology overhaul that included hardware, the database management system, and the software that supports the organization. Amver implemented four HP blade systems: one each for production, standby, disaster recovery, and development and testing. Each server has two quad-core 2.66-GHz 64-bit Xeon processors, SAS drives, and two Gigabit Ethernet NICs.
Delfina Tomaini, USCG Amver program officer, says blade servers and virtual servers are the standard for a common operating environment at the Coast Guard Operations Systems Center. The move to blades, Tomaini says, has resulted in improved performance and reduced costs. "The OSC provides physical blade servers and virtual servers in a highly available environment," she says, adding that the agency has seen benefits in terms of an efficient and consistent approach to server deployment, which equates to lower total cost of ownership and reduced infrastructure on a common hardware platform.
Indeed, management systems are key to getting a high return on blade system purchases, says NIAMS' Rosen, who adds that suppliers are rising to the occasion. "All of the vendors are now looking at [management] because they've come to realize that a rack full of blade servers, that's a lot of computers to manage, not to mention a lot of heat to get rid of," he says. "It's interesting to look at the ratio of system administrators to the number of systems they manage. You really have to know what's going on with the servers, how they're configured, and everything else. But what is clear is that you need management tools to be effective."
A dearth of effective management tools translates into the need for more system administrators. "You need a lot of automation to run these things effectively," Rosen says. Pitt Ohio's Cisek has been able to quantify savings in that area with Cisco's UCS: "We have three people managing almost 200 servers, and that's key to us."
Ultimately, enabling automation and efficiency will be critical to blades finally gaining traction in enterprise data centers. It may have taken a decade and the rise of virtualization to earn blades that place, but better late than never.