IT pros who've virtualized upwards of 70% of the data center share how they've solved problems that more cautious adopters have yet to see.
Virtualization is now firmly in the mainstream. In a recent InformationWeek VMware vSphere 5 Survey, 51% of respondents said they had virtualized half the data center. At the same time, some have gone beyond and virtualized more--in some cases, a lot more--of the data center.
In our recent in-depth look at the survey results, we showed how some implementers had pushed beyond the 50% mark--to the 70% mark and beyond; in one case, to 98%.
These implementers had gotten past the questions of how much memory and CPU to put on a host server. They had moved on to the issues of how to bring into harmony all the moving parts (including server I/O) that virtualization pulls together, and how those moving parts were changing data center relationships. We aired their direct observations and conclusions in a feature story Oct. 31. Here are four lessons learned from talking to those skilled implementers.
Lesson 1: Intense virtualization leads to contention between parts.
A heavily virtualized data center has a lot of complex moving parts. CPU cycles used to be a limiting factor, but today's servers can ship with four, six, or eight cores per CPU socket, creating plentiful compute cycles. Likewise, servers can be efficiently utilized by layering on virtual machines, but that leads to contention between storage and network traffic for a limited amount of I/O capacity. The new constraint, said Raymond DeCrescente, CTO at Capital Region Orthopaedics, a 32-physician clinic in Albany, N.Y., is host I/O.
"I did a lot of research. I knew where I wanted to be at" (after his current data center investment), said DeCrescente. Capital Region is a part of the Albany, N.Y., Medical Center, and DeCrescente wanted more of his data center virtualized, with the servers, network, and storage working together as seamlessly as possible. He was willing to pay upfront to avoid having to constantly juggle I/O and network bandwidth contention. If the system could manage those issues even as demand on his systems grew, that would give his five-person IT staff a chance to do less maintenance and invest more time in adding to existing systems or implementing new ones.
Over the last 12 months, he spent $2 million for a Unified Computing System (UCS) supplied by Cisco, EMC, and VMware, with a 10-Gbps Cisco Fibre Channel over Ethernet (FCoE) switching fabric. "Once you set the UCS servers up, it's not a constant job to maintain them. We monitor it all the time ... but the product really maintains itself," he claimed, creating new virtual servers with allotments of network bandwidth and storage through the VMware vCenter console. Meanwhile, a small IT staff works on an electronic medical records system and a disaster recovery system that guarantees DeCrescente can keep supplying services to doctors in surgery, even if the main data center goes down.
Lesson 2: Heavy virtualization changes data center relationships.
A fundamental shift occurs in virtualized data centers as users become accustomed to being granted servers quickly. In some cases, the sense that "virtual servers are free" takes hold faster than weeds can sprout in an untended garden. Lots of requests for servers, if granted, lead to virtual machine sprawl, even if the IT staff assigns each machine a shutdown date somewhere down the road.
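One way to keep those shutdown dates from becoming dead letters is to sweep the VM inventory for machines past their decommission date. Below is a minimal sketch of that idea; the inventory records, field names, and dates are hypothetical, not drawn from any implementer quoted in the story.

```python
from datetime import date

# Hypothetical inventory records: each VM carries the shutdown date
# the IT staff assigned when the server request was granted.
vms = [
    {"name": "dev-web-01", "owner": "marketing", "shutdown": date(2011, 6, 30)},
    {"name": "qa-db-02", "owner": "finance", "shutdown": date(2012, 12, 31)},
]

def overdue_vms(inventory, today):
    """Return the VMs whose scheduled shutdown date has already passed."""
    return [vm for vm in inventory if vm["shutdown"] < today]

for vm in overdue_vms(vms, date(2011, 11, 1)):
    print(f"{vm['name']} (owner: {vm['owner']}) is past its shutdown date")
```

A nightly report like this gives the IT staff a standing list of sprawl candidates without requiring anyone to remember individual expiration dates.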
One way to cope, say some advanced implementers, is to impose a review and authorization process that slows server startup, but that merely replaces the old physical-provisioning delays with new ones. The real enforcement mechanism is the chargeback system, said Fair Isaac's VP of IT, Tom Grahek.
As long as a metering system knows how many hours a server has been used, and which business unit is using it, IT has an effective check on virtual server sprawl. As the bills for use are sent out, "the business unit manager will police that. That creates a lot of efficiency," said Grahek, because IT doesn't have to collect a lot of data and deliberate before saying yes or no to a server request. The business unit manager will do that based on his perception of the business value of the request and the likelihood of its helping or hurting his budget. That's where the decision-making power should reside, said Grahek in an interview.
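The chargeback mechanism Grahek describes reduces to a simple roll-up: meter VM-hours per business unit, multiply by a rate, and send the bill. Here's a minimal sketch of that accounting step; the usage records and the $0.08-per-hour rate are illustrative assumptions, not figures from Fair Isaac.

```python
from collections import defaultdict

# Hypothetical metering records: (business_unit, vm_name, hours_used).
usage = [
    ("marketing", "web-01", 720),
    ("marketing", "web-02", 300),
    ("finance", "db-01", 720),
]

HOURLY_RATE = 0.08  # illustrative cost per VM-hour, in dollars

def chargeback(records, rate):
    """Roll up metered VM-hours into a bill per business unit."""
    bills = defaultdict(float)
    for unit, _vm, hours in records:
        bills[unit] += hours * rate
    return dict(bills)

print(chargeback(usage, HOURLY_RATE))
# marketing: (720 + 300) * 0.08 = 81.6; finance: 720 * 0.08 = 57.6
```

Once each unit manager sees a line item for every running VM, the policing Grahek describes happens at the budget level, with no IT deliberation required.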