Many IT admins want to continue over-provisioning memory to virtual machines. But that's not an efficient way to manage the data center.
VMware has changed how it charges for its virtualization software, imposing memory limitations per server CPU based on the type of license you buy. Now that the outcry in the blogosphere has calmed down, it's worth taking another look at what the change means.
The protesters are correct: VMware has changed its pricing scheme to more closely reflect actual usage, and it will raise prices for many customers, as reported in InformationWeek's summary, although most of VMware's gain will come in the future, as both host servers and the virtual machines running on them become more powerful.
My colleague Jonathan Feldman, an IT director at a city in North Carolina, cites from first-hand experience the practice of software vendors changing pricing schemes for their own benefit--while leaving 80% of customers unaffected. I think that's what VMware has done here.
However, the approach VMware is taking--putting a limit on the amount of memory that can be allocated per CPU based on the customer's license--contradicts the advice it gave in the past to ease fears about expanding server virtualization. With uncertainty over how well virtual machines would perform, VMware and its third-party installers and consultants urged customers to over-allocate memory as a way of ensuring they would always have enough. The new VMware pricing isn't based on actual memory use but on "allocated" memory, an area of hidden inflation in the typical virtualized data center.
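To see how an allocation-based scheme adds up, here's a minimal sketch in Python. The per-license entitlement figure, the pooling behavior, and the function names are illustrative assumptions, not VMware's published terms; the point is only that the bill is driven by what's allocated, not by what's actually used.

```python
# Sketch of an allocation-based (per-CPU memory entitlement) licensing model.
# The entitlement figure below is a hypothetical placeholder, not VMware's
# published number -- the real tier values come from the license guide.

HYPOTHETICAL_VRAM_PER_LICENSE_GB = 48  # assumed allocated-memory entitlement per CPU license

def licenses_needed(vm_allocations_gb, cpu_sockets, vram_per_license_gb):
    """Licenses required: at least one per CPU socket, plus enough to cover
    the total *allocated* (not actually used) memory across the VMs."""
    total_allocated = sum(vm_allocations_gb)
    for_allocation = -(-total_allocated // vram_per_license_gb)  # ceiling division
    return max(cpu_sockets, for_allocation)

# 20 VMs, each allocated 16 GB (perhaps double what they really touch),
# on a two-socket host:
vms = [16] * 20
print(licenses_needed(vms, cpu_sockets=2,
                      vram_per_license_gb=HYPOTHETICAL_VRAM_PER_LICENSE_GB))
# -> 7 licenses driven by allocation, versus 2 under a purely per-socket model
```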
Until now, over-allocating virtual machine memory was a harmless form of over-provisioning. A virtual machine had no need to draw down an allocation that might be twice as much as the RAM it actually used. When above-normal peak usage was occasionally needed, the virtual machine could get the memory, and it seldom generated contention with other VMs, which were also over-allocated. The various applications on the host server tended to have different peak usage times, and the mix evened out rare spikes. This practice was working well--perhaps a little too well. No one was sure how much memory their VMs actually needed.
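A toy simulation illustrates why that over-allocation rarely hurt anything in practice. The usage figures and the 5% peak probability below are made-up illustrative numbers, not measurements; they simply show that independent peaks seldom stack up to anything near the sum of the allocations.

```python
# Toy illustration (not a capacity-planning tool) of why over-allocation
# rarely caused contention: individual VM peaks seldom line up, so the
# host's concurrent demand stays well below the sum of allocations.
import random

random.seed(1)
ALLOCATED_GB = 16      # per-VM allocation
TYPICAL_USE_GB = 8     # what a VM actually touches most of the time (assumed)
PEAK_USE_GB = 14       # occasional spike (assumed)
VMS = 20
SAMPLES = 1000

worst_concurrent = 0
for _ in range(SAMPLES):
    # each VM independently has a ~5% chance of being at peak in any sample
    demand = sum(PEAK_USE_GB if random.random() < 0.05 else TYPICAL_USE_GB
                 for _ in range(VMS))
    worst_concurrent = max(worst_concurrent, demand)

print("sum of allocations:", ALLOCATED_GB * VMS)        # 320 GB on paper
print("worst concurrent demand:", worst_concurrent)     # well below 320 GB
```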
Some system administrators want to continue over-provisioning. But that runs contrary to the new way of managing the data center. Virtualization lets system administrators use physical resources more flexibly and efficiently, and the ultimate goal is to have every resource used near its capacity without endangering operations. Admittedly, that's not a layup. But as long as system administrators feel entitled to over-provision virtual machine memory, it won't happen. On the other hand, who wants to be the one to step forward and say they know exactly how much of a fluctuating resource needs to be doled out on a long-term basis?
Servers are getting much more powerful with multi-core designs. And they're getting amounts of memory that dwarf those of the not-so-distant past, when a standard x86 server came with 16 or 32 GB of RAM. Cisco ships Unified Computing System servers with 384 GB of memory. A more typical amount might be 192 GB, but even so, both are a far cry from 32 or 64 GB.
VMware's recent announcement focused on the upgrade to Version 5 of its core product, Infrastructure, which generates, configures, and deploys ESX Server virtual machines. Two years ago, when it moved from Infrastructure 3 to 4, the software's value increased significantly as customers swapped a standard two-way, dual-core server (four cores) for a two-way, quad-core server with eight cores. Think of each core as having sufficient CPU cycles for a single VM--not a requirement, just an approximate measure. That upgrade delivered significantly more value with no change in price, which is another contributor to the upset this time around. Why can't the value of virtualization software simply increase at the pace of Moore's law, with no price changes?
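The rule-of-thumb arithmetic behind that value jump fits in a few lines. The one-VM-per-core mapping is the approximation used above, not a VMware sizing rule.

```python
# Rough guide only: treat each core as roughly enough CPU for one VM.
def vms_per_host(sockets, cores_per_socket):
    return sockets * cores_per_socket

old_host = vms_per_host(2, 2)   # two-way, dual-core: ~4 VMs
new_host = vms_per_host(2, 4)   # two-way, quad-core: ~8 VMs
print(old_host, "->", new_host) # same per-socket license count, twice the VMs
```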