Commentary
7/26/2011 02:35 AM
Charles Babcock

VMware Pricing Outrage: A Closer Look

Many IT admins want to continue over-provisioning memory to virtual machines. But that's not an efficient way to manage the data center.

VMware has changed how it charges for its virtualization software, imposing memory limitations per server CPU based on the type of license you buy. Now that the outcry in the blogosphere has calmed down, it may be possible to take another look at what it means.

The protesters are correct. VMware has changed its pricing scheme to more closely reflect actual usage, and it will raise prices for many customers, as reported in InformationWeek's summary. Most of VMware's gain, however, will come in the future, as both host servers and the virtual machines running on them grow more powerful.

My colleague Jonathan Feldman, an IT director at a city in North Carolina, cites from first-hand experience a familiar practice: software vendors changing pricing schemes for their own benefit while leaving 80% of customers unaffected. I think that is what VMware has done.

However, the approach VMware is taking--capping the amount of memory that can be allocated per CPU based on the customer's license--contradicts advice it has given in the past, advice meant to ease fears about further server virtualization. Amid uncertainty over how well virtual machines would perform, VMware and its third-party installers and consultants urged customers to over-allocate memory as a way of ensuring they would always have enough. The new VMware pricing is based not on actual memory use but on "allocated" memory, an area of hidden inflation in the typical virtualized data center.

Until now, over-allocating virtual memory was harmless over-provisioning. A virtual machine had no need to draw down an allocation that might be twice the RAM it actually used. If above-normal peak usage was occasionally needed, the virtual machine could get the memory, and it seldom generated contention with other VMs, which were also over-allocated. The various applications on a host server tended to have different peak usage times, and the mix evened out rare spikes. This practice was working well--perhaps a little too well. No one was sure how much memory their VMs actually needed.
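
To see how much that habit inflates the licensing base, here is a minimal sketch with invented figures: a host whose VMs are each allocated roughly twice the RAM they typically consume. Under the new scheme, the allocated total, not the consumed one, is what counts.

```python
# Hypothetical VMs on one host, each allocated roughly twice the RAM it
# typically consumes, as the old over-provisioning advice encouraged.
# All figures are illustrative, not measurements.
vms = [
    # (name, allocated_gb, typical_use_gb)
    ("web-01", 16, 7),
    ("web-02", 16, 6),
    ("db-01",  32, 18),
    ("app-01", 24, 10),
]

allocated = sum(a for _, a, _ in vms)  # the base the new licensing counts
consumed = sum(u for _, _, u in vms)   # what the old advice optimized for

print(f"Allocated vRAM: {allocated} GB")                  # 88 GB
print(f"Typical actual use: {consumed} GB")               # 41 GB
print(f"Hidden inflation: {allocated / consumed:.1f}x")   # 2.1x
```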

Some system administrators want to continue over-provisioning. But that practice runs contrary to the new way of managing the data center. Virtualization lets system administrators use physical resources more flexibly and efficiently, and the ultimate goal is to have every resource running near capacity without endangering operations. Admittedly, that's not a layup. But as long as system administrators feel entitled to over-provision virtual machine memory, it won't happen. On the other hand, who wants to be the one to step forward and say they know exactly how much of a fluctuating resource needs to be doled out on a long-term basis?

Servers are getting much more powerful with multi-core designs. And they're getting amounts of memory that dwarf the not-so-distant past, when a standard x86 server came with 16 or 32 GB of RAM. Cisco ships Unified Computing System servers with 384 GB of memory. A more typical amount might be 192 GB, but even so, both figures are a far cry from 32 or 64 GB.
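
Here is a rough sketch of how that memory growth collides with the new licensing. The 48 GB vRAM entitlement per license below matches the initially announced vSphere 5 Enterprise Plus figure as reported at the time; the fully allocated host is an assumption for illustration, not a VMware-published calculation.

```python
import math

sockets = 2                # two-way server
physical_ram_gb = 384      # e.g., a large Cisco UCS configuration
vram_per_license_gb = 48   # assumed entitlement (initial Enterprise Plus figure)

# Old model: one license per socket, regardless of memory.
old_licenses = sockets

# New model: at least one license per socket, plus enough licenses to cover
# the vRAM allocated to running VMs. Assume the host's memory is fully
# allocated, which over-provisioning makes common.
allocated_vram_gb = physical_ram_gb
new_licenses = max(sockets, math.ceil(allocated_vram_gb / vram_per_license_gb))

print(f"Per-socket licenses (old): {old_licenses}")  # 2
print(f"vRAM-based licenses (new): {new_licenses}")  # 8
```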

VMware's recent announcement focused on the upgrade to Version 5 of its core product, Infrastructure, which generates, configures, and deploys ESX Server virtual machines. Two years ago, it moved from Infrastructure 3 to 4, and the software's value increased significantly as customers swapped out a standard two-way, dual-core server (four cores) for a two-way, quad-core server with eight cores, with no change in the software's price. Think of each core as having sufficient CPU cycles for a single VM; that's not a requirement, just an approximate measure. That history is another contributor to the upset this time around. Why can't the value of virtualization software just increase at the pace of Moore's Law, with no price changes?
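
As a back-of-the-envelope illustration of that one-VM-per-core rule of thumb, here is a sketch of how a core-count upgrade doubled the approximate VM capacity bought with each per-socket license, at the same price.

```python
def vms_per_license(cores_per_socket, vms_per_core=1):
    """Approximate VM capacity bought with one per-socket license,
    using the column's rough one-VM-per-core measure."""
    return cores_per_socket * vms_per_core

print(vms_per_license(2))  # dual-core socket: ~2 VMs per license
print(vms_per_license(4))  # quad-core socket: ~4 VMs per license, same price
```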
