Commentary
Jasmine McTigue
8/26/2011 02:24 PM

vSphere 5.0's Early-Warning System

Storage DRS adds abstraction while keeping loads in balance.

VMware's latest vSphere 5.0 has a whole host of new features, but I think the most interesting one is the scheduling engine within Storage DRS.

Essentially, Storage DRS incorporates a new disk object, called a Datastore Cluster, that can consist of multiple data stores. Virtual machine workflows, operations, and scripted API actions can now be directed at a Datastore Cluster instead of a data store, allowing the infrastructure to take notice of I/O load before placing the VM's disk files.
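
To make the idea concrete, here's a toy Python sketch of that abstraction -- not the vSphere API, just illustrative names like Datastore and DatastoreCluster -- showing how a single cluster object can front several dissimilar data stores and pick one for a new disk based on free space and observed latency:

```python
from dataclasses import dataclass

@dataclass
class Datastore:
    name: str
    capacity_gb: float
    free_gb: float
    avg_latency_ms: float   # observed I/O response time

@dataclass
class DatastoreCluster:
    name: str
    members: list

    def place(self, required_gb: float) -> Datastore:
        """Pick a member for a new VMDK: enough free space, lowest latency."""
        candidates = [d for d in self.members if d.free_gb >= required_gb]
        if not candidates:
            raise RuntimeError(f"No datastore in {self.name} has {required_gb} GB free")
        return min(candidates, key=lambda d: d.avg_latency_ms)

# Dissimilar backing stores, one cluster name to target:
cluster = DatastoreCluster("GoldTier", [
    Datastore("iscsi-lun-01", 2048, 900, 6.2),
    Datastore("nfs-vol-03", 4096, 350, 3.1),
    Datastore("fc-lun-07", 2048, 1200, 4.8),
])
print(cluster.place(required_gb=400).name)   # -> fc-lun-07
```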

Why is this so cool?

I've talked before about abstraction being a key virtualization feature. Introducing an additional layer of abstraction between what VMware sees as physical data stores and guest VMs changes the game with regard to storage consolidation. Now we can reference multiple, dissimilar data stores -- delivered via dissimilar means -- just by specifying a single Datastore Cluster name.

Further, once data stores are clustered, a resource-scheduler algorithm begins collecting data on I/O response times and space utilization. That data is used to perform I/O-balance and free-space checks across members of the cluster and to move virtual disks as necessary to keep both resources balanced.
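
The sketch below is a rough approximation of that kind of balance check -- made-up thresholds (80% space, 15 ms latency) rather than VMware's actual defaults:

```python
# Compare space utilization and average I/O latency across cluster members
# and flag any datastore that crosses an illustrative threshold.
def needs_rebalance(members, space_threshold=0.80, latency_threshold_ms=15.0):
    recommendations = []
    for m in members:
        used = 1.0 - m["free_gb"] / m["capacity_gb"]
        if used > space_threshold:
            recommendations.append((m["name"], f"space {used:.0%} > {space_threshold:.0%}"))
        if m["avg_latency_ms"] > latency_threshold_ms:
            recommendations.append((m["name"], f"latency {m['avg_latency_ms']} ms > {latency_threshold_ms} ms"))
    return recommendations

members = [
    {"name": "iscsi-lun-01", "capacity_gb": 2048, "free_gb": 300, "avg_latency_ms": 22.0},
    {"name": "nfs-vol-03", "capacity_gb": 4096, "free_gb": 2100, "avg_latency_ms": 4.0},
]
for name, reason in needs_rebalance(members):
    print(f"consider moving a VMDK off {name}: {reason}")
```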

There are a couple of things to be aware of with Storage DRS. First, it collects space utilization statistics every two hours but checks them and makes recommendations only once every eight hours. The I/O-load evaluation considers the last 24 hours of history and also triggers once every eight hours, so don't expect extremely fast response times. VMware does assure us that these values are customizable; however, you'll certainly want to set a limit on the frequency of moves.
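
If it helps to picture the cadence, here's an illustrative-only set of knobs -- none of these names are real vSphere settings -- that captures the intervals described above plus a self-imposed cap on moves:

```python
from dataclasses import dataclass

# Hypothetical names, used only to reason about the cadence described above.
@dataclass
class StorageDrsTiming:
    space_sample_interval_h: int = 2    # how often space utilization is collected
    evaluation_interval_h: int = 8      # how often recommendations are generated
    io_history_window_h: int = 24       # latency history considered per evaluation
    max_moves_per_evaluation: int = 2   # cap on Storage vMotion churn per pass

timing = StorageDrsTiming()
print(f"Expect at most {24 // timing.evaluation_interval_h} recommendation passes "
      f"per day, each looking back over {timing.io_history_window_h} h of I/O data.")
```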

And, like regular DRS, both automatic and manually approved movement modes are supported, with selectable automation levels that range from very conservative to very aggressive. Storage DRS also supports rules: intra-VM VMDK affinity keeps a machine's disks on the same data store, VMDK anti-affinity keeps a VM's VMDK files on separate data stores, and VM anti-affinity forces entire VMs to occupy separate data stores. That last one can come in very handy.
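
Here's a toy rule checker for those three rule types; the placement maps and function names are mine, not the vSphere object model:

```python
from collections import defaultdict

def check_vmdk_affinity(vm_disks, placement):
    """Intra-VM VMDK affinity: every disk of the VM on the same datastore."""
    return len({placement[d] for d in vm_disks}) == 1

def check_vmdk_anti_affinity(vm_disks, placement):
    """VMDK anti-affinity: each disk of the VM on a different datastore."""
    stores = [placement[d] for d in vm_disks]
    return len(stores) == len(set(stores))

def check_vm_anti_affinity(vms, vm_to_datastores):
    """VM anti-affinity: the listed VMs share no datastore at all."""
    seen = defaultdict(set)
    for vm in vms:
        for ds in vm_to_datastores[vm]:
            if seen[ds]:
                return False
            seen[ds].add(vm)
    return True

placement = {"sql01.vmdk": "lun-01", "sql01_log.vmdk": "lun-02"}
print(check_vmdk_anti_affinity(["sql01.vmdk", "sql01_log.vmdk"], placement))  # True
```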

Just like with regular DRS, when a member of a Datastore Cluster is put into maintenance mode, Storage DRS evacuates all of the live VMs from that data store while ensuring that I/O and space remain balanced across the remaining members of the cluster.
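
Conceptually, the evacuation looks something like this greedy sketch -- purely illustrative, since vCenter does the real work with Storage vMotion and its own cost/benefit analysis:

```python
def evacuate(maintenance_ds, placements, free_gb):
    """placements: {vmdk: (datastore, size_gb)}; free_gb: {datastore: gb free}.
    Reassign every VMDK on the datastore entering maintenance to whichever
    remaining member currently has the most headroom."""
    moves = []
    for vmdk, (ds, size) in placements.items():
        if ds != maintenance_ds:
            continue
        target = max((d for d in free_gb if d != maintenance_ds and free_gb[d] >= size),
                     key=lambda d: free_gb[d], default=None)
        if target is None:
            raise RuntimeError(f"nowhere to place {vmdk}")
        free_gb[target] -= size
        free_gb[maintenance_ds] += size
        moves.append((vmdk, ds, target))
    return moves

free = {"lun-01": 500, "lun-02": 900, "lun-03": 1400}
vmdks = {"web01.vmdk": ("lun-01", 120), "db01.vmdk": ("lun-01", 400)}
for vmdk, src, dst in evacuate("lun-01", vmdks, free):
    print(f"svMotion {vmdk}: {src} -> {dst}")
```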

Storage DRS is essentially conventional DRS applied at the data store level. The compelling business benefit is that it strongly complements other abstraction layers and facilitates the automation of large data store pools. And don't we want all the automation we can get?
