In a recent blog entry, StorageTexan asks "why someone would choose to go NFS instead of doing block based connectivity for things like VSPhere?" http://storagetexan.com/2010/03/25/the-debate-why-nfs-vs-block-access-for-osapplications/ and while I gave a brief opinion as a comment on his site, I thought I would take a deeper dive here. Which storage protocol is best for VMware? I have to give the storage community credit: for the most part there is no longer a knee-jerk response to that question. In part this is because most vendors can offer at least two different storage protocols today, and in part I think it's because the large majority of people working at vendors really do have the customer's best interest in mind when it comes to protocol selection. The first fact makes the second fact easier.
While there are some interesting alternatives on the horizon, the choice, for now, comes down to basically three protocols: iSCSI, Fibre or NFS/NAS. The reality is that in many cases the initial protocol selection comes down to what you, the customer, are most comfortable with. While fibre is the performance leader, the IP-based protocols can typically be tweaked to provide most of its benefits. However, as you begin to extend the IP-based protocols you run into much of the same complexity that you do with fibre, because you are essentially designing a storage network that just so happens to run on an IP infrastructure.
If you are not forced to use specialized iSCSI HBAs or higher end Ethernet cards, there should be a cost advantage for IP vs. fibre. However, server virtualization greatly reduces the number of physical servers needed in the first place, so that cost advantage is not as great as it may have been in non-virtualized environments. If you are virtualizing 100 servers across 10 physical hosts, that may be as few as 20 HBAs to purchase. Additionally, as I wrote about in my last entry, FCoE can consolidate the IP and Fibre HBAs even further.
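The consolidation math above can be sketched in a few lines. This is only an illustration; the figure of two HBAs per physical server is an assumption (a typical redundant dual-port setup), not something the article states:

```python
# Back-of-the-envelope HBA count for the consolidation scenario in the
# text: 100 workloads virtualized onto 10 hosts, vs. 100 physical servers.

HBAS_PER_SERVER = 2  # assumed: one redundant pair per server


def hba_count(server_count: int, hbas_per_server: int = HBAS_PER_SERVER) -> int:
    """Total HBAs needed to attach `server_count` physical servers."""
    return server_count * hbas_per_server


physical = hba_count(100)  # 100 un-virtualized servers
virtual = hba_count(10)    # the same workloads on 10 virtualization hosts

print(physical, virtual)   # 200 vs. 20 HBAs -- a 90% reduction
```

Under these assumptions, consolidation cuts the HBA purchase from 200 to 20, which is why the per-port price difference between protocols matters less in a virtualized environment.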
NFS does bring some uniqueness to the equation. First, virtual machines are essentially a collection of files, and NAS/NFS does thin provisioning of VMs natively; setting up shared access from multiple physical hosts also seems more natural with NFS. An area of concern is whether a single NAS head can handle the random I/O requests from potentially hundreds if not thousands of virtual machines. There is concern that this might bog down that NAS head more rapidly than a block based array would. The result could be NAS head sprawl as you try to deal with virtual machine sprawl.
I no longer buy into the simplicity advantage that NAS has. NAS has not become more difficult; rather, iSCSI and Fibre have become easier to use. iSCSI storage systems from vendors focused on the SMB market in particular have made significant ease of use improvements in setting up a shared environment. As I stated earlier, an environment leveraging any of the three protocols becomes more complex as it grows. These are storage networks regardless of what type of cable they run on. As they scale, you are going to need capabilities like VM-aware QoS to maintain service levels to the right applications.
While each protocol has some advantages, it's what the storage suppliers are doing with their systems that seems to be the deciding factor for IT professionals. If they can help you with tasks like VMware storage management via integration, provisioning, data placement and data protection, then those capabilities will drive the decision. The protocol, while important, may be the second decision point and will largely be driven by what storage system you select. An interesting question is at what point you would switch protocols, something we will dive into more deeply in our next entry.
Track us on Twitter: http://twitter.com/storageswiss
Subscribe to our RSS feed.
George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. Find Storage Switzerland's disclosure statement here.