Is Shared Storage's Price Premium Worth It?
By George Crump
InformationWeek
Increasing numbers of vendors are encouraging IT professionals to just say "no." As we discussed in our recent article "The Benefits of A Flash Only, SAN-Less Architecture," customers are looking for alternatives to shared storage. Local storage, in the form of internal hard disks and even PCIe SSDs, has emerged as a leading candidate to replace the SAN. Local storage has also developed workarounds for its biggest weakness: lack of shareability.
In a future column we will begin to explore some of the "SAN-less" shared storage options. But how did we get to where we are? Why is the frustration with shared storage so high?
Generally, administrators cite three sources of SAN frustration. First, there is the cost of shared storage, which almost always carries a premium compared to local storage. Second, there is the frustration of having to constantly tune the storage and its supporting infrastructure, something that is increasingly problematic in the ever-changing virtual environment. Finally, there is the frustration over the complexity of day-to-day SAN management.
In this column we will focus on the first frustration, the price premium. The premium price of shared storage is caused partly by the cost of the infrastructure required to share storage: the adapters that go into the servers and the switches that the adapters and the storage connect to. Of course this is data, so everything has to be redundant, which compounds the cost problem.
Another source of the price premium is the cost of the actual storage unit. It also must be highly available, which means multiple ports, power supplies, and storage controllers. Local storage needs these same components too, sometimes even in redundant pairs, but because they all exist inside the server itself, they cost considerably less.
Finally, shared storage almost always includes a variety of storage niceties that may not exist in local storage. Capabilities like unified storage (SAN/NAS), snapshots, replication, and automated storage tiering are commonplace in today's storage systems. While many vendors include these capabilities in the storage system at no additional charge, nothing is actually free; most shared storage vendors hold significantly higher profit margins than their local storage competition.
As we said in "The Benefits of A Flash Only, SAN-Less Architecture," shared storage proponents can no longer claim that the advantage of being shared is enough justification for this premium cost. In many cases they can't claim a performance advantage either. And now that operating systems and hypervisors offer many of the nice-to-have features listed above, those hold less value as well. To justify their high price, shared storage solutions need to focus on one key area: offering greater capacity efficiency than local storage. In other words, doing the same job while requiring significantly less upfront and ongoing storage spending.
It should be able to do this in two areas. First it should be able to reduce the physical capacity footprint required in a shared environment. As we discuss in our article "Which Storage Efficiency Technique is Best," deduplication is an ideal way to reduce storage capacity needs, especially in the virtual environment. Deduplication also greatly benefits from the centralization of data. The more data there is to compare, the greater the chance of redundancy. The technology should become standard on all primary shared storage systems.
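The centralization effect described above can be illustrated with a minimal sketch of block-level deduplication. This is a simplified, hypothetical example (fixed-size 4 KB blocks, SHA-256 fingerprints), not any vendor's actual implementation: each block is stored only once, keyed by a hash of its contents, so the more data the system sees, the more duplicates it can fold away.

```python
import hashlib

def dedupe(data: bytes, block_size: int = 4096):
    """Split data into fixed-size blocks and store only unique ones,
    keyed by a SHA-256 fingerprint of each block's contents."""
    store = {}   # fingerprint -> block contents (stored once)
    recipe = []  # ordered fingerprints to reconstruct the original data
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        fp = hashlib.sha256(block).hexdigest()
        store.setdefault(fp, block)
        recipe.append(fp)
    return store, recipe

# Two hypothetical VM images sharing a common base: the shared blocks
# are stored once, so the pool grows far slower than the raw data.
base = bytes(4096) * 10            # ten identical blocks
vm1 = base + b"A" * 4096           # base plus one unique block
vm2 = base + b"B" * 4096
store, _ = dedupe(vm1)             # vm1 alone: 2 unique blocks
store_both, _ = dedupe(vm1 + vm2)  # both VMs: only 3 unique blocks
```

Centralizing both images on one system lets the second VM add just one new block, which is exactly why deduplication pays off more on shared storage than on isolated local disks.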
Shared storage should also allow better use of capacity, since it can be assigned as needed to a given host. Local storage will almost always waste capacity, and it can't allocate that excess to another server. This granular allocation is especially important with flash storage, since flash capacity is still premium priced. Shared storage can carve up flash to the exact requirements of each connecting host, or it can use it as a global pool accelerating only the most active blocks of storage. As a result, the total SSD investment may be less with shared storage than if flash is purchased for each individual server.
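The arithmetic behind that claim is easy to sketch. The following is a hypothetical illustration (the FlashPool class and the capacity figures are invented for this example, not any product's API): hosts draw from one shared pool on demand, rather than each server buying its worst-case amount of local flash up front.

```python
class FlashPool:
    """A shared pool of flash capacity, allocated to hosts on demand
    instead of over-provisioning every server up front."""

    def __init__(self, total_gb: int):
        self.total_gb = total_gb
        self.allocations = {}  # host name -> GB currently assigned

    @property
    def used(self) -> int:
        return sum(self.allocations.values())

    def allocate(self, host: str, gb: int) -> bool:
        """Grant capacity only if the pool can cover it."""
        if self.used + gb > self.total_gb:
            return False  # pool exhausted; buy more, or reclaim idle space
        self.allocations[host] = self.allocations.get(host, 0) + gb
        return True

# Four hosts share 1 TB. Provisioned locally for worst case
# (say 400 GB each), the same hosts would need 1.6 TB of flash.
pool = FlashPool(total_gb=1000)
for host, need_gb in [("db", 400), ("web1", 150), ("web2", 150), ("vdi", 250)]:
    pool.allocate(host, need_gb)
# pool.used is now 950 of 1000 GB, with nothing stranded in any one server.
```

The point is not the code but the accounting: a shared pool lets unused gigabytes serve whichever host needs them next, while local flash strands its headroom inside each chassis.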
Local storage is not only winning on cost and its newfound ability to be shared; it is also gaining acceptance because of its performance and simplicity. These are topics we will cover in a future article. George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. Storage Switzerland's disclosure statement.