Are There Too Many Storage Solutions?
By George Crump
InformationWeek
Take, for example, the most basic need of all: a shared storage system to support a virtualized server infrastructure. There are at least three major protocols to consider--Fibre Channel, iSCSI, and NFS--plus a few newer connection options such as shared storage in the hosts and ATA over Ethernet (AoE).
Even the choice of disk drive is up for debate. Do you use high-speed mechanical disk drives, or high-capacity, low-cost mechanical hard drives complemented by some form of solid state disk? And there is a whole range of ways to integrate solid state disk--caching and automated tiering, for example. Don't forget that the flash-only storage system market is also gaining momentum.
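The distinction between those two integration styles is worth pinning down: caching keeps a temporary copy of hot data on flash while the authoritative copy stays on disk, whereas tiering moves the data itself between media based on how hot it is. Here is a minimal sketch of an access-count tiering policy; the class name, thresholds, and capacities are illustrative, not any vendor's implementation:

```python
from collections import Counter

class TieringPolicy:
    """Toy automated-tiering policy: promote a block to the SSD tier
    once its access count crosses a threshold; when the SSD tier is
    full, demote the coldest block currently on flash."""

    def __init__(self, ssd_capacity=2, promote_threshold=3):
        self.ssd_capacity = ssd_capacity
        self.promote_threshold = promote_threshold
        self.heat = Counter()   # access count per block
        self.ssd_tier = set()   # blocks currently living on flash

    def access(self, block):
        """Record an access and report which tier served it."""
        self.heat[block] += 1
        if block not in self.ssd_tier and self.heat[block] >= self.promote_threshold:
            self._promote(block)
        return "ssd" if block in self.ssd_tier else "hdd"

    def _promote(self, block):
        if len(self.ssd_tier) >= self.ssd_capacity:
            # demote the least-accessed block to make room
            coldest = min(self.ssd_tier, key=lambda b: self.heat[b])
            self.ssd_tier.remove(coldest)
        self.ssd_tier.add(block)
```

In this toy model, a block served from the "hdd" tier migrates to "ssd" only after it proves itself hot; real arrays do this with far more sophisticated heat maps and scheduled migration windows.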
The examples of this abundance of choice could fill the rest of this entry. Let's answer the first question for now: Are there too many storage solutions? Although the answer might seem like yes, it's actually no. You would much rather have too many choices than be forced into a single option. The real question, then, is how do you work your way through the maze and come up with the best possible solution for you?
The first step in making the right decision is to realize that there are often multiple right decisions: multiple vendors, and even multiple products within those vendors' lineups, can solve your specific storage problem. Although it happens, it's rare that only one vendor can solve your storage problem.
The number-one influencer in selecting a storage product is your environment. What you have today is going to influence, to a large degree, what you get tomorrow. For example, if you have a huge investment in Fibre Channel storage, host bus adapters, and the all-important knowledge, then making a dramatic shift to iSCSI or NFS probably would not be your best move, even if it looks attractive on paper.
Typically we find three types of organizations. The first type needs to do some form of storage refresh because it has storage coming off lease, fully depreciated, or simply outliving its usefulness. Even these customers most likely should look at solutions that leverage as much of their current infrastructure as possible--and, more importantly, as much of their skill set as possible.
The second type of organization isn't quite ready for a storage refresh. However, it has a specific project in mind--say, server virtualization, a common example today--for which it fears its legacy storage strategy might be either too expensive to implement or not up to performing the task.
One solution to this particular problem might be a storage system that has a heavy focus on solving storage issues caused by server virtualization, or even some of the newer highly integrated platforms that combine server compute and storage capacity for a turnkey virtualized environment.
The third type of organization is one that is essentially stuck: Its current storage investment is not fully depreciated, and the project straining it is not new. Often the performance or capacity requirements of the project grew beyond the initial scope. Occasionally, though, it's a vendor dissatisfaction issue, where the current product isn't living up to expectations or sales pitches.
Although it's certainly not comfortable for the IT department going through it, I find this scenario the most interesting. There is a finite budget available to fix the problem, and to a large extent there is a reputation on the line. The good news, as with the other situations, is that there are plenty of solutions available to fill the gap or to extend the capabilities of the legacy system.
For example, as I wrote in "What is Transparent SSD Caching," a solid-state drive system inserted into the environment could resolve a performance problem and extend the life of the storage system well beyond even original predictions.
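The idea behind transparent caching is that the cache sits in the data path, serves repeat reads from flash, and falls through to the back-end array on a miss, all without the application changing how it reads. A bare-bones sketch of the pattern, assuming a `backend_read` callable standing in for the legacy array (every name here is illustrative):

```python
from collections import OrderedDict

class TransparentReadCache:
    """Toy LRU read cache in front of a slower backing store.
    Callers use read() exactly as before; the cache is invisible."""

    def __init__(self, backend_read, capacity=64):
        self.backend_read = backend_read   # stand-in for the legacy array
        self.capacity = capacity
        self.cache = OrderedDict()         # block -> data, in LRU order
        self.hits = self.misses = 0

    def read(self, block):
        if block in self.cache:
            self.hits += 1
            self.cache.move_to_end(block)  # mark most recently used
            return self.cache[block]
        self.misses += 1
        data = self.backend_read(block)    # miss: fall through to the array
        self.cache[block] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False) # evict least recently used
        return data
```

The second read of the same block never touches the back end, which is exactly how an SSD cache offloads a struggling array: the legacy system keeps the authoritative data while the flash layer absorbs the repeat I/O.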
So choice is good but can cause headaches. The goal of this blog, my company, and many other highly qualified analysts is to help you sort through the maze and pick one of the several solutions that might solve your problem. The good news is that for almost every storage problem there are solutions.
George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. Find Storage Switzerland's disclosure statement here.