Why are consolidated storage and compute infrastructures hot now? Storage management issues can cause virtualization projects to slow down or stall.
As we discussed in Flash Dependent Storage Systems Take Off In 2012, server and desktop virtualization is responsible for many of the emerging trends in storage this year. One of those trends is the concept of consolidated storage and compute infrastructures. Storage management issues are often what cause virtualization projects to slow or stall; removing those storage-related issues is a high priority.
To help reduce storage management problems, large systems and storage vendors have been offering prepackaged combinations of servers and storage systems. These bundles seem to reduce the complexity around storage management and let organizations roll out new virtualization initiatives faster, but they often develop the same storage management challenges you would have seen if you had started with your own design.
A consolidated storage and compute infrastructure is more than just the prepackaging of servers and storage--it is designed to offer both the compute and the storage within a single element of a cluster. Resources are shared across the elements within that cluster, so that the individual resources of each element are available as an aggregate pool.
There are two methods that we see in the consolidated storage and compute trend right now. The first, as we discussed in Server Virtualization Without A SAN, is a turnkey approach that combines compute, storage, and software in one system. Think of these systems as similar to a scale-out storage system, except the storage nodes can now also host virtual machines. The value of this approach is that as you add nodes to a virtualized cluster, you also add appropriate amounts of compute, memory, network, and storage infrastructure. The system should allow server and desktop virtualization environments to scale without the need for a storage expert.
The second approach is more of a software model that leverages existing servers, networking the storage already inside those servers. The software is typically installed as a virtual appliance that runs as a guest on each host server. The drives inside each server are aggregated with the drives in other servers to provide shared storage access, as well as redundant data protection similar to that of more traditional shared storage environments.
One of the advantages of both approaches is that they can leverage local PCIe-based solid-state disk (SSD) and intelligent data placement, so that highly active data can be read from the PCIe channel instead of from SAS-attached SSDs or mechanical hard drives. That capability moves these systems out of the starter-system category and makes them solutions for larger enterprises looking for high-density virtualization without the storage complexities that often follow it.
While the software-only approach allows the strategy to be implemented in existing virtualized infrastructures, there is some concern about providing predictable storage performance when the environment is placed under stress. Certainly this can be accounted for, but it is another step in the planning and tuning process. The turnkey hardware/software approach should be able to avoid that step, since the vendor can account for maximum load as it designs the system.
The turnkey approach, however, carries a certain amount of inflexibility, and you become dependent on the vendor providing the solution. You need to make sure that you're comfortable with the vendor and are confident that it will provide long-term solutions to meet your business demands. While the mixed-vendor approach, as we discussed in The Storage Hypervisor, may bring additional storage management concerns, it does provide a greater level of flexibility. Which method you choose depends largely on the expertise of your personnel and the time they have available to manage a solution.