InformationWeek: The Business Value of Technology

2 Storage Management Trends To Watch In 2012


Why are consolidated storage and compute infrastructures hot now? Storage management issues can cause virtualization projects to slow down or stall.




As we discussed in Flash Dependent Storage Systems Take Off In 2012, server and desktop virtualization is responsible for many of the emerging trends in storage this year. One of those trends is the concept of consolidated storage and compute infrastructures. Storage management issues are often what cause virtualization projects to slow or stall; removing those storage-related issues is a high priority.

To help reduce storage management problems, large systems and storage vendors have been offering prepackaged combinations of servers and storage systems. These packages seem to reduce the complexity around storage management and allow organizations to roll out new virtualization initiatives faster, but they often develop the same storage management challenges you would have seen had you started with your own design.

Consolidated storage and compute systems are more than just prepackaged servers and storage--they are designed to offer both compute and storage within a single element of a cluster. They can share resources across the elements within that cluster so that the individual resources of each element are available as an aggregate pool.

There are two methods that we see in the consolidated storage and compute trend right now. As we discussed in Server Virtualization Without A SAN, companies are developing the first approach as turnkey systems that combine compute, storage, and software. Think of these systems as similar to a scale-out storage system, except the storage nodes can now also host virtual machines. The value in this approach is that as you add nodes to a virtualized cluster, you are also adding appropriate amounts of compute, memory, network, and storage infrastructure, as sketched below. The system should allow server and desktop virtualization environments to scale without the need for a storage expert.
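To make the aggregate-pool idea concrete, here is a minimal sketch in Python of how a scale-out cluster's shared resources grow as nodes are added. The Node class, its resource figures, and the aggregate_pool helper are illustrative assumptions, not any vendor's actual design.

```python
# Hypothetical sketch: each cluster element contributes compute, memory, and
# storage, and the cluster exposes the sum of those resources as one pool.
from dataclasses import dataclass

@dataclass
class Node:
    cpu_cores: int      # compute contributed by this element
    memory_gb: int      # memory contributed by this element
    storage_tb: float   # local storage contributed to the shared pool

def aggregate_pool(nodes):
    """Sum each node's resources into a single cluster-wide pool."""
    return {
        "cpu_cores": sum(n.cpu_cores for n in nodes),
        "memory_gb": sum(n.memory_gb for n in nodes),
        "storage_tb": sum(n.storage_tb for n in nodes),
    }

# A three-node cluster, then a fourth node: every resource dimension grows
# in step, which is the appeal of the turnkey scale-out approach.
cluster = [Node(16, 128, 12.0) for _ in range(3)]
print(aggregate_pool(cluster))   # {'cpu_cores': 48, 'memory_gb': 384, 'storage_tb': 36.0}

cluster.append(Node(16, 128, 12.0))
print(aggregate_pool(cluster))   # {'cpu_cores': 64, 'memory_gb': 512, 'storage_tb': 48.0}
```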

The second approach is more of a software model: it leverages existing servers and networks the storage already inside those servers. The software is typically installed as a virtual appliance that runs as a guest on each host server. The drives inside each server are aggregated with the drives in other servers to provide shared storage access, as well as redundant data protection similar to more traditional shared storage environments.
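As an illustration of what that virtual appliance layer does conceptually, here is a minimal sketch that pools the local drives in each host into one shared, redundant capacity figure. The host names, drive sizes, and two-way replication factor are assumptions for the example, not a specific product's behavior.

```python
# Hypothetical sketch: a software layer pools the local drives inside each
# host and applies a replication factor for redundant data protection.
# Host names, drive sizes (TB), and the replication factor are assumptions.
hosts = {
    "esx-host-1": [2.0, 2.0, 2.0],
    "esx-host-2": [2.0, 2.0, 2.0],
    "esx-host-3": [4.0, 4.0],
}

REPLICATION_FACTOR = 2  # each block is kept on two different hosts

raw_tb = sum(sum(drives) for drives in hosts.values())
usable_tb = raw_tb / REPLICATION_FACTOR

print(f"Raw pooled capacity:    {raw_tb:.1f} TB")
print(f"Usable shared capacity: {usable_tb:.1f} TB after {REPLICATION_FACTOR}x replication")
```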

One of the advantages of both of these approaches is that they can leverage local PCIe-based solid-state disk (SSD) and intelligent data placement so that highly active data can be read from a PCIe channel instead of SAS-attached SSD or mechanical hard drives. That capability moves these systems out of the starter system category and into solutions for larger enterprises looking for high-density virtualization without the storage complexities that often follow it.
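The intelligent data placement piece can be thought of as a tiering policy. The sketch below is a simplified, hypothetical version, with made-up activity thresholds and workload names, that routes the hottest blocks to PCIe flash, warm blocks to SAS-attached SSD, and cold blocks to mechanical disk.

```python
# Hypothetical sketch of an intelligent data placement policy: the most
# active data lands on PCIe flash, the rest on slower, cheaper tiers.
# The thresholds and sample workloads below are illustrative assumptions.
def choose_tier(reads_per_hour: int) -> str:
    """Place data on the fastest tier its activity level justifies."""
    if reads_per_hour > 1000:
        return "PCIe SSD"    # hottest data served over the PCIe channel
    if reads_per_hour > 100:
        return "SAS SSD"
    return "HDD"

access_counts = {"vm-boot-image": 5000, "user-profiles": 250, "archive-logs": 3}
for block, reads in access_counts.items():
    print(f"{block}: {reads} reads/hr -> {choose_tier(reads)}")
```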

While the software-only approach allows the strategy to be implemented in existing virtualized infrastructures, there is some concern about providing predictable storage performance when the environment is placed under stress. Certainly this can be accounted for, but it is another step in the planning and tuning process. The turnkey hardware/software approach should be able to avoid much of that planning and tuning, since the vendor can account for maximum load as it designs the system.

There is a certain amount of inflexibility in the turnkey approach, however, and you become dependent on the vendor providing the solution. You need to make sure that you're comfortable with the vendor and are confident that it will provide long-term solutions to meet your business demands. While the mixed-vendor approach, as we discussed in The Storage Hypervisor, may have additional storage management concerns, it does provide a greater level of flexibility. Which method you choose depends largely on the expertise of your personnel and the time they have available to manage a solution.

Follow Storage Switzerland on Twitter

George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. Storage Switzerland's disclosure statement.






