Do You Need A Storage Performance Manager?
By George Crump
InformationWeek
The industry cure-all is solid-state disk (SSD). While SSD can help in many situations, it is far from the universal antidote that many vendors claim it to be. As we discuss in our "Visual SSD Readiness Guide," the entire environment has to be tuned to take advantage of SSD before the investment pays its maximum benefit. While you will almost always see a performance gain just by "throwing SSD" at a performance problem, a bottleneck in the server, the application, or the storage network will limit how much you get out of that investment.
There are also many cases where you can improve performance without moving to solid-state storage. For example, a virtualized environment may be suffering a storage performance problem simply because too many virtual machines are accessing the same physical drives at the same time. While SSD would be a valid solution, a simpler and less expensive approach may be to move some of the virtual machine disk images to a different set of disks, or even to a different disk system altogether.
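That redistribution idea can be treated as a simple load-balancing exercise. The sketch below is illustrative only -- the datastore names, IOPS figures, and the `suggest_moves` helper are hypothetical, not part of any vendor tool -- but it shows how the load on a hot datastore can be spread out by moving the busiest images that still fit elsewhere:

```python
# Hypothetical sketch: given per-VM IOPS estimates and the datastore each
# disk image lives on, suggest moves that even out aggregate load.
from collections import defaultdict

def suggest_moves(vm_iops, placement, datastores):
    """Greedily move busy VMs off the hottest datastore until no
    datastore carries much more than its fair share of the IOPS.
    Mutates `placement` in place and returns the list of moves."""
    load = defaultdict(int)
    for vm, ds in placement.items():
        load[ds] += vm_iops[vm]
    for ds in datastores:
        load.setdefault(ds, 0)
    fair_share = sum(vm_iops.values()) / len(datastores)
    moves = []
    hot = max(load, key=load.get)
    while load[hot] > fair_share * 1.2:   # allow 20% skew before moving anything
        cold = min(load, key=load.get)
        candidates = [v for v, d in placement.items() if d == hot]
        # only consider moves that actually reduce the hot datastore's lead
        movable = [v for v in candidates if load[cold] + vm_iops[v] < load[hot]]
        if not movable:
            break
        vm = max(movable, key=lambda v: vm_iops[v])
        placement[vm] = cold
        load[hot] -= vm_iops[vm]
        load[cold] += vm_iops[vm]
        moves.append((vm, hot, cold))
        hot = max(load, key=load.get)
    return moves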
Improving storage performance is often viewed as an expensive option that should be undertaken only when current performance levels can no longer be tolerated. That view leads to an interrupt-driven, all-hands-on-deck firefighting approach to the problem. Instead, performance optimization should be a task that IT administrators perform on a regular--if not daily--basis, so that performance tuning becomes a normal part of the workflow. This shift is going to require two changes in attitude, along with new capabilities in the IT tool chest.
The first change in attitude is that performance optimization can no longer be considered a "project" that occurs once a year or once a quarter. No longer can tools be rolled out, measurements taken, and a diagnosis delivered to the sick patient. Instead, performance needs to be treated like a "wellness" program, where it is measured constantly. As we discussed in our recent video "The Challenges of Managing a Cloud Data Center," manual tools and spreadsheets need to be replaced by storage management software and hardware that can capture inline information and report it back to IT administrators in real time.
Armed with real-time information, changes to the architecture or storage design can be made before a performance problem ever becomes noticeable and causes an interruption. If some of those changes will cause downtime, they can, because they are no longer an emergency, be scheduled for the next maintenance window.
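The "wellness" style of monitoring described above can be sketched in a few lines. Everything here is an assumption for illustration -- `LatencyWatcher`, its window size, and its threshold are invented, and a real deployment would pull samples from whatever metrics your array or hypervisor actually exposes -- but it captures the key idea: alert on a sustained trend, not a single spike, so the fix can wait for a maintenance window.

```python
# Minimal sketch of continuous measurement: keep a rolling window of
# latency samples and flag a sustained rise before users notice it.
from collections import deque

class LatencyWatcher:
    def __init__(self, window=30, threshold_ms=20.0):
        self.samples = deque(maxlen=window)   # rolling window of recent samples
        self.threshold_ms = threshold_ms

    def record(self, latency_ms):
        self.samples.append(latency_ms)

    def needs_attention(self):
        # Judge the sustained average, not one spike, so remediation
        # can be queued as planned work rather than an emergency.
        if len(self.samples) < self.samples.maxlen:
            return False
        return sum(self.samples) / len(self.samples) > self.threshold_ms
```

Fed from a scheduler or metrics pipeline, a watcher like this turns performance tuning into routine daily work instead of a firefight.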
The second attitude change, and this may be less popular, is to consider a new position in the enterprise data center that is focused on performance management. As we describe in our article "What is a Virtualization Performance Specialist?" it may be time to have an individual who crosses the functional boundaries of applications, servers, hypervisors, and storage to take a holistic view of the enterprise from a performance perspective. As we have discussed in the past, storage performance is not just how fast the disk drive spins--there are many variables to consider, and those variables cross IT group boundaries.
This individual would need a tool that provides that holistic view, and they would monitor it constantly for performance improvement opportunities. Unlike performance optimization as it is practiced today, which waits for something to go wrong, this individual would look for something that can go "more right," so that the existing infrastructure can be leveraged to its fullest.
The idea of additional staffing may not be a popular discussion in many IT departments, but this position may pay for itself several times over, for three reasons. First, instead of throwing hardware at the problem, you are throwing intelligence at it; as a result, many storage and infrastructure upgrades may be avoided altogether. Second, when an upgrade is needed, the performance specialist can make sure that the right investment is made and that it pays the maximum dividend. Finally, this person may make the rest of your staff more productive, because they can focus on the job at hand instead of dropping everything for a performance-tuning firefighting effort.
George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. Storage Switzerland's disclosure statement.