Biggest Storage Trend of 2012
By George Crump
InformationWeek
Performance management is being driven on two fronts. First, server and desktop virtualization, along with increasingly important mission-critical databases, demand more speed. Second, solid-state disk (SSD) storage and higher-speed networks now make it possible to deliver that speed.
What is missing is an understanding of which applications or virtual machines qualify for the most bandwidth and the highest-performing storage. The IT skills needed to diagnose and address these problems must be developed this year. And while IT personnel build those skills, administrators desperately need tools that can surface this information for them.
As we discussed in "What is a Virtualization Performance Specialist?", these tools need to provide rapid heads-up analysis and close to, if not actual, real-time monitoring of the environment. They also need to offer insight into specific virtual machines as well as a holistic view of the virtual infrastructure, so that performance-sensitive virtual machines can be properly balanced against less performance-critical ones.
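As a minimal sketch of the per-VM visibility such a tool needs -- assuming a VMware vSphere environment and the pyVmomi Python library, with placeholder hostname and credentials -- a script might poll each virtual machine's quick stats and rank the heaviest consumers:

import atexit
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details -- substitute your own vCenter and credentials.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="admin",
                  pwd="secret", sslContext=ctx)
atexit.register(Disconnect, si)

# Walk every VM in the inventory.
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)

stats = []
for vm in view.view:
    qs = vm.summary.quickStats
    stats.append((vm.name, qs.overallCpuUsage or 0, qs.guestMemoryUsage or 0))
view.DestroyView()

# Rank VMs by CPU demand -- candidates for the fastest storage tier float to the top.
for name, cpu_mhz, mem_mb in sorted(stats, key=lambda s: s[1], reverse=True):
    print(f"{name:30s}  CPU {cpu_mhz:6d} MHz   Mem {mem_mb:6d} MB")

A real performance-management tool would also sample storage latency and IOPS over time, but even a crude ranking like this helps decide which workloads actually qualify for the fastest tier.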
These tools must also look outside the virtual environment and into the application environment. Many business-critical, ultra-performance-sensitive applications have yet to be virtualized, and some never will be. Today the performance specialist may need to juggle multiple tools to get performance analysis of the entire environment. In the future, tools that manage application performance, virtualization performance, and storage infrastructure performance should merge into a single application or suite with a single interface.
The alternative to developing a performance-management practice is to choose storage systems and infrastructures that can meet any performance demand. In other words, just make the whole environment fast. Although this might not be the most efficient way to optimize and manage performance, it does fit the traditional IT model of throwing hardware at the problem.
I am not against throwing hardware at the problem--if we can prove that it is more cost-effective than aggressively managing the problem. SSD-only storage systems and high-speed 10 Gbps networks are coming within the price range of many data centers. If the tools can't be developed, it might be easier--and even less expensive--to buy storage that's fast enough for the entire environment rather than spend the time and effort to fine-tune performance.
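To make that comparison concrete, here is a back-of-the-envelope sketch in Python. Every figure in it is a hypothetical assumption, chosen only to illustrate the structure of the trade-off, not a quote from any vendor:

# Back-of-the-envelope comparison: buy speed vs. manage speed.
# All figures below are hypothetical, for illustration only.

# Option A: all-SSD storage for the whole environment.
ssd_cost_per_gb = 10.00      # assumed $/GB for enterprise SSD
capacity_gb = 20_000         # assumed total capacity needed
option_a = ssd_cost_per_gb * capacity_gb

# Option B: disk-based storage plus an ongoing tuning practice.
disk_cost_per_gb = 1.50      # assumed $/GB for spinning disk
tools_cost = 25_000          # assumed performance-analysis tool licensing
specialist_hours = 500       # assumed annual tuning effort
hourly_rate = 75.00          # assumed fully loaded hourly labor cost
option_b = (disk_cost_per_gb * capacity_gb
            + tools_cost
            + specialist_hours * hourly_rate)

print(f"All-SSD:            ${option_a:>10,.0f}")
print(f"Tuned disk system:  ${option_b:>10,.0f}")
# If option_a approaches option_b, throwing hardware at the problem
# is the simpler buy; otherwise the management practice pays for itself.

The point is not the specific totals but the shape of the math: as SSD prices fall, option A's premium shrinks, and the recurring labor in option B starts to look like the more expensive line item.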
With IT budgets flat, many IT departments will be forced to get the most out of what they have, with perhaps a small performance fix for specific situations. This is where the right tools, combined with the right IT knowledge, are critical to getting maximum performance out of the environment. Virtualization has in many ways consumed the "headroom" that IT used to count on to absorb sudden surges. In 2012, the job of making sure sudden performance peaks don't shut down critical applications will fall to the performance-management specialist.
George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. Find Storage Switzerland's disclosure statement here.