Examine The True Cost Of Storage Performance
By George Crump
InformationWeek
While there are plenty of options that can solve a performance problem, and vendors are quick to tout their respective 2 million IOPS benchmarks, every solution to a performance problem carries a cost. Evaluating those costs and deciding which route is best for your data center is a critical step in understanding the overall cost of performance. In this entry we will look at the cost of generating performance from a hard disk drive (HDD) storage system.
Performance of an HDD-based array can improve as more drives are added to the system, assuming there is sufficient queue depth to keep each new drive busy. Queue depth is essentially the number of near-simultaneous storage requests made by the environment: a large group of users all hitting the same database at once, for example, or a high number of applications all accessing the same storage array at the same time.
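As a rough, hedged illustration of that relationship, the Python sketch below (every figure in it is assumed purely for the example) spreads a fixed number of outstanding requests across an increasing number of drives and shows how the load on each individual drive drops as drives are added.

# Rough sketch of how per-drive queue depth falls as drives are added.
# Every number here is an illustrative assumption, not a vendor specification.
outstanding_requests = 256     # assumed near-simultaneous requests from the environment
iops_per_hdd = 180             # assumed IOPS for a single 15,000 RPM drive

for drives in (8, 16, 32, 64, 128):
    per_drive_queue = outstanding_requests / drives
    aggregate_iops = drives * iops_per_hdd
    print(f"{drives:4d} drives: ~{per_drive_queue:5.1f} requests queued per drive, "
          f"~{aggregate_iops} aggregate IOPS")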
Part one of the problem is generating enough queue depth in the first place. For single-threaded applications, adding more hard drives typically won't improve performance; they are at the mercy of the rotational speed of the drive. Part two of the problem is being able to afford enough drives to bring the queue depth down toward zero.
If a single-threaded application has a performance problem, the only solution is to reduce latency. In the case of HDD technology, that means moving to a faster-rotating (higher RPM) hard disk. The faster the drive rotates, the more expensive it is. The big problem with rotational speed is that today we top out at 15,000 RPM drives. In other words, you can only go so fast.
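To put a rough number on that ceiling: average rotational latency is about half of one platter revolution, or 60 / RPM / 2 seconds. The back-of-envelope calculation below illustrates why 15,000 RPM is a hard wall.

# Back-of-envelope rotational latency: half a revolution on average.
for rpm in (7200, 10000, 15000):
    avg_latency_ms = (60.0 / rpm) / 2 * 1000
    print(f"{rpm:6d} RPM: ~{avg_latency_ms:.1f} ms average rotational latency")

Even the fastest spinning drive pays roughly 2 ms of rotational delay on every random request, and for a single-threaded workload no amount of money spent on additional drives changes that.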
Many performance problems can be alleviated by adding more drives to the array, because the application is multi-threaded or there is a lot of simultaneous access from multiple applications. The goal is to lower queue depth by adding drives, and the true cost here is the cost of actually buying those drives. For some environments, making a significant dent in queue depth will require purchasing many tens, and potentially hundreds, of hard disk drives. There is also a hidden cost: this is not an efficient use of drive capacity, since most of the drives will likely not be fully used. As a result, resolving a high-queue-depth performance problem with HDD technology typically means hundreds of gigabytes, if not terabytes, of wasted disk capacity.
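A hedged back-of-envelope version of that math, with every figure assumed purely for illustration (the target IOPS, per-drive IOPS, and drive capacity are not from any particular product), shows how quickly both the drive count and the stranded capacity add up.

import math

# Illustrative sizing sketch: buy drives for IOPS, strand the leftover capacity.
# All figures are assumptions for the example, not benchmark results.
target_iops = 50_000
iops_per_hdd = 180             # assumed per-drive IOPS for a 15,000 RPM disk
capacity_per_hdd_gb = 600      # assumed per-drive capacity
capacity_needed_gb = 10_000    # what the application actually requires

drives_needed = math.ceil(target_iops / iops_per_hdd)
capacity_bought_gb = drives_needed * capacity_per_hdd_gb
stranded_gb = capacity_bought_gb - capacity_needed_gb

print(f"Drives needed for IOPS alone: {drives_needed}")
print(f"Capacity purchased: {capacity_bought_gb} GB, of which {stranded_gb} GB goes unused")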
There are other costs associated with solving performance problems with hard disks, especially in the high-queue-depth scenario. First, all of these drives need to be powered and cooled, so electrical costs are also part of the true cost of solving a performance problem with hard disks.
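As a rough sketch of that electrical cost (the wattage, cooling overhead, and electricity rate below are all assumptions chosen only for illustration), you can multiply the drive count by per-drive power, a cooling factor, and your utility rate.

# Illustrative annual power estimate; wattage, cooling overhead, and rate are assumed.
drives = 278                   # drive count from the sizing sketch above
watts_per_drive = 10           # assumed active power per HDD
cooling_overhead = 1.5         # assumed multiplier to account for cooling
price_per_kwh = 0.12           # assumed electricity rate in dollars

kwh_per_year = drives * watts_per_drive * cooling_overhead * 24 * 365 / 1000
print(f"~{kwh_per_year:,.0f} kWh per year, roughly ${kwh_per_year * price_per_kwh:,.0f} in electricity")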
Second, there is the cost of the physical floor space required to house all these disk drives. That is not a problem until you actually run out of it, which, unfortunately, is becoming a common occurrence in many data centers. In some cases, the true cost of solving performance problems with hard drives comes when adding one more drive means building a brand-new data center, which of course costs millions of dollars.
As a result, the true cost of solving performance problems with HDD technology can be so high that customers are increasingly looking to solid state storage to solve those problems. It offers higher IOPS per gigabyte of capacity and consumes less power and floor space. Also, because it is not rotational media, there is no rotational latency, so even a single-threaded application that does not generate high queue depths will be greatly helped by solid state storage.
Solid state storage, though, is not without its own true-cost problems, something we will discuss in an upcoming entry.
Follow Storage Switzerland on Twitter
George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. Storage Switzerland's disclosure statement.