In Storage Switzerland's recent article "What is RAID?" we explain that RAID is a protection scheme that allows a volume to survive a drive failure and still provide access to the data on that volume. The problem is that with today's drive technology, the time required to rebuild a failed drive is now measured in double-digit hours, if not days. During this window, performance can degrade and there is the risk of additional drive failures. If one more drive fails than the RAID level can tolerate, the result is complete data loss, and recovery from backup must begin.
There is also the reality that a rebuild is more likely to hit an error as the capacity per drive increases. The bit error rate (BER) specifies, on average, how much data can be read from a drive before an unrecoverable read error occurs. The BER has stayed relatively flat while drive capacities have skyrocketed, so reading an entire drive now means pushing much more data past the same error rate. A 2-TB drive is significantly more likely to encounter an error than a 1-TB drive when reading the entire drive end to end, which is exactly what happens during a RAID rebuild.
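To put a rough number on this, the back-of-the-envelope sketch below estimates the chance of hitting at least one unrecoverable read error (URE) while reading a full drive. It assumes independent bit errors and a BER of one error per 10^14 bits, a commonly quoted spec for consumer-class drives; actual drives and workloads will differ.

```python
def p_unrecoverable_read(capacity_tb: float, ber: float = 1e-14) -> float:
    """Estimated probability of at least one URE when reading the
    entire drive once, as a RAID rebuild must do.

    Assumes independent bit errors at the quoted bit error rate
    (1 error per 1/ber bits read).
    """
    bits = capacity_tb * 1e12 * 8  # drive capacity in bits
    return 1 - (1 - ber) ** bits

for tb in (1, 2):
    print(f"{tb} TB full read: {p_unrecoverable_read(tb):.1%} chance of a URE")
```

Under these assumptions the 2-TB drive's chance of an error during a full read is roughly double that of the 1-TB drive, which is the trend the article describes.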
Given this combination of factors, it is likely that many large storage systems will be in a near-constant state of rebuild. The industry is clearly already living with this reality; RAID 5 and RAID 6 were not abandoned last year. The most common "solution" has been simply to tolerate the problem. Storage vendors can do this by making sure there is enough storage controller processing power to deliver adequate system performance while a rebuild is occurring. It would not surprise me to see some vendors allocate dedicated standby processors to accelerate the rebuild process.
Another solution may be to use flash-based storage for all mission-critical data. While flash modules can fail just like hard drives, the performance of flash makes the rebuild process significantly faster. In our testing, a rebuild of a RAID-protected flash volume typically completes in less than 15 minutes.
Eventually, though, we may throw RAID out altogether and move to an erasure coding algorithm, or even more of a mirroring and replication strategy. After all, capacity is now inexpensive, and a storage system that can automatically maintain x number of copies of data may be the simplest and most practical approach for data that is going to remain on hard disk. This also gives you greater granularity, since you can set different levels of redundancy for different types or ages of data.
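The core idea behind erasure coding can be sketched with the simplest possible code: a single XOR parity block, which lets any one lost data block be rebuilt from the parity plus the survivors. This is only an illustration; production erasure-coded systems use more general codes (such as Reed-Solomon) that tolerate multiple simultaneous losses.

```python
def xor_parity(blocks: list[bytes]) -> bytes:
    """XOR a set of equal-length blocks together.

    Used both to compute the parity block and, later, to rebuild a
    lost block from the parity plus the surviving data blocks.
    """
    result = bytes(len(blocks[0]))
    for block in blocks:
        result = bytes(a ^ b for a, b in zip(result, block))
    return result

# Three data blocks protected by one parity block (a 3+1 scheme).
data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_parity(data)

# Simulate losing the second block, then rebuild it.
rebuilt = xor_parity([parity, data[0], data[2]])
assert rebuilt == data[1]
```

A replication strategy, by contrast, simply keeps x full copies: more capacity consumed, but no rebuild computation at all, which is one reason it appeals for cold data on inexpensive disk.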
My expectation is that we will see a shift toward flash storage for mission-critical, active data, where RAID rebuilds are less time-consuming and space efficiency matters more because of cost, and toward a replication, redundant-copy strategy for older data stored on hard disk.
George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. Find Storage Switzerland's disclosure statement here.