The time it takes to rebuild a RAID-protected volume makes RAID unwieldy with today's high-capacity drives.
RAID has become a staple of the modern-day storage system, but as the number and capacity of drives in a storage system continue to increase, questions have arisen about RAID's viability. At issue is the amount of time it takes for a RAID-protected volume to rebuild itself after a drive failure. While 2011 brought many predictions of RAID's demise, it remains the protection algorithm of choice for most storage systems. Will 2012 be any different?
In Storage Switzerland's recent article What is RAID? we explain that RAID is a protection scheme that allows a volume to survive a drive failure and still provide access to its data. The problem is that with today's drive technology, the time it takes to rebuild a failed drive is now measured in double-digit hours, if not days. During this window performance can degrade, and there is the risk of additional drive failures. If more drives fail than the RAID level can tolerate, the result is complete data loss, and recovery from backup must begin.
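A back-of-the-envelope calculation shows why. A rebuild must write every sector of the replacement drive, so the best case is capacity divided by sustained rebuild throughput; the throughput figures below are illustrative assumptions, not measurements.

```python
# Rough rebuild-time estimate: capacity / sustained rebuild throughput.
# The throughput values used below are illustrative assumptions; real
# rebuilds are often slower because the array keeps serving production I/O.

def rebuild_hours(capacity_tb: float, throughput_mb_s: float) -> float:
    """Best-case hours to rebuild a drive of `capacity_tb` (decimal) terabytes."""
    capacity_mb = capacity_tb * 1_000_000  # 1 TB = 1,000,000 MB (decimal units)
    return capacity_mb / throughput_mb_s / 3600

print(f"2 TB at 50 MB/s:  {rebuild_hours(2, 50):.1f} hours")   # ~11.1 hours
print(f"2 TB at 500 MB/s: {rebuild_hours(2, 500):.1f} hours")  # ~1.1 hours
```

Even at an assumed 50 MB/s sustained, a 2-TB drive takes more than 11 hours in the best case; add contention from production workloads and multi-day rebuilds follow.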
There is also the reality that a rebuild is more likely to hit an error as the capacity per drive increases. Every drive has a specified bit error rate (BER), which is essentially how much data can be read from the drive before you should expect an unrecoverable read error. The BER has stayed relatively the same while drive capacities have skyrocketed, so a full read of a modern drive now consumes far more of that error budget. A 2-TB drive is therefore significantly more likely than a 1-TB drive to encounter an error when reading the entire drive, which is exactly what happens during a RAID rebuild.
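The effect can be quantified with a simple sketch, assuming an independent per-bit error probability and the 1-in-10^14 BER commonly quoted on desktop-drive spec sheets (both are simplifying assumptions):

```python
import math

def ure_probability(capacity_tb: float, ber: float = 1e-14) -> float:
    """Probability of at least one unrecoverable read error when reading
    the whole drive, assuming independent per-bit errors:
        P = 1 - (1 - ber)^bits
    ber = 1e-14 is a typical consumer-drive spec (1 error per 10^14 bits read).
    """
    bits = capacity_tb * 1e12 * 8  # decimal terabytes -> bits
    # log1p/expm1 keep the computation numerically stable for tiny ber
    return -math.expm1(bits * math.log1p(-ber))

print(f"1 TB: {ure_probability(1):.1%}")  # roughly 7.7%
print(f"2 TB: {ure_probability(2):.1%}")  # roughly 14.8%
```

The independence assumption is a simplification, but it illustrates the point: doubling capacity at a constant BER roughly doubles the exposure during a full-drive read.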
Given this combination of factors, many large storage systems are likely to be in a near-constant state of rebuild. Clearly the industry is learning to deal with this reality; we didn't abandon RAID 5 or RAID 6 last year. The most common "solution" has been simply to live with the problem. Storage vendors can do this by making sure there is enough storage controller processing power to provide adequate system performance while the rebuild is occurring. It would not surprise me to see some vendors allocate special standby processors to help with the rebuild process.
Another solution for RAID may be to use flash-based storage for all mission-critical data. While flash modules can fail just like hard drives, the performance of flash makes the rebuild process significantly faster. In our testing, a rebuild of a RAID-protected flash volume typically completes in less than 15 minutes.
Eventually, though, we may just throw RAID out altogether and go with an erasure coding algorithm, or even more of a mirroring and replication strategy. After all, capacity is now inexpensive, and having a storage system that can automatically maintain x number of copies of data may be the simplest and most practical approach for data that is going to remain on a hard disk. This also gives you greater granularity, because you can set different levels of redundancy for different types or ages of data.
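The capacity trade-off between these approaches is easy to sketch. As a generic illustration (the layouts below are examples, not any particular vendor's implementation): n-way replication stores n full copies, while a parity or erasure-coded layout of k data strips plus m redundancy strips tolerates m failures at a raw-to-usable ratio of (k + m) / k.

```python
def replication_overhead(copies: int) -> float:
    """Raw capacity needed per byte of usable data with n full copies."""
    return float(copies)

def erasure_overhead(data_strips: int, parity_strips: int) -> float:
    """Raw/usable ratio for k data + m redundancy strips (tolerates m failures)."""
    return (data_strips + parity_strips) / data_strips

print(f"3-way replication:     {replication_overhead(3):.2f}x raw capacity")
print(f"RAID 6-style (8+2):    {erasure_overhead(8, 2):.2f}x raw capacity")
print(f"Erasure coding (10+4): {erasure_overhead(10, 4):.2f}x raw capacity")
```

Replication maximizes simplicity and rebuild speed (recovery is just copying a surviving replica), while erasure coding trades rebuild-time computation for a much lower capacity tax, which is why it tends to win for colder data.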
My expectation is that we will see a shift toward flash storage for mission-critical active data, where RAID rebuilds are less time-consuming and space efficiency matters more because of cost. Then we can use more of a replication, redundant-copy strategy for older data stored on hard disk.