Data Recovery Needs To Change

Current procedures for data recovery take too long, and enterprises should consider new approaches to restore their applications more quickly.

George Crump, President, Storage Switzerland

November 10, 2011

Most IT professionals have heard the line: "Backup is not about backing up, it is about recovery." Usually those words are spoken by marketing execs at backup software companies. And it is a true statement. But for data recovery to work, it has to be about more than successfully copying data back to its original location. True data recovery has to bring the application back online quickly and it has to work reliably.

Conventional, old-school recovery has typically meant replacing a failed hard drive or server and then copying the data from the backup device to the replacement hardware. Frequently, when a server or hard drive fails, a replacement part has to be ordered. That takes time.

When the new part is delivered, it has to be installed. That takes time. If the failed part is a server, the operating system may need to be reinstalled, and maybe even the application. That also takes time. If the failed part is a hard drive, you may have suffered a multi-drive failure on an array, and those drives will need to be replaced. The array may also need to be reconfigured. Worse, the array could be part of a shared storage system, so the failure actually impacts multiple servers and their applications. More time.

Once the new part or device is ordered, delivered, installed, and configured, you can finally start moving data back to the new system. This always takes a lot longer than you think, for a few reasons. First, writes are always slower than reads. Second, the array has to calculate parity for every write (twice over in the case of RAID 6). Third, the data has to physically move across the network. Once again, this all takes time.
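To put rough numbers on it, here is a minimal back-of-the-envelope sketch in Python. The data size, link speed, and RAID write penalty below are illustrative assumptions, not measurements from any particular environment:

```python
# A rough sketch of why copying data back takes so long. The figures are
# assumptions chosen for illustration, not benchmarks.

def restore_hours(data_tb, network_mbps=1000, write_penalty=0.6):
    """Estimate hours to copy data back over the network.

    network_mbps  -- usable network throughput in megabits per second
    write_penalty -- fraction of throughput the array sustains on writes
                     once RAID parity calculation is factored in
    """
    data_megabits = data_tb * 1024 * 1024 * 8      # TB -> megabits
    effective_mbps = network_mbps * write_penalty  # writes are slower than reads
    return data_megabits / effective_mbps / 3600   # seconds -> hours

# 10 TB over a gigabit link, with parity overhead, is well over a day.
print(f"{restore_hours(10):.1f} hours")  # ~38.8 hours
```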

As we discussed in our recent article "Recovery is about TIME," the way to bring applications back online faster is to change the recovery process so that these bottlenecks are eliminated. First, modern backup systems should have the ability to recover in place, meaning they can present the data to the application directly from the backup device. This saves the time required to order and configure a replacement array or drive and to copy all of the data back over the network. All you need is an extra server to point at the data if the original server has failed.
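As a rough illustration of where that time goes, the step durations below are hypothetical, but they show which parts of the process recover-in-place takes off the critical path:

```python
# Hypothetical step durations (in hours) for the two recovery approaches.
# None of these numbers come from a real incident; they only illustrate
# which steps recover-in-place eliminates.

copy_back_recovery = {
    "order and receive replacement hardware": 24,
    "install and configure the new array": 4,
    "reinstall OS and application": 3,
    "copy data back over the network": 12,
}

recover_in_place = {
    "point a standby server at the backup device": 1,
    "bring the application online from the backup": 1,
}

for name, steps in [("copy-back", copy_back_recovery),
                    ("recover in place", recover_in_place)]:
    print(f"{name}: {sum(steps.values())} hours")
# copy-back: 43 hours
# recover in place: 2 hours
```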

The problem is that you may not have an extra server, or it may take time to configure one so that it can run the application. Server virtualization is an ideal capability to leverage here. The backup software should have the ability to spin up a virtual machine and run the application directly from the backup device: basically, a virtual recovery option.
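Conceptually, the workflow looks something like the sketch below. The BackupAppliance and Hypervisor classes and their methods are hypothetical placeholders; real backup products expose this capability through their own consoles or APIs:

```python
# A minimal sketch of a "virtual recovery" workflow under the assumptions above.

class BackupAppliance:
    def export_latest_backup(self, app_name):
        """Present the most recent backup of the application as a
        datastore, without copying it anywhere."""
        return f"nfs://backup-appliance/exports/{app_name}"

class Hypervisor:
    def boot_vm_from_datastore(self, datastore_url):
        """Register and power on a VM whose disks live on the exported
        backup, so the application runs directly from the backup device."""
        print(f"Booting recovery VM from {datastore_url}")

# The recovery itself is two steps: export the backup, then boot from it.
appliance, hypervisor = BackupAppliance(), Hypervisor()
datastore = appliance.export_latest_backup("orders-db")
hypervisor.boot_vm_from_datastore(datastore)
```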

As we will detail in our upcoming webcast "Will your Backup Plan Answer the Recovery Call?," there are several ways to implement virtual recovery, and it does not require that you already have a virtual infrastructure in place. For example, if a backup appliance is used, it can potentially serve as a standby virtual environment that hosts the application until the original system can be replaced.

The value of bringing an application online in a virtual manner is that it decreases the time and effort involved in getting back to business. It also allows the replacement of the failed server or drive to be handled without people screaming that they can't do their jobs until the system is fixed, which should reduce errors. As your environment grows and evolves, a critical capability to look for in your next backup system is the ability to recover in place.

About the Author(s)

George Crump

President, Storage Switzerland

George Crump is president and founder of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. With 25 years of experience designing storage solutions for datacenters across the US, he has seen the birth of such technologies as RAID, NAS, and SAN. Prior to founding Storage Switzerland, he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration, and product selection. George is responsible for the storage blog on InformationWeek's website and is a regular contributor to publications such as Byte and Switch, SearchStorage, eWeek, SearchServerVirtualization, and SearchDataBackup.
