Current procedures for data recovery take too long, and enterprises should consider new approaches to restore their applications more quickly.
Most IT professionals have heard the line: "Backup is not about backing up, it is about recovery." Usually those words come from marketing executives at backup software companies, but the statement is true nonetheless. For data recovery to work, though, it has to be about more than successfully copying data back to its original location. True data recovery has to bring the application back online quickly, and it has to do so reliably.
Conventional recovery efforts have typically meant replacing a failed hard drive or server and then copying the data from the backup device to the new device. Frequently, when a server or hard drive fails, a replacement part has to be ordered. That takes time.
When the new part is delivered, it has to be installed. That takes time. If the failed part is a server, the operating system may need to be reinstalled, and perhaps the application as well. That also takes time. If the failed part is a hard drive, you may have suffered a multi-drive failure on an array, which means those drives need to be replaced and the array may need to be reconfigured. Worse, the array could be part of a shared storage system, so the failure actually impacts multiple servers and their applications. More time.
Once the new part or device is ordered, delivered, installed and configured, you can finally start moving data back to the new system. This always takes longer than you expect, for a few reasons. First, writes are slower than reads. Second, RAID has to calculate parity (double parity in the case of RAID 6). Third, the data has to physically move across the network. Once again, this all takes time.
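To make the cost of that copy-back step concrete, here is a back-of-the-envelope estimate of restore time. The numbers (10 TB of data, a 10 GbE link, an array sustaining 400 MB/s of writes) are illustrative assumptions, not figures from the article; real restores are often slower still because of parity calculation and small-file overhead.

```python
# Back-of-the-envelope restore-time estimate (illustrative numbers only).
# Assumes the restore is bottlenecked by the slower of network throughput
# and the target array's sustained write throughput.

def restore_hours(data_tb, network_gbps, write_mbps):
    data_mb = data_tb * 1_000_000            # decimal TB -> MB
    network_mbps = network_gbps * 1000 / 8   # Gb/s -> MB/s
    effective = min(network_mbps, write_mbps)
    return data_mb / effective / 3600

# 10 TB over 10 GbE to an array that sustains 400 MB/s of writes:
print(round(restore_hours(10, 10, 400), 1))  # ~6.9 hours, best case
```

Even this optimistic estimate assumes the network and array are dedicated to the restore; in practice, a business could easily lose a full working day before the application is back.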
As we discussed in our recent article "Recovery is about TIME," the way to bring applications back online faster is to change the recovery process to eliminate all of these bottlenecks. First, modern-day backup systems should be able to recover in place, meaning they can present the data to the application directly from the backup device. This saves the time required to order and reconfigure a failed array or drive, and to copy all the data back over the network. All you need is an extra server to point at the data if the original server has failed.
The problem is that you may not have an extra server, or it may take time to configure one to run the application. Server virtualization is an ideal capability to leverage here. The backup software should have the ability to spin up a virtual machine and run the application directly from the backup device, essentially a virtual recovery option.
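The core idea of recovery in place is that nothing is copied: the application is simply repointed at the backup copy. The sketch below simulates that with a symlink on a local filesystem; the paths are hypothetical, and a real backup appliance would instead export the data over NFS or iSCSI for the standby server or VM to mount.

```python
# Conceptual sketch of recover-in-place: repoint the application at the
# backup copy instead of copying every byte back to a rebuilt volume.
# Paths are hypothetical; a symlink stands in for an NFS/iSCSI export.
import pathlib
import tempfile

# A "backup appliance" holding the last good copy of the application data.
backup = pathlib.Path(tempfile.mkdtemp()) / "backup"
backup.mkdir()
(backup / "db.dat").write_bytes(b"application data")

# In-place recovery: no data movement, just repoint the application's
# data path at the backup location.
app_data_path = pathlib.Path(tempfile.mkdtemp()) / "data"
app_data_path.symlink_to(backup)

# The application now reads straight from the backup device.
print((app_data_path / "db.dat").read_bytes())
```

The trade-off is that the application runs at the backup device's performance until the primary system is rebuilt and the data is migrated back during a maintenance window.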
As we will detail in our upcoming webcast "Will your Backup Plan Answer the Recovery Call?," there are several ways to implement virtual recovery and it does not mean that you must have a virtual infrastructure in place. For example, if a backup appliance is used it can potentially serve as a standby virtual environment that can host the application until the original system can be replaced.
The value of bringing an application online virtually is decreasing the time and effort involved in getting back to business. It also allows the failed server or drive to be replaced without people screaming that they can't do their jobs until the system is fixed, which should reduce errors. As your environment grows and evolves, a critical capability to look for in your next backup system is the ability to recover in place.