Not so long ago, InformationWeek surveys showed that many companies' disaster recovery plans were largely incomplete and unproven. For example, among 420 respondents to our 2011 Backup Technologies Survey, just 38% tested their restoration processes at least once a year for most applications. Only half backed up all their virtual servers every week.
Since then, things have improved, particularly on the technology side. The shift has come about because the applications IT fields are increasingly central to business operations, and downtime means serious money lost. That translates into budget for business continuity and disaster recovery programs. Eighty percent of respondents to our new InformationWeek 2013 State of Storage Survey have strategies in place, and half of them test regularly.
The next step is to automate the process of failing over to a warm backup site -- one where hardware is up and running and data is regularly replicated from the production site. Removing people from the equation streamlines the process and lessens the possibility of error and costly delay.
We realize that many IT pros who priced an automation project just a few years ago came away with sticker shock. Between replication software, running systems in warm sites and bandwidth costs, bringing recovery times down from days to minutes cost more than most companies could justify. Implementing an automated recovery plan still isn't inexpensive, but prices have fallen enough that, with some new technologies and careful engineering, recovery times measured in minutes are now often achievable at a reasonable price; we discuss some of these technologies in our report on BC/DR and the cloud.
But while tech advances have handed IT pros a plethora of new tools to streamline failover to a warm site, complexities remain. Three areas in particular can derail automated disaster recovery: not having complete data sets in place for critical applications, a lack of bandwidth and incomplete integration.
Getting an application up quickly at a warm site means that its data must be there, ready and waiting. In manual failover scenarios, data can be a little dated: A stakeholder decides on reasonable RPO and RTO (recovery point and recovery time objective) metrics and agrees that some data might be lost. But for automated recovery of applications to work, completeness and integrity of the data at the recovery site are critical.
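To make that data-integrity requirement concrete, the check an automated failover system runs before promoting the warm site can be sketched in a few lines. This is a minimal illustration, not any vendor's implementation; the function and variable names (check_rpo, last_replicated_at, RPO) are hypothetical.

```python
from datetime import datetime, timedelta, timezone

# Agreed recovery point objective: the maximum acceptable data-loss window.
RPO = timedelta(minutes=15)

def check_rpo(last_replicated_at, now=None):
    """Return True if the warm site's newest replicated data is fresh
    enough (within the RPO) for an automated failover to proceed."""
    now = now or datetime.now(timezone.utc)
    return (now - last_replicated_at) <= RPO

# A snapshot replicated 10 minutes ago passes a 15-minute RPO;
# one replicated 20 minutes ago would fail the gate.
snapshot_time = datetime.now(timezone.utc) - timedelta(minutes=10)
print(check_rpo(snapshot_time))  # True
```

In a manual failover, a person makes this freshness call; automation has to encode it explicitly, which is why the completeness and timeliness of replicated data become hard requirements rather than judgment calls.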
Primary array replication is the best way to mirror data from one site to another without human involvement. However, the licensing and storage costs associated with replication have tabled many a failover project. In the last couple of years, we've seen a number of changes: The commoditization of enterprise storage, the emergence of upstart providers of appliances and software, and the introduction of managed replication services have dramatically driven down the cost, regardless of the platform or technique used. In fact, our State of Storage report shows the percentage of respondents using replication on a widespread or limited basis ticked up three points since last year, to 70%.
But replicating data to a warm site still requires bandwidth, and plenty of it, which brings us to our second roadblock.
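A back-of-envelope calculation shows why bandwidth becomes the roadblock. Assuming (illustrative numbers, not from the article) 200 GB of changed data per day, a 15-minute RPO, and roughly 30% protocol overhead, the sustained link speed needed to keep the warm site current works out as follows:

```python
# Illustrative bandwidth sizing for steady-state replication.
daily_change_gb = 200   # assumed daily changed data, GB
rpo_minutes = 15        # agreed recovery point objective
overhead = 1.3          # ~30% allowance for TCP/replication overhead

# Each RPO window's share of the daily change must cross the WAN
# before the window expires (changes assumed evenly distributed).
change_per_window_gb = daily_change_gb * (rpo_minutes / (24 * 60))
required_mbps = change_per_window_gb * 8 * 1000 / (rpo_minutes * 60) * overhead

print(f"Required sustained bandwidth: {required_mbps:.1f} Mb/s")  # ~24.1 Mb/s
```

Even these modest assumptions demand a link comfortably above 20 Mb/s around the clock, and bursty change rates or tighter RPOs push the requirement higher still.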