All the pieces for a virtualized, cloud-based disaster recovery strategy are in place, but building them into a robust, repeatable, secure, and automated process is still a daunting task.

Kurt Marko, Contributing Editor

June 15, 2012

Welcome to our latest InformationWeek Business Continuity and Disaster Recovery Tech Center. Over the next couple of months, we'll track news and trends in the enterprise BC/DR arena and explore the implications of hot and emerging technologies such as virtualization, cloud services, and mobility for this once staid and unexciting corner of IT.

Not too long ago, DR was a rote activity that entailed backing up a rack full of servers to a tape library, changing tapes, carting the fresh backups to a secondary facility across town, and loading the new data onto idle equipment just waiting for a catastrophe. Not only was the process time-consuming, it was also inefficient and expensive. While gigabit WAN links, data compression, and deduplication have largely obviated the need to schlep tapes for anything but long-term archiving, many organizations still use private secondary data centers for emergency operations. Well, virtualized workloads and cloud services are changing the DR calculus, offering the promise of faster, near-real-time failover while eliminating the expensive insurance policy of a backup facility.
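To make that replication math concrete, here's a minimal sketch, in plain Python, of the block-level deduplication and compression that let WAN links displace tape shipping: split a volume into fixed-size blocks, send only the blocks the remote site hasn't already seen, and compress what remains. It's an illustration of the technique under simplified assumptions, not any vendor's implementation, and every name in it is invented.

```python
# Illustrative only: fixed-block deduplication plus compression for WAN
# replication. Real products add change tracking, variable-size chunking,
# and integrity checks; this just shows why so few bytes cross the wire.
import hashlib
import zlib

BLOCK_SIZE = 4096  # hypothetical block size

def replicate(volume: bytes, remote_index: set[str]) -> list[bytes]:
    """Return the compressed payloads that actually cross the WAN."""
    payloads = []
    for offset in range(0, len(volume), BLOCK_SIZE):
        block = volume[offset:offset + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        if digest in remote_index:   # remote already has this block
            continue
        remote_index.add(digest)     # remember it for next time
        payloads.append(zlib.compress(block))
    return payloads

# A volume full of repetitive data shrinks to a tiny fraction on the wire.
volume = b"database page " * 100_000
sent = replicate(volume, remote_index=set())
print(f"{len(volume):,} bytes reduced to {sum(len(p) for p in sent):,}")
```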

As I first wrote in the InformationWeek report on BC/DR and the Cloud, although automatically replicating data to a cloud provider is easy, "backing up applications isn't quite so straightforward, though pairing internal virtualized apps with cloud-based virtual machines allows even small enterprises to achieve seamless recovery to world-class facilities with near-instantaneous failover, without breaking the bank."

The good news is that all the pieces for a "DR 2.0" architecture are in place: easily relocated virtualized applications, high-speed WANs, robust data replication technology, and a growing variety of cloud-based BC/DR services. Better yet, IT is on board with most of the requisite technology. Our surveys show that 89% of respondents have incorporated virtualization into their BC/DR plans, with three-quarters using data replication, 72% WAN optimization, and two-thirds remote colocation.

The bad news is that while the bits and pieces of a virtualized, cloud-based DR strategy are in place, building them into a robust, repeatable, secure, and automated process is still a daunting task. Our data finds that more than half of survey respondents balk at using the cloud for BC/DR because of real or perceived security risks, while 42% say they can't run some applications in the cloud. Given that dozens, if not hundreds, of large enterprises, including giants like Salesforce.com, already run critical database-backed applications in the cloud, it's likely the concern about application compatibility has more to do with architecture and implementation than with inherent cloud limitations.

As I point out in the report, building a cloud-based application recovery process around virtualized applications is challenging, and "unlike cloud backup, no canned product is going to automate away all the problems." Technically, that may still be true; however, I wouldn't word it so strongly today. As we'll explore in more detail over the coming weeks, VMware, the de facto standard enterprise virtualization platform, along with a few newcomers like VirtualSharp and Zerto, is making significant progress.

VMware vCenter Site Recovery Manager (SRM), which received a major upgrade last September, can indeed automate the replication of vSphere guests and shared data on running VMs, using a Storage Replication Adapter that acts as a middleman between SRM and most popular storage arrays. SRM, which plugs into vCenter, bundles into one interface a number of handy features that previously required manual intervention, custom coding, or multiple administrative tools. These include differential data replication (copying only changed blocks); integration with Microsoft VSS (Volume Shadow Copy Service) to ensure OS and application data consistency across replicas; support for planned migrations as well as emergency failover; automated recovery with reverse replication and remigration to the primary site; and migration policies covering entire multitier, multi-VM applications via so-called VM dependencies.
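To see what a VM-dependency policy amounts to, consider a hedged sketch, again in plain Python and emphatically not VMware's API: a toy recovery plan that powers on a three-tier application's VMs in dependency order, database first. All of the names are hypothetical.

```python
# Illustrative only: a recovery plan that honors VM dependencies by
# powering on guests in topological order. A real tool would issue
# vSphere power-on calls; this sketch just prints the sequence.
from graphlib import TopologicalSorter

# Hypothetical three-tier app: the web tier depends on the app tier,
# which depends on the database tier.
DEPENDENCIES: dict[str, set[str]] = {
    "web-vm": {"app-vm"},
    "app-vm": {"db-vm"},
    "db-vm": set(),
}

def run_recovery_plan(deps: dict[str, set[str]]) -> None:
    """Power on VMs at the recovery site, dependencies first."""
    for vm in TopologicalSorter(deps).static_order():
        print(f"powering on {vm} at the recovery site")

run_recovery_plan(DEPENDENCIES)  # db-vm, then app-vm, then web-vm
```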

Obviously, SRM only works within a homogeneous VMware environment, but the remote site needn't be owned and operated by the enterprise. Although a colocation facility is a natural choice for a secondary installation, any vCloud service, dedicated or multitenant, would work. It gets even easier: several providers already sell "canned" SRM services, which at least one shamelessly terms DR-as-a-service, or DRaaS.

Hypervisor-based VM replication integrated with array-based data replication, a configuration engine that can link multiple virtual workloads and data sets into a single DR policy, support for per-application RPO and RTO requirements, and automated failover and failback: these are the key features of a new generation of automated DR tools built for the VM era. Like so much other virtualization technology, VMware sets the bar quite high, but it doesn't own the market. Although SRM should definitely be on your DR automation short list, we'll explore some alternatives next time.
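One parting illustration before then. Stripped of vendor trappings, per-application RPO enforcement is a policy comparison: is the newest replicated copy fresh enough for this application? The sketch below, plain Python with invented names rather than any shipping product's API, shows the idea.

```python
# Illustrative only: compare each application's replication lag against
# its RPO and flag violations. Real DR suites also schedule replication
# and test failover; this shows just the policy check.
from dataclasses import dataclass

@dataclass
class DrPolicy:
    app: str
    rpo_seconds: int  # maximum tolerable data loss, as time
    rto_seconds: int  # maximum tolerable downtime

def check_rpo(policy: DrPolicy, replication_lag_seconds: int) -> bool:
    """True if the last replica is fresh enough to meet the app's RPO."""
    ok = replication_lag_seconds <= policy.rpo_seconds
    status = "OK" if ok else "VIOLATION"
    print(f"{policy.app}: lag {replication_lag_seconds}s vs "
          f"RPO {policy.rpo_seconds}s -> {status}")
    return ok

# A tier-1 database gets a tight RPO; reporting can tolerate more loss.
check_rpo(DrPolicy("orders-db", rpo_seconds=300, rto_seconds=900), 120)
check_rpo(DrPolicy("reporting", rpo_seconds=86_400, rto_seconds=14_400), 3_600)
```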

About the Author

Kurt Marko

Contributing Editor

Kurt Marko is an InformationWeek and Network Computing contributor and IT industry veteran, pursuing his passion for communications after a varied career that has spanned virtually the entire high-tech food chain from chips to systems. Upon graduating from Stanford University with a BS and MS in Electrical Engineering, Kurt spent several years as a semiconductor device physicist, doing process design, modeling, and testing. He then joined AT&T Bell Laboratories as a memory chip designer and CAD and simulation developer. Moving to Hewlett-Packard, Kurt started in the laser printer R&D lab doing electrophotography development, for which he earned a patent, but his love of computers eventually led him to join HP’s nascent technical IT group. He spent 15 years as an IT engineer and was a lead architect for several enterprisewide infrastructure projects at HP, including the Windows domain infrastructure, remote access service, Exchange e-mail infrastructure, and managed Web services.
