For a technologist, one of the many beauties of virtualization is the decoupling of server OS from server hardware, an abstraction that enables treating a full, running OS like just so many bits that can be moved at will from system to system, as popularized by vMotion. The problem, though, is that applications are more than just the sum of processes running on a guest OS. They're also data--system configurations, application state, persistent data repositories--and for all but the most trivial applications, virtual machine migration works only when each physical system can access the same common storage pool. Of course, this isn't a limitation inside an enterprise data center, where networked storage arrays are the norm. But it does mean that moving virtualized applications between data centers, whether to other private facilities or to cloud services, requires careful attention to data replication.
Even if your DR arsenal includes, as we discussed last time, software that can automatically perform all the magic needed to re-create a complex multitier application at another facility (for example, instantiate new VMs, configure multiple dependent servers, and validate application performance against a set of test criteria), you still need to ensure that application data is consistent across locations. Depending on the DR recovery point objective--how much data you can afford to lose--this can mean doing anything from scheduled backup snapshots to continuous data replication, tasks that have traditionally been the domain of sophisticated array hardware running special-purpose data protection software, where an array in the primary location mirrors information to the secondary site. If you want the secondary to be in the cloud, you're in for even more work bringing up a cloud storage gateway, since few cloud services externally expose standard network storage protocols like NFS, CIFS, or iSCSI.
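To make the snapshot-versus-continuous trade-off concrete, here is a minimal sketch (with hypothetical numbers, not vendor figures) of how the replication strategy bounds the recovery point objective: with scheduled snapshots, the worst-case data loss is the whole snapshot interval, while continuous replication shrinks it to the in-flight replication lag.

```python
def worst_case_data_loss_minutes(strategy: str,
                                 snapshot_interval_min: float = 0.0,
                                 replication_lag_sec: float = 0.0) -> float:
    """Return the worst-case window of lost data, in minutes."""
    if strategy == "scheduled_snapshots":
        # A failure just before the next snapshot loses the whole interval.
        return snapshot_interval_min
    if strategy == "continuous":
        # Only data still in flight to the secondary site is at risk.
        return replication_lag_sec / 60.0
    raise ValueError(f"unknown strategy: {strategy}")

# Nightly snapshots: up to 24 hours of changes can be lost.
print(worst_case_data_loss_minutes("scheduled_snapshots",
                                   snapshot_interval_min=24 * 60.0))
# Continuous replication with roughly 5 seconds of lag.
print(worst_case_data_loss_minutes("continuous", replication_lag_sec=5.0))
```

The point of the arithmetic: tightening the RPO from hours to seconds is what pushes organizations from scheduled snapshots toward continuous replication.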
Data Replication, Meet The Hypervisor
None of these complexities quite fits with our quest for a simple, convenient, and, yes, cheap cloud-based DR-as-a-service, or DRaaS. It would be so much easier if we could just add a virtual appliance that could automatically replicate application data and VM images, along with configurations and dependencies, from whatever type of storage we happen to be using, whether a high-end Fibre Channel SAN or pedestrian NFS NAS, to a cloud service where it could all be reconstituted.
Well, that's exactly the goal that inspired Zerto to create its VMworld award-winning product.
But DR for virtualized environments shouldn't just match the capabilities of purely physical deployments. No, according to Zerto co-founder and CEO Ziv Kedem, it should be much better: more reliable, cheaper, and more repeatable. Kedem also understands that there's more to DR than just remotely copying data. "There's no business value in replicating data," he says. "Obviously you need the data, but people want the application." To that end, Zerto introduced what it says is the first hypervisor-based replication technology that takes advantage of virtualization's abstraction layer by moving replication from the realm of the physical--disk arrays and storage appliances--to the virtual.
DR Is More Than Just Replication
Zerto's product has two components: a management and control console that integrates into vCenter (and yes, this is a VMware-only product for now) and the actual replication software appliance that runs on each physical host. Data replication is continuous rather than based on periodic snapshots, and yet Kedem says there's "zero impact on application performance."
Furthermore, running the replication engine on the hypervisor means it's nonintrusive ("We don't disrupt anything," says Kedem) and works with whatever storage systems you're already using. On the management side, since Zerto's software plugs into vCenter, it has full and constant visibility into all the guests, virtual networks, and interfaces on each physical host, and can thus incorporate new systems as they're added and track VMs should vMotion move things around.
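The value of that vCenter visibility can be sketched in a few lines. This is an illustrative model, not Zerto's implementation: the event names loosely mirror vSphere's inventory events (such as VmMigratedEvent), and the tracker simply keeps the VM-to-host mapping current so replication follows a guest wherever vMotion puts it.

```python
class ReplicationTracker:
    """Toy inventory tracker: keeps replication pointed at the right host."""

    def __init__(self):
        self.vm_to_host = {}          # which physical host runs each VM

    def on_vm_created(self, vm: str, host: str) -> None:
        # New guests are picked up automatically as vCenter reports them.
        self.vm_to_host[vm] = host

    def on_vm_migrated(self, vm: str, dest_host: str) -> None:
        # vMotion moved the VM; hand replication off to the appliance
        # on the destination host so protection continues uninterrupted.
        self.vm_to_host[vm] = dest_host

tracker = ReplicationTracker()
tracker.on_vm_created("db-1", "esx-host-a")
tracker.on_vm_migrated("db-1", "esx-host-b")
print(tracker.vm_to_host["db-1"])  # esx-host-b
```

Because the mapping is driven by hypervisor events rather than storage topology, no reconfiguration is needed when VMs move or new hosts join.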
To handle DR for multitier applications, Zerto's software features something it calls Virtual Protection Groups, essentially collections of VMs, their related resources (virtual disks, LANs, interfaces), and dependencies that must operate and be restored as a consistent unit. The beauty of hypervisor-based replication is that these complex application bundles need not all use the same shared storage pool or be replicated as a unit. For example, if the front-end Web servers don't store any persistent user data and their code hasn't changed, there's no need to replicate their disk images. As with other DR automation products, like VirtualSharp, Zerto is scriptable, and IT can test and validate application performance of the replicas. Says Kedem, "We want to make testing as easy as pushing a button."
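The Virtual Protection Group idea can be modeled as a simple data structure. The sketch below is hypothetical (the names and fields are illustrative, not Zerto's actual schema); it shows the key property described above: VMs are grouped for consistent recovery, but a stateless tier can opt out of disk replication.

```python
from dataclasses import dataclass, field

@dataclass
class ProtectedVM:
    name: str
    virtual_disks: list
    replicate_disks: bool = True   # stateless tiers can skip disk replication

@dataclass
class VirtualProtectionGroup:
    """VMs and resources that must fail over as one consistent unit."""
    name: str
    vms: list = field(default_factory=list)

    def replication_set(self) -> list:
        """Disks that actually need continuous replication."""
        return [disk for vm in self.vms if vm.replicate_disks
                     for disk in vm.virtual_disks]

vpg = VirtualProtectionGroup("crm-app", vms=[
    ProtectedVM("web-1", ["web-1.vmdk"], replicate_disks=False),  # stateless
    ProtectedVM("app-1", ["app-1.vmdk"]),
    ProtectedVM("db-1", ["db-1-os.vmdk", "db-1-data.vmdk"]),
])
print(vpg.replication_set())  # web tier's unchanged image is excluded
```

The group still recovers as a unit; only the replication traffic for the unchanging front end is saved.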
Put It Together And You Get DRaaS
Although Zerto's initial offering is aimed at larger enterprises with two or more data centers, Kedem says that SMBs, many of which don't have alternate sites, are an untapped market for sophisticated yet easy-to-use DR products. The company is targeting organizations that are too small to afford a DR site but too big to risk the kind of damage an extended outage could inflict. For them, cloud services are a perfect fit, and Zerto is working on a follow-up product designed for service providers. Terremark was an early adopter, adapting Zerto's initial enterprise-focused product as part of its cloud-based DR service. Zerto's update, which is in late beta with a public launch later this summer, is squarely aimed at being the foundation for public cloud DRaaS offerings, with about 20 providers already in various stages of testing, according to Kedem.