EMC Updates RecoverPoint SAN CDP/Replication Engine
The new 3.0 version of its RecoverPoint CDP/Replication appliance extends the technology EMC acquired with Kashya in 2006. The new version simultaneously provides local replication to a CDP volume and journal on the same Fibre Channel SAN as the primary storage, and remote asynchronous replication via IP to another array. Unlike array-based replication options, the source and destination arrays need not be the same type, or even from the same vendor.
In a typical RecoverPoint installation, a host agent or intelligent Fibre Channel switch splits all write requests for protected volumes, sending one copy to the primary disk array and a duplicate to the RecoverPoint Fibre Channel appliance. The appliance then performs the replication and maintains its journals for CDP. With version 3.0, EMC can also put the splitter in the controller of a Clariion CX3 array, extending RecoverPoint protection to iSCSI hosts and to servers running operating systems, like VMware ESX, that EMC doesn't provide splitter software for.

Local replication uses a journal to allow a user to restore or access a volume, or set of volumes, at any point in time. When you want to mount an image, you'll find the various restore points annotated with VSS snapshots, SQL Server VDI backup points, and other events that indicate preferred recovery points, typically where a database has been made consistent. While remote replication uses a similar journal structure, users typically reduce the granularity to a snapshot every few minutes.
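RecoverPoint's splitter and journal are proprietary, but the general concept is easy to sketch. The short Python example below is a conceptual illustration only, not EMC code; every class and method name is hypothetical. It shows a splitter duplicating each write to a primary volume and a journal, and a journal that can rebuild a volume image at any earlier timestamp, with bookmarks standing in for annotated recovery points such as a VSS snapshot.

import time

class CdpJournal:
    """Keeps every journaled write so a volume can be rolled back to any timestamp."""

    def __init__(self):
        self.entries = []        # list of (timestamp, block, data), in write order
        self.bookmarks = {}      # label -> timestamp, e.g. a VSS snapshot event

    def record(self, block, data, timestamp=None):
        self.entries.append((timestamp or time.time(), block, data))

    def bookmark(self, label, timestamp=None):
        """Annotate a preferred recovery point (e.g. 'SQL Server VDI backup')."""
        self.bookmarks[label] = timestamp or time.time()

    def image_at(self, timestamp):
        """Rebuild the block map as it looked at the given point in time."""
        image = {}
        for ts, block, data in self.entries:
            if ts > timestamp:
                break                # entries are in time order; stop here
            image[block] = data      # later writes overwrite earlier ones
        return image

class WriteSplitter:
    """Sends each write to the primary volume and mirrors it into the journal."""

    def __init__(self, primary_volume, journal):
        self.primary = primary_volume    # dict standing in for the production LUN
        self.journal = journal

    def write(self, block, data):
        self.primary[block] = data       # normal I/O path to the primary array
        self.journal.record(block, data) # duplicate write feeds CDP/replication

In use, you would write through the splitter, call bookmark() when an application event makes the data consistent, and later call image_at() with a bookmark's timestamp to mount a point-in-time view, which is the behavior the annotated restore points described above provide.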
To minimize bandwidth usage, RecoverPoint uses data compression, de-duplication, and multiple-update elimination, which sends only the latest change to a block that's been updated several times. You can also set priorities so the recovery group of volumes used by a mission-critical application gets replicated ahead of a less important app when the link gets busy.
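The multiple-update elimination and priority ideas can also be sketched briefly. The Python below is again a hypothetical illustration under stated assumptions (block data is bytes, lower priority numbers are more critical), not EMC's actual algorithm: repeated writes to the same block are coalesced so only the latest change is compressed and sent, and higher-priority recovery groups are drained first.

import zlib
from collections import OrderedDict

class RecoveryGroup:
    """A set of volumes replicated together, with a replication priority."""

    def __init__(self, name, priority):
        self.name = name
        self.priority = priority      # lower number = more critical application
        self.pending = OrderedDict()  # block -> latest pending data only

    def queue_write(self, block, data):
        # Multiple-update elimination: a newer write to the same block simply
        # replaces the older pending payload, so only the latest change is sent.
        self.pending[block] = data

    def drain(self):
        """Return compressed pending changes and clear the queue."""
        batch = [(block, zlib.compress(data)) for block, data in self.pending.items()]
        self.pending.clear()
        return batch

def replicate_cycle(groups, send):
    """Ship pending changes over the WAN, mission-critical groups first."""
    for group in sorted(groups, key=lambda g: g.priority):
        for block, payload in group.drain():
            send(group.name, block, payload)

The design choice illustrated here is that ordering happens per recovery group rather than per write, which is what lets a busy link favor the mission-critical application without interleaving and delaying its updates behind a less important one.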
The GUI provides an easy-to-use dashboard that not only graphically indicates the state of your system and any hosts with access to restored volumes or retrospective views of volumes at a previous point in time, but also displays graphs of bandwidth utilization, time and data lags, and other useful tuning information. It also provides one-click failover to the local or remote replica.
So there's no need to choose between CDP and remote replication; you get both in one package.