Oct 17, 2011
Tying Together Data Centers for App Availability
The drive for zero downtime in critical business applications has prompted some organizations to link two data centers so that applications in one can fail over to the second in case of a problem or outage in the first. The advent of server virtualization technologies such as VM migration makes this option more feasible. Some organizations go a step further, running the same application simultaneously in two data centers by building a data center interconnect (DCI).
While many architectural decisions are involved in such a deployment, perhaps the most critical is how the two data centers are linked via the DCI. Keeping the application and virtualization software synchronized requires very low latency between the two data centers, often just a few milliseconds. This requirement drives the architectural choices available to IT and data center designers when building a DCI.
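That latency budget translates directly into a distance limit. As a rough illustration (not from the article), the sketch below estimates how far apart two data centers can be for a given round-trip budget, assuming light travels through fiber at roughly two-thirds the speed of light in a vacuum (about 200,000 km/s) and ignoring switching and queuing delays:

```python
# Illustrative sketch: maximum DCI fiber distance for a latency budget.
# Assumes ~200,000 km/s propagation in fiber; ignores equipment delay.

SPEED_IN_FIBER_KM_PER_MS = 200.0  # ~200,000 km/s = 200 km per millisecond

def max_dci_distance_km(rtt_budget_ms: float) -> float:
    """One-way fiber distance whose round trip fits the latency budget."""
    one_way_ms = rtt_budget_ms / 2
    return one_way_ms * SPEED_IN_FIBER_KM_PER_MS

# A 5 ms round-trip budget allows roughly 500 km of fiber each way.
print(max_dci_distance_km(5.0))  # -> 500.0
```

In practice the usable distance is shorter, since switches, encapsulation, and queuing all add delay on top of propagation.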
The applications connected by a DCI will need the Ethernet network to be extended between sites, which introduces numerous challenges, including latency issues and the possibility of creating forwarding loops that can flood, and effectively crash, the network. There are several solutions to these challenges, including the use of carrier services such as Virtual Private LAN Service (VPLS), but these solutions also come with their own limitations.
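The loop problem is easy to see in miniature. As an illustrative sketch (not from the article), the code below models a switch topology as an undirected graph and flags any redundant path; a real network would rely on a protocol such as Spanning Tree to block the loop rather than on an offline check like this:

```python
# Illustrative sketch: detecting a Layer 2 loop in a switch topology
# modeled as undirected links, using union-find cycle detection.

def has_loop(links: list[tuple[str, str]]) -> bool:
    """Return True if the undirected link set contains a cycle."""
    parent: dict[str, str] = {}

    def find(x: str) -> str:
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    for a, b in links:
        ra, rb = find(a), find(b)
        if ra == rb:        # both ends already connected:
            return True     # this link closes a loop
        parent[ra] = rb     # union the two segments

    return False

# Two data centers joined by two parallel DCI links form a loop:
print(has_loop([("dc1-sw", "dc2-sw"), ("dc2-sw", "dc1-sw")]))  # -> True
print(has_loop([("dc1-sw", "dc2-sw")]))                        # -> False
```

The switch names are hypothetical; the point is that the very redundancy a DCI is built for creates the loop, so something must logically break or manage it.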
For instance, while VPLS can be used to prevent loops in the carrier's network, it won't prevent loops from occurring in the customer's own network. VPLS may also introduce latency that can disrupt application availability. Customers may also want to employ techniques such as Multichassis Link Aggregation (MLAG), in which two or more Ethernet switches are logically bound into a single unit so that two physical Ethernet links can operate as one logical connection.
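Aggregated links like these typically spread traffic per flow rather than per packet, so that each flow stays on one member link and packets arrive in order. The sketch below is illustrative only: real switches hash packet header fields (MAC addresses, IPs, ports), and the hash function and field choice here are assumptions, not any vendor's actual algorithm:

```python
# Illustrative sketch: per-flow hashing across the member links of an
# aggregation group. Hash function and key fields are assumptions.

import zlib

def pick_member_link(src_mac: str, dst_mac: str, num_links: int) -> int:
    """Choose a member link index for a flow, keeping the flow on one link."""
    key = f"{src_mac}-{dst_mac}".encode()
    return zlib.crc32(key) % num_links

# The same flow always hashes to the same link (preserving packet order),
# while different flows can spread across the group.
a = pick_member_link("00:aa", "00:bb", 2)
b = pick_member_link("00:aa", "00:bb", 2)
print(a == b)  # -> True
```

This per-flow behavior is why aggregation adds capacity and redundancy without reordering traffic within any one conversation.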
Other options include the use of dark fiber and Dense Wavelength Division Multiplexing (DWDM), both of which offer very fast, low-latency connections. While dark fiber and DWDM can be expensive, they also provide the best possible connection for a DCI.