Cost and complexity are holding most of them back. "DR is a bear to get people to spend money on and difficult to justify--except right after a data loss," says one respondent.
So how best to overcome the obstacles to continuity planning? First, get the business to drive the project, enabled by IT. Then leverage the latest technologies; virtualization, in particular, will boost ROI, cut costs, and improve resource efficiency.
There are all the usual reasons: Virtualization consolidates server infrastructures, thereby cutting power, cooling, floor space, and other costs. It lets you reduce the number of servers you use so that the data center doesn't sprawl out of control. It also provides quick provisioning capabilities so you can respond to project requests faster and meet sporadic utilization spikes.
In addition, hardware is advancing faster than software designers can keep up with. As a result, applications underutilize the processor and memory capacity available in most of the equipment out there. Take a quad-core server with four or six sockets: That's more than a single instance of Exchange 2007 can utilize. Therefore, it's beneficial to virtualize; even if you get only a 2-for-1 consolidation ratio, it's still better than dedicating expensive hardware to applications that can't take advantage of it.
Virtualization can also solve high-availability issues on the local LAN, minimizing or even eliminating downtime brought on by faulty hardware. Yesterday, when a physical server went south, rebuilding could take two hours at best, more likely four or more. Virtualization can leverage high-availability technologies and put a server back in production in minutes. With the right technology in place, you can even eliminate downtime altogether. Here are three HA options:
• VM restart on another host. Because virtual machines are a collection of files that aren't bound to any specific hardware, if the host fails, it takes only minutes to power on that VM on another server.
• Traditional clustering. You can extend clustering technology to VMs, and setup is often easier than with physical nodes, especially the networking.
• Fault tolerance. When enabled, this technology runs primary and secondary VMs in lockstep. That means every process, task, and operation is executed on both VMs. They operate on separate hosts, so in the event of a primary VM failure, the secondary picks up exactly where the primary failed with no interruption.
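The first option above, restarting VMs on a surviving host, can be sketched in a few lines. This is an illustrative model only--the `Host` class, the `restart_orphaned_vms` helper, and the least-loaded placement heuristic are assumptions for demonstration, not any vendor's actual HA API.

```python
# Minimal sketch of the "VM restart on another host" pattern: when a host
# fails, its VMs (which are just files on shared storage) are powered on
# elsewhere. Names and the placement heuristic are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    alive: bool = True
    vms: list = field(default_factory=list)  # names of VMs on this host

def restart_orphaned_vms(hosts):
    """Re-home VMs from dead hosts onto the least-loaded surviving host."""
    survivors = [h for h in hosts if h.alive]
    if not survivors:
        raise RuntimeError("no surviving hosts to restart VMs on")
    moved = []
    for failed in (h for h in hosts if not h.alive):
        while failed.vms:
            vm = failed.vms.pop()
            target = min(survivors, key=lambda h: len(h.vms))
            target.vms.append(vm)  # "power on" the VM from shared storage
            moved.append((vm, target.name))
    return moved
```

In a real cluster the hypervisor's HA agent does this automatically; the point of the sketch is that because nothing ties a VM to specific hardware, recovery is a placement decision plus a power-on, not a rebuild.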
Same As It Ever Was
Another response of concern in our survey: 21% of respondents say they'll maintain 80% of their BC/DR systems in physical form, with 20% of the environment virtualized. An additional 34% say they won't leverage virtualization in BC/DR, period.
So apparently more than half of respondents are still building dedicated BC/DR facilities stocked with the same hardware as in production data centers--doing regular backups, making clone images, sending data off site, and (in theory) keeping a fairly accurate runbook of how they'll recover. But as new production servers are purchased, the old images and backups stop working well, and runbooks inevitably fall out of date.
"It's not a big secret that DR/BC is a pain in the ass," says one respondent. "You can try to co-locate, but then you essentially have to double up all of your equipment. Virtualization makes full co-location more palatable, ... but there's still a significant cash outlay involved." The best approach is to create a plan where both locations contribute to your production environment, the respondent says.
Another benefit of virtualization in BC/DR is portability: The ability to literally back up your entire environment and carry it with you is revolutionary.
The fact that VMs are regular files means you can back up the entire state of the system. This is reason enough to use virtualization in BC/DR, even if you totally ignore all the other benefits. Couple that with the ability to easily extend your network, test your BC/DR plans, and support more applications, and you end up with a strategy that saves money and lets you leverage the investment on a day-to-day basis.
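To make the point concrete, here's what "VMs are regular files" buys you: a full-state backup can be an ordinary directory copy. The paths, file names, and `back_up_vm` helper below are illustrative assumptions, not a product feature.

```python
# Sketch: a VM's entire state (disk image, config, snapshots) lives in a
# folder, so backing it up is a plain file copy. Paths are illustrative.
import shutil
from pathlib import Path

def back_up_vm(vm_dir: Path, offsite_dir: Path) -> Path:
    """Copy an entire VM folder to the off-site store and return the copy."""
    dest = offsite_dir / vm_dir.name
    shutil.copytree(vm_dir, dest)  # whole VM state travels as plain files
    return dest
```

Contrast that with a physical server, where "backing up the system" means imaging tools, driver-sensitive restores, and matching hardware at the recovery site.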
When we asked respondents with less than 80% BC/DR virtualization why they're holding back, nearly half (45%) say some vendors don't support their apps on a VM. Meanwhile, 39% say they want to mimic exactly the same setups they have in the production environment, 27% say it would be too costly to virtualize their BC/DR sites, and 20% lack expertise.
Frankly, we find these responses amazing. To those looking to mimic the production environment, we say: There's no reason you shouldn't already be virtualizing in production. BC/DR initiatives are a great opportunity to get exposure to the technology, and you can do it with older servers.
For those who say some vendors don't officially support their apps on VMs, we have one question: How many of you know your critical applications better than the vendor's tech support does? We can understand the argument against it in production environments--you want to make sure critical apps are supported, no questions asked. But your BC/DR setup is a great place to test the application in a virtualized infrastructure.
We have yet to come across an application that absolutely can't be virtualized. For maybe 2% or 3%, the effort isn't worthwhile, but in general, vendors are simply trying to avoid having to support virtualized instances.
And for those who say it would be too costly to virtualize, if you dedicate Tier 1 shared storage, top-of-the-line equipment and network devices, and enterprise-class virtualization software to your project, it will cost a significant amount of money. But it's likely still less expensive than mimicking what you have in production.
If you budget and design properly, you'll likely conclude that the best way to maintain a BC/DR facility is to make it an extension of the data center when everything is fine and a BC/DR site when an outage occurs. Think about it: Instead of spending money to build out only your production data center, why not budget to build out your production and BC/DR co-locations simultaneously? If a central data center would cost you $1 million to build, for $1.3 million you could get a BC/DR facility as well--one that could also off-load some operations, say for peak times or maintenance.
By making your virtualized BC/DR site an extension of your data center, you can also use your existing virtualization management products, for example, to control the process of moving copies of apps with sensitive data around to different locations, a must if your organization is subject to compliance audits.
We were pleasantly surprised to see that 52% of respondents leveraging virtualization would consider using their BC/DR equipment for testing, development, and production. However, 20% limit that to just testing and development, while 22% say that BC/DR equipment should be on standby for any interruption in business and should have no other purpose.
To those who want to keep the equipment on standby, we ask: What's the downside of putting it to use? Again, the deciding factor here is virtualization--it's easier to use this equipment if you're virtualizing as part of your BC/DR strategy because you can have VMs that are intended for BC/DR available and powered down while you're using the hardware for less mission-critical functions. In the event of an outage, you power down the unnecessary VMs and power up the necessary ones.
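That flip--powering down the day-to-day VMs and powering up the pre-staged recovery VMs--amounts to a simple state change. The `VM` class and role labels below are hypothetical, for illustration only; in practice this would be a runbook script against your virtualization management tools.

```python
# Hypothetical sketch of the standby flip: in an outage, power off the
# less-critical day-to-day VMs to free the hardware, then power on the
# pre-staged BC/DR VMs. Class and role names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class VM:
    name: str
    role: str         # "bcdr" = pre-staged recovery VM, "dev" = day-to-day use
    powered_on: bool

def fail_over(vms):
    """Flip the site from everyday use to BC/DR mode; return what's running."""
    for vm in vms:
        if vm.role == "dev":
            vm.powered_on = False  # free capacity for recovery workloads
        elif vm.role == "bcdr":
            vm.powered_on = True   # bring recovery VMs online
    return sorted(vm.name for vm in vms if vm.powered_on)
```

The same hardware earns its keep every day, and recovery is a scripted power cycle rather than a procurement exercise.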
Now, we understand that old habits die hard, but it pays for CIOs to do what it takes to increase their organization's comfort with using virtualization. It has changed the landscape for good, and IT must adapt. Instead of being dragged along, make the most of this paradigm-shifting technology to solve what's been a problem for decades. IT groups that employ virtualization in a smart BC/DR setup can rest assured that they'll be ready for any disaster.
Elias Khnaser is practice manager for virtualization and cloud computing at Artemis Technology, an integrator focused on aligning business and IT.