Disaster recovery and business continuity are made easier by virtualization, but cost and complexity are holding users back.
When we set out to look at the use of server and desktop virtualization in business continuity and disaster recovery strategies, the last thing we expected was to have to make a case for adopting a BC/DR plan. But of the 681 business technology professionals who responded to our InformationWeek Analytics Business Continuity/Disaster Recovery Survey, 17% have no BC/DR plan, and 20% are still working on one.
Cost and complexity are holding most of them back. "DR is a bear to get people to spend money on and difficult to justify--except right after a data loss," says one respondent.
So how best to overcome the obstacles to continuity planning? First, focus on getting the business to drive the project, enabled by IT. Then leverage the latest technologies; virtualization, in particular, boosts ROI, cuts costs, and improves resource efficiency.
There are all the usual reasons: Virtualization consolidates server infrastructures, thereby cutting power, cooling, floor space, and other costs. It lets you reduce the number of servers you use so that the data center doesn't sprawl out of control. It also provides quick provisioning capabilities so you can respond to project requests faster and meet sporadic utilization spikes.
In addition, hardware is advancing faster than software designers can keep up. As a result, applications underutilize the processor and memory capacity available in most of the equipment out there. Take a quad-core server with four or six sockets: that's more than a single instance of Exchange 2007 can utilize. Therefore, it's beneficial to virtualize; even if you get only a 2-for-1 consolidation ratio, that's still better than dedicating expensive hardware to applications that can't take advantage of it.
Virtualization can also be used to solve high-availability issues on the local LAN, minimizing or even eliminating the downtime brought on by faulty hardware. Yesterday, when a physical server went south, rebuilding could take two hours at best, more likely four or more. Virtualization can leverage high-availability technologies and put a server back in production in minutes. With the right technology in place, you can even eliminate downtime altogether. Here are three HA options:
• VM restart on another host. Because virtual machines are a collection of files that aren't bound to any specific hardware, if the host fails, it takes only minutes to power on that VM on another server.
• Traditional clustering. You can extend clustering technology to VMs. Setup is often easier than with physical nodes, especially when dealing with networking.
• Fault tolerance. When enabled, this technology runs primary and secondary VMs in lockstep: every process, task, and operation is executed on both VMs. They run on separate hosts, so if the primary VM fails, the secondary picks up exactly where the primary left off, with no interruption.
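Because VMs are just files plus metadata, the restart-on-another-host option boils down to a scheduling decision: pick a surviving host and power the VM on there. Here's a minimal sketch in Python of that decision logic, assuming a simple VM-to-host placement map; the host names and the least-loaded policy are illustrative, not any particular hypervisor's HA feature, which handles this automatically.

```python
def failover(vm_placement, failed_host, surviving_hosts):
    """Reassign every VM on failed_host to the least-loaded surviving host.

    vm_placement maps VM name -> host name; surviving_hosts lists the
    hosts still available. Returns the updated placement map.
    """
    for vm, host in vm_placement.items():
        if host == failed_host:
            # Least-loaded policy: pick the surviving host currently
            # running the fewest VMs (counts update as we reassign).
            target = min(surviving_hosts,
                         key=lambda h: list(vm_placement.values()).count(h))
            vm_placement[vm] = target
    return vm_placement


# Example: hostA dies; its two VMs are spread over the survivors.
placement = {"mail": "hostA", "web": "hostA", "db": "hostB"}
failover(placement, "hostA", ["hostB", "hostC"])
```

The point of the sketch is that no data moves: only the placement record changes, which is why the real-world restart takes minutes rather than the hours a physical rebuild would.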