How To Recover Your Data In Minutes
By George Crump
InformationWeek
For many legacy backup processes, recovering data for a large, mission-critical application in less than a day is challenge enough. Taken literally, zero downtime implies no recovery at all: any recovery process takes more than zero time and so breaks the standard. How, then, do you create a near-zero-downtime environment without breaking the bank on sophisticated clustering technologies?
First, my guess is that most respondents, even those who answered zero, would be willing to live with a few minutes of downtime; when most people say zero, that's what they mean. Even a few minutes of downtime, though, is difficult to achieve with legacy backup technologies. There are, however, new approaches that allow even small and midsize businesses to return applications to a full and ready state within a few minutes.
The first key ingredient in recovering data within a few minutes is that data simply can't be copied from the backup device back to the source server. No matter how fast the network connection, the time required to make that copy will, in almost every case, blow past a recovery objective of only a few minutes.
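A quick back-of-the-envelope calculation shows why. The sketch below is illustrative only; the data size, link speed, and efficiency figure are assumptions, not numbers from any vendor or survey.

```python
# Rough restore-time estimate: even a fast network link can't move a
# large data set back to the source server within a minutes-level
# recovery objective. All figures are illustrative assumptions.

def copy_restore_minutes(data_tb: float, link_gbps: float,
                         efficiency: float = 0.7) -> float:
    """Minutes to copy `data_tb` terabytes over a `link_gbps` link,
    assuming only `efficiency` of the raw line rate is achieved."""
    data_bits = data_tb * 1e12 * 8                      # TB -> bits
    seconds = data_bits / (link_gbps * 1e9 * efficiency)
    return seconds / 60

# A hypothetical 2 TB database over 10 Gb Ethernet:
print(f"{copy_restore_minutes(2, 10):.0f} minutes")     # ~38 minutes
```

Even on 10 Gb Ethernet, a modest 2 TB data set takes the better part of an hour just to move, before the application can even start.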
This means the backup application must be able to present the data to the source server in its native state, so that the application the server hosts can access the data directly. It also almost certainly means that the backup device storing the data needs to be a disk-based system.
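One common way to present data in place is for the disk-based backup server to export the recovery point over NFS and let the source server mount it directly. The sketch below illustrates that idea under stated assumptions: the host names, paths, and export options are hypothetical, and real products wrap this in their own recovery workflows.

```python
# Minimal sketch of presenting a backup in its native state instead of
# copying it back: the backup server exports the recovery point over
# NFS; the source server mounts it where the application expects data.
# Hosts and paths below are hypothetical.
import subprocess

BACKUP_EXPORT = "/backups/crm-db/latest"   # native-format files on backup disk
APP_SERVER = "10.0.0.21"                   # server that will consume the data

# On the backup server: export the recovery point to the app server only.
subprocess.run(
    ["exportfs", "-o", "rw,no_root_squash", f"{APP_SERVER}:{BACKUP_EXPORT}"],
    check=True,
)

# On the app server, the corresponding mount would be:
#   mount -t nfs backup01:/backups/crm-db/latest /var/lib/crm
# after which the application reads its data in place, with no bulk copy.
```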
Even directly presenting the data to the application may still not be fast enough to meet an objective of zero, or a few minutes of, downtime. This is especially true if the physical server hosting the application has failed. A physical server failure means a standby system must be put in place or, more likely, ordered before there is anything that can actually access the data.
Some vendors have overcome this problem by building into their applications the ability to spawn a virtual machine and recover the failed host, plus its data, into that VM, all without moving data. This capability brings zero downtime, or minutes of downtime, to the masses: you no longer need to buy a sophisticated cluster to achieve those goals.
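To make the mechanics concrete, here is a minimal sketch of that idea on a KVM/libvirt host, assuming the backup is stored as a qcow2 image; the VM name, paths, and sizing are hypothetical, and this is not any particular vendor's implementation. A thin copy-on-write overlay absorbs writes so the backup itself stays untouched, and the VM boots directly against the backup storage.

```python
# Sketch of "instant VM recovery": boot a VM directly against the backup
# image, without copying data. A qcow2 overlay absorbs writes so the
# backup stays read-only. Assumes a KVM/libvirt host; names and paths
# are hypothetical.
import subprocess
import libvirt

BACKUP_IMAGE = "/backups/crm-db/latest.qcow2"    # read-only recovery point
OVERLAY = "/recovery/crm-db-overlay.qcow2"       # writable layer, created now

# 1. Create a copy-on-write overlay backed by the backup image.
subprocess.run(
    ["qemu-img", "create", "-f", "qcow2",
     "-b", BACKUP_IMAGE, "-F", "qcow2", OVERLAY],
    check=True,
)

# 2. Define and start a VM whose disk is the overlay; no data is moved.
DOMAIN_XML = f"""
<domain type='kvm'>
  <name>crm-db-recovered</name>
  <memory unit='GiB'>8</memory>
  <vcpu>4</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='{OVERLAY}'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <interface type='network'><source network='default'/></interface>
  </devices>
</domain>
"""

conn = libvirt.open("qemu:///system")
dom = conn.defineXML(DOMAIN_XML)
dom.create()   # the application is back online in roughly boot time
```

The recovery time is essentially the VM's boot time, because the data never moves; a permanent restore can happen later, in the background, once the application is serving users again.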
The other advantage of this technique is that testing the recovery of a failed server can now be as easy as clicking a button. Being able to start an application, with its data, in a test mode almost instantly lets an organization become very confident in its ability to recover.
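A recovery test is the same overlay-backed boot, just fenced off from production. Continuing the hypothetical libvirt sketch above, an isolated virtual network keeps the test VM from ever colliding with the live server:

```python
# Sketch of a recovery *test*: boot the same overlay-backed VM on an
# isolated network so the test never touches production. Builds on the
# recovery sketch above; the network name is hypothetical.
import libvirt

ISOLATED_NET = """
<network>
  <name>recovery-test</name>
  <bridge name='virbr-rtest'/>
  <ip address='192.168.200.1' netmask='255.255.255.0'/>
</network>
"""  # no <forward> element: guests talk to each other, not to production

conn = libvirt.open("qemu:///system")
net = conn.networkDefineXML(ISOLATED_NET)
net.create()
# Then point the test VM's <interface> at network 'recovery-test'
# instead of 'default' before calling dom.create().
```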
Follow Storage Switzerland on Twitter
George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. Storage Switzerland's disclosure statement.