Cloud backup providers need to face the realities of Internet bandwidth and give customers a way to perform bulk transfer of data for both backup and recovery.

George Crump, President, Storage Switzerland

September 12, 2013

3 Min Read

In my last column, we discussed some of the limitations of cloud-based backup and why more cloud providers should offer some form of external, portable storage to overcome those challenges; I used tape technology as an example. The second, and potentially larger, problem with tapeless cloud backup is the recovery process. Vendors' claims that tape no longer has value are getting out of hand.

If you lose an entire server, the data deduplication that helps you during backup is not going to save you from an extended period of downtime while that data trickles back through the Internet. On a full recovery there is no local baseline to deduplicate against, and even if there were, most backup software can't perform a deduplicated recovery.
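To put rough numbers on that trickle, here's a back-of-the-envelope calculation in Python. The data-set sizes, link speeds, and 80% link-utilization figure are my own illustrative assumptions, not measurements from any provider:

```python
# Back-of-the-envelope WAN restore times for a full-server recovery.
# The data-set sizes, link speeds, and 80% utilization figure are
# illustrative assumptions, not numbers from any vendor.

def restore_hours(data_tb: float, link_mbps: float, utilization: float = 0.8) -> float:
    """Hours to pull data_tb terabytes over a link_mbps Internet link,
    assuming only `utilization` of nominal bandwidth is achievable."""
    bits = data_tb * 1e12 * 8                  # terabytes -> bits
    usable_bps = link_mbps * 1e6 * utilization
    return bits / usable_bps / 3600            # seconds -> hours

OVERNIGHT_HOURS = 24  # a tape shipped overnight arrives in ~a day, any size

for data_tb, link_mbps in [(1, 100), (5, 100), (10, 100), (10, 10)]:
    h = restore_hours(data_tb, link_mbps)
    print(f"{data_tb:>2} TB over {link_mbps:>3} Mb/s: {h:7,.0f} h "
          f"({h / OVERNIGHT_HOURS:,.1f}x overnight shipping)")
```

Even on a 100 Mb/s business link, a 10 TB server takes well over a week to come back; a tape cut at the provider and shipped overnight beats the wire by an order of magnitude at that size.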

In almost every case, having the provider create a tape and overnight it to you would be faster. And if, as I described in my last column, the provider gave you the ability to create your own local tape, you could go get the data yourself from your own storage. You would, in effect, have a backup of your backup, and in a disaster that is a good thing.

Cloud backup vendors claim to have an answer for this, too, in the form of in-cloud recovery: the ability to recover an entire server at the provider's location, on the provider's servers. I actually had one provider tell me, "Now you don't need tape. Who cares if it takes 30 days to replicate back to your data center?" Well, I do, for one, and I bet you do, too. Or at least you should.

First, most in-cloud recovery options provide absolutely no performance guarantee while the provider is hosting your application. You are supposed to just be happy it is running, even if performance is so bad you can't actually use it. How dare you ask how the application will perform, or whether there are any guarantees around it?

Second, most cloud providers are unable to help you with the network re-routing that will certainly be required. After all, the application is no longer in your data center, so how are you going to get your users to it?
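To make the re-routing problem concrete, here is a sketch of just its simplest piece: repointing a DNS record at the provider-hosted copy of the application. The update_dns_record callable is hypothetical (standing in for whatever API your DNS host exposes); real cutovers also involve VPNs, firewall rules, and client reconfiguration that no one-liner captures.

```python
# Sketch of the simplest "re-routing" step: repoint a DNS record at the
# provider-hosted recovery instance. update_dns_record is a hypothetical
# callable standing in for your DNS host's API, not a real library.

def fail_over_to_cloud(update_dns_record, zone: str, host: str,
                       cloud_ip: str) -> None:
    """Point host.zone at the provider's recovery instance.

    A short TTL must already be in place on the record; otherwise
    cached lookups keep sending users to the dead on-premises address.
    """
    update_dns_record(zone=zone, name=host, rtype="A",
                      value=cloud_ip, ttl=60)  # low TTL eases fail-back

# e.g. fail_over_to_cloud(dns_client.update, "example.com", "erp", "203.0.113.10")
```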

Third, you probably want that application back in your data center for a reason, or you would have hosted it in the cloud in the first place. The top reason is probably performance, and I'd bet security is another. In other words, you want your application recovered in your data center as soon as possible, not in 30 days.

I'm not against cloud recovery; it's great technology. But you only want to use it when you have to, and typically for as short a time as possible. The reality is that sending an entire server's data set back across the Internet, or several servers' worth in the case of a disaster, is going to take a while.

These technologies are best used together. For example, if you had a server failure, you could start the application in the cloud via in-cloud recovery while the provider put that server's data on tape and overnighted it to you. Then, if the provider's appliance were tape-enabled, you could load the bulk of the data from tape and do a quick changed-block recovery of only what changed after the tape was created. This would, however, require a changed-block recovery capability that only a few providers have.
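For the curious, here is a minimal sketch of what such a changed-block recovery might look like, assuming a fixed block size and a hypothetical fetch_changed_block() call to the provider; no vendor's actual API is implied:

```python
# Minimal sketch of a changed-block recovery: restore the bulk of a
# volume from the shipped tape image, then pull only the blocks that
# changed after the tape was cut. fetch_changed_block and the manifest
# format are hypothetical, not a real vendor API.

import hashlib

BLOCK_SIZE = 4 * 1024 * 1024  # 4 MB blocks; an assumed granularity

def block_fingerprints(path: str) -> list[bytes]:
    """Hash every block of the image just restored from tape."""
    prints = []
    with open(path, "rb") as f:
        while chunk := f.read(BLOCK_SIZE):
            prints.append(hashlib.sha256(chunk).digest())
    return prints

def patch_from_cloud(path: str, cloud_manifest: list[bytes],
                     fetch_changed_block) -> int:
    """Compare local block hashes with the provider's current manifest
    and overwrite only the blocks that differ; returns blocks patched."""
    local = block_fingerprints(path)
    patched = 0
    with open(path, "r+b") as f:
        for i, cloud_hash in enumerate(cloud_manifest):
            if i >= len(local) or local[i] != cloud_hash:
                f.seek(i * BLOCK_SIZE)
                f.write(fetch_changed_block(i))  # one block over the WAN
                patched += 1
    return patched
```

The appeal of this design is that the WAN only carries the delta: the tape does the heavy lifting, and the Internet handles the last few gigabytes instead of the first few terabytes.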

Cloud backup is a very valuable addition to most data protection strategies, and it's no longer limited to small businesses. But cloud backup providers need to face the realities of Internet bandwidth and provide their customers with a way to perform bulk transfer of data for both backup and recovery.

Learn more about cloud backup by attending the Interop conference track on Cloud Computing and Virtualization in New York from Sept. 30 to Oct. 4.

About the Author

George Crump

President, Storage Switzerland

George Crump is president and founder of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. With 25 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS, and SAN. Prior to founding Storage Switzerland, he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration, and product selection. George is responsible for the storage blog on InformationWeek's website and is a regular contributor to publications such as Byte and Switch, SearchStorage, eWeek, SearchServerVirtualization, and SearchDataBackup.
