Your big data disaster recovery strategy must consider size, cost, and the unpredictable, but inevitable, need to access old data.
Protecting a big data archive is different from protecting big data analytics--or it should be. While both types of big data are, well, big, archives typically have the larger capacity. Full backups are not an option (too much data), and a disk-only backup strategy may not be realistic (too expensive). Big data archives must have data protection embedded into the architecture, because the data sets are too large to protect with a separate process.
As we discussed in a recent column, big data archive environments store dozens of petabytes of content, often video, audio, or images. That content can sit idle for years until an event suddenly makes a portion of it active for a period of time. A common example: when celebrities die or get into trouble, video and photos from their past--infrequently accessed until then--become heavily requested. The size of these environments makes backup almost impossible, but the likelihood that the data will be accessed makes protection a top requirement.
The data set's size and access pattern mean that a disk-only archive store may not be practical. The challenge is not capacity; as we discuss in our article "Is Your File Server Choking," object-based storage from cloud infrastructure suppliers has eliminated the key scaling issues facing traditional file systems, providing near-infinite capacity and handling trillions of files. The challenge is the cost of providing a near-infinite-capacity, disk-based storage system and backing it up via replication to a second one.
As a result, tape has a role, and potentially a prominent one, in big data archives. Archive file systems, like those offered by members of the Active Archive Alliance, can support tape as an integral part of the environment. A large percentage of the active data set can still be kept on disk for instant access, but each object can also be copied to a tape or two upon creation, with the second tape used for backup and disaster recovery. "Backup" thus happens as data is created or modified, not as a single, separate, nightly process. As the disaster-recovery tape fills, it can be moved to a secure offsite location. Eventually, tape can become the only location of that data, with the disk copy scrubbed to reclaim space.
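The write-time protection flow described above can be sketched in a few lines. This is a hypothetical illustration, not a real archive file system API; the class and method names are our own invention:

```python
# Hypothetical sketch of write-time archive protection: each new object is
# kept on disk and immediately queued for two tape copies (one onsite, one
# for disaster recovery). Names here are illustrative, not a real product API.

from dataclasses import dataclass, field

@dataclass
class ArchiveTier:
    disk: set = field(default_factory=set)
    onsite_tape: set = field(default_factory=set)
    dr_tape: set = field(default_factory=set)

    def ingest(self, object_id: str) -> None:
        # Protection happens at creation time, not in a nightly backup window.
        self.disk.add(object_id)
        self.onsite_tape.add(object_id)
        self.dr_tape.add(object_id)

    def scrub_cold(self, object_id: str) -> None:
        # Once cold, the object can live on tape only; disk space is reclaimed,
        # but only after both tape copies exist.
        if object_id in self.onsite_tape and object_id in self.dr_tape:
            self.disk.discard(object_id)

tier = ArchiveTier()
tier.ingest("clip-0001.mov")
tier.scrub_cold("clip-0001.mov")
print("clip-0001.mov" in tier.disk)     # False: now tape-only
print("clip-0001.mov" in tier.dr_tape)  # True: still protected offsite
```

The point of the sketch is the ordering: the tape copies are made at ingest, so the later scrub is safe by construction.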
This is not an "abandon disk" recommendation; it is a recommendation to be realistic. We suggest that you put as much data on disk as makes sense and can be afforded. Disk is needed for the data that will become hot, driven by the next world event. It is also needed for recently created data, since that is the most likely to be accessed.
Tape is needed to back up all the information and to cost-effectively store the petabytes of old information that won't be accessed. As we discuss in our article "Comparing LTO-6 to Scale Out Storage for Long-Term Retention," Linear Tape Open (LTO) now has transportability, thanks to the Linear Tape File System (LTFS), and LTO-6 brings massive capacity for pennies a GB.
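To put "pennies a GB" in perspective, a quick back-of-envelope calculation: LTO-6 cartridges hold 2.5 TB native per the published spec, and the $30 cartridge price used below is an illustrative assumption, not a quote:

```python
# Back-of-envelope LTO-6 media cost per GB.
# 2.5 TB native capacity is the published LTO-6 spec;
# the $30 cartridge price is an illustrative assumption.
cartridge_price_usd = 30.0
native_capacity_gb = 2500
cost_per_gb = cartridge_price_usd / native_capacity_gb
print(f"${cost_per_gb:.3f} per GB")  # about a penny per GB of media cost
```

Even if the cartridge price were doubled, the media cost stays in the low single pennies per GB, well under disk.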
When we are dealing with petabytes of data, moving it all at once--across even the best of networks--is not an option. Protecting these large data sets as they are being created or modified, and using the right combination of disk and tape, is going to allow these archives to store all the information required to make them useful for years to come.
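The arithmetic behind "not an option" is stark. Assuming an optimistic, dedicated 10 Gb/s link running at full line rate with no protocol overhead (an assumption that flatters the network):

```python
# Time to move 1 PB over a dedicated 10 Gb/s link at full line rate --
# an optimistic assumption with no protocol overhead or contention.
petabyte_bits = 1_000_000_000_000_000 * 8  # 1 PB expressed in bits
link_bps = 10_000_000_000                  # 10 Gb/s
seconds = petabyte_bits / link_bps
print(f"{seconds / 86400:.1f} days")       # roughly 9.3 days per petabyte
```

At dozens of petabytes, a bulk move runs into months, which is why protecting data incrementally as it is created is the only workable approach.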