Bluelock Recovery-as-a-Service allows midsized companies to avoid physical duplicates and store virtual machines at a distant location.
eMeter is a 200-employee unit of Siemens that produces software for electric, natural gas and water utility management systems. Its EnergyIP platform is meant to allow a utility to combine information from smart meters with information on the grid's operation to better serve customers.
With part of its development in India and part in Redwood City, Calif., eMeter turned to Bluelock cloud data centers in Indianapolis and Las Vegas to store the recovery copies of its systems. Under a Bluelock service launched in early May, virtualized copies of first-tier production systems were created and stored in a Bluelock data center, with a constant data feed from production systems linked to the same data center.
In the event of a disaster, the sleeping virtual machines would be woken up and data fed into them reflecting the last known point of data integrity.
That became necessary, recalled Pat O'Day, CTO of Bluelock, when an eMeter customer in India botched its attempt to install a needed energy management system last month. eMeter senior systems administrator Bryan Bond got a call in the middle of the night asking for help in getting it back up. He did so in three hours by activating the backup copy in a Bluelock cloud center and resurrecting its data feed, a task that would have taken four to five business days under the prior recovery system, O'Day recounted.
"A lot of midsized companies in the past didn't have access to this type of recovery system, due to cost," said O'Day in an interview. Bluelock's Recovery-as-a-Service (RaaS) relies on keeping an up-to-date copy of complete system files in the cloud, and replicating data at the hypervisor level.
The Bluelock recovery system places an agent on the primary system that watches the I/O stream to storage. When a change has been made in the configuration or any files of the primary system, that information is captured and sent to the recovery system. Likewise, as data is worked on by the system, a duplicate copy in the cloud is kept in sync. If the customer wants a guarantee of complete data recovery, the customer must invest in a high-speed network link that can transmit large amounts of data as a shutdown of a primary system approaches.
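The mechanism described above can be sketched in a few lines of Python. This is an illustrative model only, not Bluelock's actual implementation; the class and method names (`ReplicationAgent`, `RecoveryCopy`, `on_write`) are hypothetical stand-ins for an agent that intercepts writes to primary storage and forwards each change to the cloud copy.

```python
import time

class RecoveryCopy:
    """Stand-in for the duplicate system kept in the cloud data center."""
    def __init__(self):
        self.blocks = {}  # block_id -> latest data received from primary

    def apply(self, block_id, data, timestamp):
        # Each forwarded change overwrites the stale block, keeping
        # the recovery copy in sync with production.
        self.blocks[block_id] = data

class ReplicationAgent:
    """Hypothetical agent placed on the primary system. It watches the
    I/O stream to storage and ships every configuration or file change
    to the recovery site as it happens."""
    def __init__(self, remote):
        self.remote = remote  # link to the cloud recovery copy

    def on_write(self, block_id, data):
        # Capture the change and send it to the recovery system.
        self.remote.apply(block_id, data, timestamp=time.time())

remote = RecoveryCopy()
agent = ReplicationAgent(remote)
agent.on_write(7, b"config-change")   # a write on the primary...
print(remote.blocks[7])               # ...is now mirrored in the cloud copy
```

In practice the link between the two sides is the constraint: the more bandwidth a customer buys, the smaller the lag between a write on the primary and its arrival at the recovery site.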
Usually customers settle for some last-second loss of data in exchange for the ability to restore data integrity up to the approximate point of failure. Attempting to maintain real-time failover to a backup system at a distant site incurs steep costs. Bluelock RaaS is meant to be an alternative to that.
O'Day said companies used to come to Bluelock with a pilot workload and secondary production systems to run as get-acquainted exercises. Now new customers are getting acquainted by using the cloud as a recovery mechanism for a running production system. As they learn how it can function in this way, they gain confidence in their ability to use infrastructure-as-a-service for other purposes.
"At least half of our new business is coming to us" through the recently launched Recovery-as-a-Service, he said.
"Cloud services are not an experimental option. They're seen as an alternative to purchasing your own assets," he said.
Such a claim could be expected from a cloud services provider. Bluelock remains a small one compared to Amazon Web Services, Google or Microsoft. It closed its data center space in Salt Lake City after expanding space in Las Vegas. That leaves customers that are, let's say, in the path of Hurricane Sandy on the East Coast with basically one place to link to -- Las Vegas. It's not unheard of for a winter storm that shuts down the East Coast to also severely impact Indianapolis, Bluelock's headquarters and initial data center location.
Las Vegas is a long way from Chapel Hill, N.C., Washington, D.C., or Boston, conceded O'Day. But customers are better off tolerating greater latency in their backup system than having no backup system at all. In addition, they can engineer the data feed with as much communications line capacity as they care to purchase. "They can be as real time as they care to afford," he noted.
Journaling events at the recovery site allows data to be recovered for a period of up to two days prior to the occurrence of a failure, ensuring a deep enough record to restore data integrity at a recovery point before the failure, O'Day noted.
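That two-day journal can be modeled as a time-ordered log of changes that is replayed up to a chosen recovery point. The sketch below is an assumption about how such journaling works in general, not Bluelock's implementation; the `Journal` class and its 48-hour retention constant are illustrative.

```python
RETENTION_SECONDS = 48 * 3600  # two-day window described in the article

class Journal:
    """Illustrative event journal kept at the recovery site. Every change
    is logged with a timestamp so system state can be rebuilt as of any
    moment within the retention window."""
    def __init__(self):
        self.events = []  # (timestamp, block_id, data), appended in order

    def record(self, timestamp, block_id, data):
        self.events.append((timestamp, block_id, data))
        # Drop entries older than the retention window.
        cutoff = timestamp - RETENTION_SECONDS
        while self.events and self.events[0][0] < cutoff:
            self.events.pop(0)

    def state_at(self, point_in_time):
        # Replay events up to the chosen recovery point -- e.g. just
        # before a botched upgrade corrupted the data.
        state = {}
        for ts, block_id, data in self.events:
            if ts > point_in_time:
                break
            state[block_id] = data
        return state

j = Journal()
j.record(100.0, "db", b"good")
j.record(200.0, "db", b"corrupted")
print(j.state_at(150.0))  # {'db': b'good'} -- state before the bad write
```

Replaying to a point before the failure is what lets an administrator roll back past a corrupting event rather than merely restoring the most recent, possibly tainted, copy.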
In addition to hurricanes, there are other reasons to keep a backup system readily available at a distant site. Sometimes it's an intruder who's gotten into the system and IT must turn to a clean copy and recovery point where it knows the data was unaffected. Other times it's not a natural disaster but a manmade one -- a failed attempt to upgrade a production system that leads to the need for a rollback and reactivation of a sound copy.
O'Day's point is that virtualization and cloud computing have turned what used to be a difficult proposition -- keeping complete duplicate systems on hand for disaster recovery -- into a much more practical one. His firm has pioneered a service aimed primarily at midsized companies that use VMware but don't have big IT staffs or budgets.
"Midsized companies have enterprise IT requirements but don't have enterprise IT staffs. [RaaS] adds a critical capability to companies that wouldn't have been able to do it before," he said.