Earlier this week, over on InformationWeek's sister site dedicated to cloud computing -- Plug into the Cloud -- George Crump asked a question that I thought I knew the answer to: Is data in the cloud risky? Crump points to a "recent report that the FTC is considering a request to shut down Google Apps." But after I got done laughing at such a waste of taxpayer money, I turned back to the seriousness of the question at hand. The cloud is infallible, right? Although Crump doesn't answer the question, he implies that the same practices that work for protecting on-premises IT can serve you well should you decide to move some data into the cloud. Writes Crump:
...companies like Iron Mountain, Nirvanix and Bycast have the ability to have a mixed deployment model with local appliances to cache data before being transferred to the cloud storage data center.
But is data in the cloud at risk? Must you turn to third parties to guard against a catastrophe or do the various cloud providers have you covered out of the box?
I never stopped to give the question serious thought until last week, when I had the opportunity to chair a panel on cloud computing at Sun's CommunityOne East Conference in New York City. Sun Cloud CTO Lew Tucker had a reserved parking spot on the panel. But the rest of the guests -- Google App Engine product manager Pete Koomen, RightScale CEO and founder Michael Crandell, and Salesforce vice president of developer marketing Adam Gross -- were my picks. You can hear the recording of that conversation in its entirety by listening to the podcast.
The dialog on the cloud's reliability really got going around the 42:22 mark, when Michael Crandell said the following:
Any public service, any cloud-oriented service is extremely publicly exposed so the slightest glitch makes headlines and we've all seen some of those recently. But if you talk to anyone who has run any size of datacenter internally, they're not judgmental at all because they've had bigger outages. And more severe outages...
Bigger? Well, to be clear, it's all relative. When millions of users lose access to their Gmail accounts because Gmail is flaking out for some reason, that's a pretty big outage that would be tough to beat. But relatively speaking, I think the point Crandell is making is that for each organization that has experienced an on-premises email outage, that outage was "bigger" to that organization than any of Gmail's outages would have been (to that organization). Crandell went on:
... Cloud services actually provide better redundancy, better failover. You have to be quite a large company with a lot of investment in multiple datacenters over different geographic areas to achieve the same kind of reliability that you can get today at 10 cents an hour.
This of course is the benefit of cloud computing. No matter what size the customers are, they all get to benefit from the cloud provider's investment in scale which, by the very nature of the business the cloud provider is in, must dwarf the investment that all but the very largest companies are prepared to make in their datacenters. But scale and reliability are not necessarily the same thing. Later, at 53:05, Crandell said:
Instances on Amazon, when they terminate for whatever reason -- you terminate them intentionally or there's a hardware fault -- the local storage on that instance is gone, and that's why of course they also came up with the storage model around S3 and Elastic Block Store, and Sun is doing the same thing. So, it is up to you to make sure that your model of storing data keeps your data safe.
Here, Crandell is referring to the virtual nature of Amazon's Machine Images (AMI, which coincidentally happens to stand for Amazon Malaria Initiative as well) used in Amazon's Elastic Compute Cloud. One of the differences between working with virtual machines and real bare metal (physical machines) is that you must deliberately save or preserve any work or data that was added, changed, or deleted between the time the virtual machine was started and the time it was "ended." Some virtual machine solutions do this automatically for you. But in the case of Amazon's EC2, Crandell is basically saying that if the hardware in Amazon's datacenter that your virtual machine happens to be relying on fails in some way, then whatever data was in that image at the time of failure is lost too. There's no opportunity to preserve it.
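The distinction Crandell is drawing can be sketched in a few lines of Python. To be clear, the classes below are purely illustrative stand-ins I've made up for this article -- they model the behavior, not any actual Amazon API: data written only to an instance's local disk dies with the instance, while data deliberately copied to a durable store survives.

```python
# Conceptual sketch (hypothetical classes, not Amazon's API): ephemeral
# instance storage vs. a durable store such as S3 or an EBS volume.

class PersistentStore:
    """Stands in for a durable service (think S3 or an EBS volume)."""
    def __init__(self):
        self.data = {}

class Instance:
    """Stands in for an EC2 instance with its ephemeral local disk."""
    def __init__(self):
        self.local_disk = {}   # lives only as long as the instance does
        self.running = True

    def write(self, key, value):
        self.local_disk[key] = value

    def persist(self, key, store):
        # The deliberate step Crandell describes: copy data off the
        # instance while it's running, or lose it.
        store.data[key] = self.local_disk[key]

    def terminate(self):
        # Intentional shutdown or hardware fault -- either way,
        # the local disk is gone.
        self.local_disk = {}
        self.running = False

durable = PersistentStore()
vm = Instance()
vm.write("orders.db", "customer records")
vm.persist("orders.db", durable)        # the saved copy will survive
vm.terminate()                          # simulates a hardware failure

print("orders.db" in vm.local_disk)     # False: ephemeral copy is lost
print(durable.data["orders.db"])        # the persisted copy is intact
```

The point of the sketch: nothing in the instance's lifecycle saves your data for you; the `persist` step has to be something you arrange yourself.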
This naturally raises the question: "What are the odds that my machine image will be running in some virtual machine on a piece of hardware in Amazon's datacenter that decides to ABEND itself?" Off microphone, I asked Crandell if such hardware failures within Amazon's EC2 infrastructure ever occur. He would know. His company, RightScale, has built an entire business out of making Amazon's EC2 work better for customers than it does "out of the box." According to Crandell, they do occur. Not often. But they do in fact occur, and unless the owner of the image has taken certain precautions to protect the data, such failures take the data with them.
I contacted Amazon to see what it had to say about the matter, and Amazon confirmed the scenario that Crandell describes. But, as it turns out, Amazon itself offers one of the countermeasures that can be taken to ensure that, should an image fail, the data that image was working with is preserved. According to Amazon spokesperson Kay Kinton:
Amazon EC2 offers many options for users to store their data. The behavior that you describe is true for the instance stores that come with an instance. The instance stores can be thought of as "local disks" on the instance, and if the instance is terminated or fails, the data on that instance store will be lost. These instance stores are useful for many applications.
Amazon EC2 added the Amazon Elastic Block Store (Amazon EBS) last August to give customers a reliable disk option for their Amazon EC2 instances. With Amazon EBS, developers are able to provision reliable volumes and attach them to their instances. These look to the instance like standard block devices (i.e. hard drives) and customers can use as many as they need to store and persist their data. Amazon EBS volumes are automatically replicated within a single Availability Zone to protect users from failure of independent components. Additionally, with Amazon EBS, users also have the ability to easily take point in time snapshots of EBS volumes to Amazon S3 for long term durability.
...you only pay for what you use. Volume storage is charged by the amount you allocate until you release it, and is priced at a rate of $0.10 per allocated GB per month. Amazon EBS also charges $0.10 per 1 million I/O requests you make to your volume.
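Using the two rates Kinton quotes, estimating a month's EBS bill is simple arithmetic. Here's a small calculator; the rates are hard-coded from the quote above (2009 pricing) and would obviously need to be checked against Amazon's current price list before anyone relies on them.

```python
def monthly_ebs_cost(allocated_gb, io_requests,
                     gb_rate=0.10, io_rate_per_million=0.10):
    """Estimate a monthly Amazon EBS bill from the rates quoted above:
    $0.10 per allocated GB per month, plus $0.10 per 1 million I/O
    requests. Rates are the 2009 figures and are illustrative only."""
    storage = allocated_gb * gb_rate
    io = (io_requests / 1_000_000) * io_rate_per_million
    return storage + io

# A 100 GB volume that serviced 50 million I/O requests in a month:
# $10 for storage plus $5 for I/O.
print(monthly_ebs_cost(100, 50_000_000))
```

Note that storage is billed on what you allocate, not what you fill -- a mostly empty 100 GB volume costs the same $10 a month as a full one.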
Whereas you're on your own to decide how to protect the data kept with an instance of a machine on Amazon's EC2, other cloud offerings bake such redundancy right into their service. I checked with Salesforce.com's Adam Gross (who also sat on the panel) to find out if there's ever a point in time, after a salesforce.com user commits an addition, change, or deletion, at which that transaction could be at risk should salesforce.com's infrastructure experience a failure of some sort.
According to Gross:
All data (regardless of when it's committed) goes through multiple layers of data redundancy, including at the local level (within a data center), across data centers and geographies, and to offline / tape backup.
So, in answer to George Crump's original question regarding how risky it is to keep data in the cloud, the answer is that your mileage will vary depending on which cloud providers you're relying on. Some, like salesforce.com, take certain measures to protect your data from the moment a user hits the Enter key on a Web form. Others require more awareness of what is and isn't at risk, and of what the options for remediation are.
David Berlind is an editor-at-large with InformationWeek. David likes to write about emerging tech, new and social media, mobile tech, and things that go wrong. He can be reached at email@example.com and you also can find him on Twitter and other social networks (see the list below). David doesn't own any tech stocks. But, if he did, he'd probably buy some Salesforce.com and Amazon, given his belief in the principles of cloud computing and his hope that the stock market can't get much worse. Also, if you're an out-of-work IT professional or someone involved in the business of compliance, he wants to hear from you.