Interop: Cloud Computing's Portability Gotcha

There were a couple "aha" moments for me at Interop's Enterprise Cloud Summit. The first was that some companies are already storing hundreds of terabytes of data in the cloud. The second was that it can be a slow and expensive process to move that data from one service provider to another.

John Foley, Editor, InformationWeek

November 19, 2009


The subject came up in a panel on cloud interoperability where the discussion shifted from APIs to cloud brokers to emerging standards. The panelists were Jason Hoffman, founder and CTO of Joyent; Chris Brown, VP of engineering with Opscode; consultant John Willis of Zabovo; and Bitcurrent analyst Alistair Croll. The gist was that we're still in the early going when it comes to cloud interoperability and that while Amazon's API may be the center of the cloud universe right now, it's hardly enough.

The discussion turned to portability, the ability to move data and applications from one cloud environment to another. There are a lot of reasons IT organizations might want to do that: dissatisfaction with a cloud service provider, new and better alternatives, and a change in business or technology strategy, to name a few. The issue hit home earlier this year when cloud startup Coghead shut down and SAP took over its assets and engineering team, forcing customers to find a new home for the applications that had been hosted there.

The bigger the data store, the harder the job of moving from one cloud to another. Some companies are putting hundreds of terabytes of data -- even a petabyte -- into the cloud, according to the panel. Some of these monster databases are reportedly in Amazon's Simple Storage Service (S3). Indeed, Amazon's S3 price list includes a discount for data stores over 500 TB, so volumes on that scale are entirely plausible.

It was at this point that Joyent CTO Hoffman chimed in. "Customers with hundreds of terabytes in the cloud -- you are no longer portable and you're not going to be portable, so get over it," Hoffman said.

It can take weeks or months to move a petabyte of data from one cloud to another, depending on data transfer speeds, Hoffman said. There are also transfer fees to consider. Amazon charges 10 cents per GB to transfer data out of S3, which comes to $100,000 per PB. (That's after you've already spent $100,000 or more in data transfer fees moving the data into S3.)
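
For a rough sense of the math, here's a back-of-the-envelope sketch in Python. The $0.10-per-GB rate is the S3 egress price cited above; the link speeds and the 80% network utilization are illustrative assumptions, not figures from Amazon or the panel.

    # Rough cost and wire-time estimates for moving a petabyte out of a storage cloud.
    # The $0.10/GB egress rate is the S3 price quoted in the article; link speeds and
    # the 80% utilization factor are illustrative assumptions.

    def egress_fee_usd(data_gb, rate_per_gb=0.10):
        """Outbound transfer fee at a flat per-GB rate."""
        return data_gb * rate_per_gb

    def transfer_days(data_tb, link_mbps, utilization=0.8):
        """Days to move data_tb terabytes over a link_mbps link at the given utilization."""
        bits = data_tb * 1e12 * 8  # decimal terabytes -> bits
        return bits / (link_mbps * 1e6 * utilization) / 86400

    one_pb_gb, one_pb_tb = 1_000_000, 1_000
    print(f"Egress fee for 1 PB: ${egress_fee_usd(one_pb_gb):,.0f}")           # $100,000
    print(f"1 PB over 1 Gbps:    {transfer_days(one_pb_tb, 1000):,.0f} days")  # ~116 days
    print(f"1 PB over 100 Mbps:  {transfer_days(one_pb_tb, 100):,.0f} days")   # ~1,157 days

Over a dedicated gigabit link the move is a matter of months; over a 100 Mbps connection it stretches toward years, which is exactly Hoffman's point.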

In other words, the more data that IT organizations in business and government put into a storage cloud, the longer it takes, and the more expensive it becomes, to move it out. Amazon estimates it would take one to two days to import or export 5 TB of data over a 100 Mbps connection. To get around such limitations, Amazon Web Services offers a workaround called AWS Import/Export (in beta testing) that lets customers load or remove data using portable storage devices, bypassing the network. "If loading your data would take a week or more, you should consider AWS Import/Export," Amazon advises.
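
Amazon's week-or-more rule of thumb comes down to the same arithmetic. Here's a minimal sketch of that decision, again in Python; the one-week threshold is taken from Amazon's advice above, and the utilization figure is an illustrative assumption.

    # Apply Amazon's rule of thumb: if the network transfer would take a week
    # or more, consider shipping drives via AWS Import/Export instead.
    # The one-week threshold comes from Amazon's advice quoted above; the 80%
    # utilization figure is an illustrative assumption.

    WEEK_IN_DAYS = 7

    def ship_drives_instead(data_tb, link_mbps, utilization=0.8):
        """True if the estimated wire transfer would take a week or more."""
        bits = data_tb * 1e12 * 8  # decimal terabytes -> bits
        days = bits / (link_mbps * 1e6 * utilization) / 86400
        return days >= WEEK_IN_DAYS

    print(ship_drives_instead(5, 100))    # 5 TB over 100 Mbps: under a week -> False
    print(ship_drives_instead(100, 100))  # 100 TB over 100 Mbps: months -> True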

What's the lesson here? Getting into the cloud may be fast, cheap, and easy, but the longer you're there, the harder it is to move. Be prepared and have an exit plan.



InformationWeek and Dr. Dobb's have published an in-depth report on how Web application development is moving to online platforms. Download the report here (registration required).

About the Author

John Foley

Editor, InformationWeek

John Foley is director, strategic communications, for Oracle Corp. and a former editor of InformationWeek Government.
