There were a couple of "aha" moments for me at Interop's Enterprise Cloud Summit. The first was that some companies are already storing hundreds of terabytes of data in the cloud. The second was that moving that data from one service provider to another can be a slow and expensive process.
The subject came up in a panel on cloud interoperability where the discussion shifted from APIs to cloud brokers to emerging standards. The panelists were Jason Hoffman, founder and CTO of Joyent; Chris Brown, VP of engineering with Opscode; consultant John Willis of Zabovo; and Bitcurrent analyst Alistair Croll. The gist was that we're still in the early going when it comes to cloud interoperability and that while Amazon's API may be the center of the cloud universe right now, it's hardly enough.
The discussion turned to portability, the ability to move data and applications from one cloud environment to another. There are plenty of reasons IT organizations might want to do that: dissatisfaction with a cloud service provider, new and better alternatives, or a change in business or technology strategy, to name a few. The issue hit home earlier this year when cloud startup Coghead shut down and SAP took over its assets and engineering team, forcing customers to find a new home for the applications that had been hosted there.
The bigger the data store, the harder the job of moving from one cloud to another. Some companies are putting hundreds of terabytes of data -- even a petabyte -- into the cloud, according to the panel. Some of these monster databases are reportedly in Amazon's Simple Storage Service (S3). Indeed, Amazon's S3 price list offers a discount for data stores over 500 TB, so volumes that size are entirely plausible.
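To put "slow and expensive" in perspective, here's a rough back-of-the-envelope sketch in Python. The 1 Gbps sustained throughput and $0.10-per-GB outbound-transfer price are illustrative assumptions, not Amazon's published rates; plug in your own numbers.

```python
# Back-of-envelope sketch: how long and how much it might cost to pull
# 500 TB back out of a cloud object store. The bandwidth and per-GB
# egress price are illustrative assumptions, not published rates.

DATA_TB = 500                  # size of the data store to migrate
LINK_GBPS = 1.0                # assumed sustained network throughput
EGRESS_PRICE_PER_GB = 0.10     # assumed outbound-transfer price (USD per GB)

data_gb = DATA_TB * 1024                          # TB -> GB (binary)
data_bits = data_gb * 1024**3 * 8                 # GB -> bits
transfer_seconds = data_bits / (LINK_GBPS * 1e9)  # bits / (bits per second)
transfer_days = transfer_seconds / 86400

egress_cost = data_gb * EGRESS_PRICE_PER_GB

print(f"Transfer time at {LINK_GBPS} Gbps: ~{transfer_days:.0f} days")
print(f"Egress cost at ${EGRESS_PRICE_PER_GB}/GB: ~${egress_cost:,.0f}")
```

Even under those generous assumptions, moving 500 TB works out to roughly seven weeks of continuous transfer and tens of thousands of dollars in outbound fees, before you account for re-ingesting the data at the new provider.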
It was at this point that Joyent CTO Hoffman chimed in. "Customers with hundreds of terabytes in the cloud -- you are no longer portable and you're not going to be portable, so get over it," Hoffman said.