Object Storage's Path to the Enterprise

Object storage is a smart fit for enterprise use cases such as backup, but the need to write to custom APIs impedes adoption.

Howard Marks

December 5, 2012


I spent several days in Miami recently at the Object Storage Summit with storage industry analysts and vendors, including Cleversafe, Scality, DataDirect Networks (DDN), Nexsan and Quantum. We spent a lot of time talking not just about how the various object storage systems work, but also about what the vendors have to do to move object storage further into the mainstream. The use cases are there, but vendors must make application integration easier for enterprise customers.

Like most emerging technologies, object storage found initial acceptance in a select set of vertical markets, such as Web application vendors, cloud service providers, high-performance computing, and media and entertainment. These organizations create enormous numbers of files (think Shutterfly, for example), and rather than modify those files in place, their workflows retain every version of every object so it can be reused in different ways.

I was a bit more surprised at the level of success the object storage vendors were having in the intelligence community. A couple of the vendors spoke (in generalities of course) about how the data collected from keyhole satellites and Predator drones is stored and processed on object platforms.

Object storage has been less successful in the commercial space, which is a shame. When I was teaching backup seminars last year, users would regularly complain that incremental backups of their NAS systems took days to complete. It took that long just to walk the file system and figure out which of the millions of files had changed, and therefore needed to be backed up, regardless of how little new data there actually was.

If those users could find a way to migrate old, stale files off the production server to an object storage system, they'd dramatically speed up their nightly incremental backups and reduce the size of their weekly full backups by 60% to 90%.
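To make that concrete, here's a minimal sketch of the kind of sweep an admin could script, assuming the object store is exposed through a file gateway mount; the source tree, archive path and one-year staleness threshold are all placeholder assumptions. A production job would also typically leave a stub or symlink behind so users can still find the migrated files.

```python
import os
import time
import shutil

# Hypothetical settings: scan this tree and move anything untouched
# for more than a year onto a mount backed by the object store.
SOURCE_ROOT = "/srv/nas/projects"        # production NAS share (assumed)
ARCHIVE_ROOT = "/mnt/object-gateway"     # object-storage gateway mount (assumed)
STALE_AFTER_SECONDS = 365 * 24 * 3600    # "stale" = not modified in a year

cutoff = time.time() - STALE_AFTER_SECONDS

for dirpath, _dirnames, filenames in os.walk(SOURCE_ROOT):
    for name in filenames:
        src = os.path.join(dirpath, name)
        if os.path.getmtime(src) < cutoff:
            # Recreate the directory layout under the archive target,
            # then move the stale file out of the nightly backup's walk.
            rel = os.path.relpath(src, SOURCE_ROOT)
            dst = os.path.join(ARCHIVE_ROOT, rel)
            os.makedirs(os.path.dirname(dst), exist_ok=True)
            shutil.move(src, dst)
```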

The best part is that the object storage system itself never needs to be backed up, which can save the organization a bundle in opex. Object systems use replication or advanced dispersal coding to protect data against hardware or site failures. They also create a new object every time a user modifies a file, keeping the old version around as long as the organization's retention policy requires, so the object store doesn't need backups to protect the data from the users, either.

A major factor limiting object storage's acceptance in the corporate market is that each object storage vendor has its own SOAP- or REST-based API for getting data in and out of the system. This means companies and ISVs need to customize their applications for each storage platform.

One interesting development is that vendors are adding support for the Amazon S3 API in addition to their native API. For object storage to take off in the corporate market, there has to be a standard interface for application vendors to write to.
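To illustrate why that matters, here is a minimal sketch of writing and reading an object through an S3-compatible endpoint using the Python boto3 SDK; the endpoint URL, credentials, bucket and key are all placeholder assumptions. The point is that swapping the back-end store changes only the endpoint, not the application code.

```python
import boto3

# Placeholder endpoint and credentials: point endpoint_url at any
# S3-compatible object store, Amazon's or a vendor's.
s3 = boto3.client(
    "s3",
    endpoint_url="https://objectstore.example.com",  # hypothetical endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Write an archived file as an object, then read it back.
with open("q3-report.pdf", "rb") as f:
    s3.put_object(Bucket="archive", Key="finance/q3-report.pdf", Body=f)

obj = s3.get_object(Bucket="archive", Key="finance/q3-report.pdf")
data = obj["Body"].read()
```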

DDN takes an API-agnostic approach with its WOS (Web Object Scaler); it supports its native high-performance API as well as the Amazon S3 and CDMI object APIs. The company also integrates WOS with clustered file systems such as GPFS and Lustre, which are common in the HPC world, and with Hadoop's HDFS, giving big data file systems the persistence they have lacked.

Object storage is the solution for large organizations drowning in tens or hundreds of petabytes of unstructured data. If vendors can make application integration easier, the enterprise market may open up.

About the Author

Howard Marks

Network Computing Blogger

Howard Marks is founder and chief scientist at DeepStorage LLC, a storage consultancy and independent test lab based in Santa Fe, N.M., concentrating on storage and data center networking. In more than 25 years of consulting, Marks has designed and implemented storage systems, networks, management systems and Internet strategies at organizations including American Express, J.P. Morgan, Borden Foods, U.S. Tobacco, BBDO Worldwide, Foxwoods Resort Casino and the State University of New York at Purchase. The testing at DeepStorage Labs is informed by that real-world experience.

He has been a frequent contributor to Network Computing and InformationWeek since 1999 and a speaker at industry conferences including Comnet, PC Expo, Interop and Microsoft's TechEd since 1990. He is the author of Networking Windows and co-author of Windows NT Unleashed (Sams).

He is co-host, with Ray Lucchesi, of the monthly Greybeards on Storage podcast, where the voices of experience discuss the latest issues in the storage world with industry leaders. You can find the podcast at http://www.deepstorage.net/NEW/GBoS
