
What's Wrong With Object Storage?

Why isn't the most scalable storage technology ubiquitous? This is a question that must bug Caringo, Scality, and Red Hat (with Ceph) no end. Object storage is massively scalable. It was designed to replace block and NAS storage with a vehicle that can handle oceans of storage without arcane management techniques and slow integration times. Yet it hasn't gained much traction in the enterprise.

Object storage is a bit like Legos: It’s flexible, with the ability to absorb new drive capacities easily. It scales in a truly modular fashion, with smaller appliances that contain an appropriate increment of networking and compute power to match the added storage. The available object storage software solutions are all designed to run on standard, low-cost commercial off-the-shelf (COTS) gear. No special hardware is required, so the price of units ought to be very low compared with traditional RAID-array storage.

If you read the manuals for these products, you quickly figure out that you don’t need a certification to get started. They are easy to use, with all the difficult stuff, like where to put data, figured out for you. If a unit fails, just pull it out of service. When new boxes are added, they are incorporated automatically. The system maintains the number of data replicas you specify, and can rebuild a lost drive or appliance without intervention.
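To give a flavor of how little the application has to know, here is a minimal sketch using Ceph’s python-rados bindings; the pool and object names are made up, and a working cluster is assumed. The client simply hands over an object, and the store decides where the copies live:

    import rados

    # Connect to the cluster described in the local Ceph config file.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()

    # Open an I/O context on a pool (hypothetical pool name).
    ioctx = cluster.open_ioctx('mypool')

    # Write an object. Placement, replication, and rebuild after a failure
    # are all handled by the cluster, not by the application.
    ioctx.write_full('sample-object-001', b'payload bytes go here')

    ioctx.close()
    cluster.shutdown()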

So what's wrong with this idea? Object storage has been around as a concept since about 2003, so it should be mature and stable. To get a clue to why it isn’t ubiquitous, we need to look at its early days in more detail. Initially a design from a couple of startups (Caringo and Replicus come to mind), object storage faced a good deal of pushback from traditional block and file vendors. Both sides produced a good dose of fear, uncertainty, and doubt (FUD).

The FUD was compounded by architectural difficulties, primarily in the performance area. Those early designs were optimized for scale rather than performance. Though there were notable exceptions (Caringo sold Johns Hopkins on building its human genome database on object storage, for example), the performance shortfall proved to be a roadblock in most use cases.

Then along came Amazon Web Services. Its S3 object store is the poster child for object storage technology. It's truly massive, but performance in the cloud wasn’t as big an issue at the time. Still, S3 is seen as the repository for quasi-static objects and for backup/archive storage. None of these use cases require stellar performance.

Meanwhile, object storage has been evolving. Today there are object appliances targeting the data center and the hybrid cloud, and interest in faster solutions has grown. At the same time, a new class of data -- so-called big data, which is unstructured and supposedly much bigger than our structured data -- has grown rapidly. Big data doesn’t fit the block or file models well, but object storage can handle the scale and unstructured nature of the data properly.

The problem now is that object solutions are mostly too slow. Even open source solutions like Ceph originated in the hard-drive era and are prone to heavy back-channel communications between nodes, among other performance bottlenecks.

DataDirect Networks is something of a lone wolf in challenging the object storage performance issue. Its WOS boxes are stellar storage units with high performance levels. I suspect DDN will get some competition in 2016/2017 as the need for better object storage for high-performance computing and big data drives market growth. Meanwhile, there is talk of rewriting the Ceph object storage daemon (OSD) module for speed.

However, speed isn't the only issue. REST, the preferred object protocol, requires a rewrite of applications -- a major drawback in considering any move to object storage. Coupled with the speed issue, this may be why object storage was relegated to backup/archive for so long, since those are pre-packaged solutions requiring no rewrite.
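To see why a rewrite is needed, compare a file write with the equivalent object PUT. The sketch below uses Python and the boto3 S3 client against a hypothetical endpoint and bucket; the point is simply that the I/O path changes from the POSIX file API to REST calls:

    import boto3

    data = b'quarter,revenue\nQ3,1.2M\n'

    # Traditional application: write through the POSIX file API to NAS or block storage.
    with open('/mnt/nas/reports/q3.csv', 'wb') as f:
        f.write(data)

    # Object storage: the same write becomes an HTTP PUT over REST,
    # so the application's I/O code has to change.
    s3 = boto3.client('s3', endpoint_url='https://objects.example.com')
    s3.put_object(Bucket='reports', Key='q3.csv', Body=data)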

Remember that the early objective of the object store was to provide a way to address an ocean of storage. “Unified” storage is a newer concept that uses an object store to support applications through block and file interfaces as well. Ceph is a good example of this unified approach, with gateways that allow iSCSI and NAS clients to share the storage space. It’s too soon to tell whether these gateways perform adequately.
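As a rough illustration of the unified idea, the sketch below uses Ceph’s python-rbd bindings to carve a block-device image out of the same object store; the pool and image names are made up. Once created, such an image can be presented to hosts through Ceph’s block or iSCSI gateway paths, while the underlying data still lives as objects:

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('rbd')  # hypothetical pool backing the block images

    # Create a 10 GiB block-device image; its data is stored as RADOS objects.
    rbd.RBD().create(ioctx, 'vol01', 10 * 1024 ** 3)

    ioctx.close()
    cluster.shutdown()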

At this point, the discerning reader might ask, “What about solid-state drives?” That’s a very important question, since using fast drives should boost object store performance tremendously. It turns out that, with the exception of DDN's technology, SSD performance gets somewhat lost in the software/network overhead, reducing gains.

In Ceph, using SSDs for journaling does boost performance, but going all-SSD doesn’t provide enough of a gain to be economic, due to that back-channel traffic. Adding an RDMA backend is a way to make considerable gains, but that’s expensive and moves away from the simple modular scaling that makes object storage so attractive.
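For what it’s worth, the SSD journaling trick is purely a configuration matter in FileStore-era Ceph; a minimal ceph.conf fragment might look like the following, with the device path being a hypothetical SSD partition:

    [osd.0]
        # Point this OSD's write journal at a partition on a fast SSD
        # (hypothetical device path).
        osd journal = /dev/sdb1
        # Journal size in MB.
        osd journal size = 10240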

Where does object storage go from here? Plans call for a rewrite of Ceph that should speed things up considerably. The commercial companies are keeping quiet about their plans, but one can expect them to follow suit. This means that object storage should be able to react well to the transition to all-SSD solutions expected in 2017. DDN has a nice edge, and shows what can be achieved, though its price point is above the commodity hardware level. With performance addressed, the era of ubiquitous object storage will begin to open up, and perhaps a decade from now, we’ll be looking back on our old block and file systems!