Ceph Poised To Change Data Center and Cloud Storage

Ceph, an open source file system for Linux, is hitting the mainstream with the potential to have big effects on storage in the data center and the cloud.

Jim O'Reilly

November 15, 2013


You may not have heard the name Ceph before, but that's about to change. This storage technology is poised to enter the data center and change the way we look at data storage and cloud computing.

First, we need to know what Ceph is. Basically, it's an open-source file system for Linux. The opening line on the official Ceph site is a powerful boast:

Ceph uniquely delivers object, block and file storage in one unified system.

The structure of Ceph is built on top of a distributed object store, with a Representational State Transfer (REST) gateway (REST is an architectural style for accessing resources over HTTP), a block device interface, and a Portable Operating System Interface (POSIX) file system (POSIX is the industry standard for Unix-style operating system interfaces). All of the access methods can run on the same object store, which itself runs on commercial, off-the-shelf (COTS) hardware.
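To make that concrete, here is a minimal sketch of talking to the underlying object store directly, using the python-rados bindings. It assumes a running cluster and a standard /etc/ceph/ceph.conf; the pool name 'data' and the object contents are only illustrative.

    import rados

    # Connect to the cluster described in ceph.conf.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()

    # Open an I/O context on a pool; 'data' is a placeholder pool name.
    ioctx = cluster.open_ioctx('data')

    ioctx.write_full('greeting', 'Hello RADOS')  # store an object
    print(ioctx.read('greeting'))                # read it back

    ioctx.close()
    cluster.shutdown()

The REST gateway, block device, and file system interfaces all ultimately reduce to object operations like these.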

However, Ceph's real claim to fame is that the store can be scaled to exabytes of data simply by adding storage nodes, providing the kind of scale needed for large cloud-computing infrastructures, and eventually for big data workloads as well.

There are three types of nodes in a Ceph cluster. A metadata node serves up data about the objects; this is a memory-heavy, powerful multicore x86 system. Object storage devices (OSDs) carry the data and are much less compute intensive. Finally, monitor nodes hold the cluster map. The metadata servers load-balance dynamically, and data is striped across OSDs, so the implementation is relatively free of hot spots.
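To sketch how those roles might be declared, a skeletal ceph.conf could name one daemon of each type. This is only an illustration; the host names and address below are placeholders, and a real cluster would run multiple monitors and many OSDs.

    [global]
        # cluster-wide settings go here

    [mon.a]
        host = mon-node
        mon addr = 192.168.0.10:6789   # monitor holding the cluster map

    [mds.a]
        host = mds-node                # metadata server

    [osd.0]
        host = osd-node-0              # object storage daemon carrying the data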

OSDs use a standard file system internally to provide a stable environment for storing the objects. In that sense, Ceph is a piggy-back layer, but the philosophy is to use the best of what is available rather than attempting to build a better mousetrap, and that seems like a sensible approach.

The file system currently recommended for production is XFS, which is mature, with Btrfs (B-tree file system) the longer-term choice. Btrfs is still early in its life and a bit buggy, but it is designed to be very extensible. It handles heterogeneous storage, already has snapshots and compression, and will have deduplication and encryption built in at a future release.

Stepping back from the techy level, Amazon S3 and OpenStack Object Storage (Swift) compatibility is built into the Ceph product. Right now, that means Ceph can simulate a Swift or S3 object store, but it is likely that Ceph will expand to build interfaces into these stores at a later date, potentially allowing inter-cloud bridging for geographic dispersion, extension, and data migration.
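As an illustration, the gateway can be driven with ordinary S3 client code. The sketch below uses the Python boto library against a hypothetical gateway endpoint; the host name and credentials are placeholders, and a gateway user would have to be created first.

    import boto
    import boto.s3.connection

    # Point a stock S3 client at the Ceph RADOS gateway instead of Amazon.
    conn = boto.connect_s3(
        aws_access_key_id='ACCESS_KEY',        # placeholder credentials
        aws_secret_access_key='SECRET_KEY',
        host='objects.example.com',            # placeholder gateway endpoint
        is_secure=False,
        calling_format=boto.s3.connection.OrdinaryCallingFormat(),
    )

    # Standard S3 calls work unchanged against the gateway.
    bucket = conn.create_bucket('demo-bucket')
    key = bucket.new_key('hello.txt')
    key.set_contents_from_string('Hello from Ceph')
    print(key.get_contents_as_string())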

The impact of a product like Ceph shouldn't be underestimated.

This is a unique solution right now, giving truly unified access to, and a single view of, one common storage pool. Being an open-source solution, the price is right. It has been accepted into the mainline Linux kernel, which highlights the support in the industry, and it is being taken up by other players in the cloud, including OpenStack.org, Red Hat, and SUSE.

DreamHost is currently running a 3-petabyte object store using Ceph, so it is certainly production-ready.

Ceph is being enthusiastically picked up, and it's a fair bet that it will become the backbone for host-based storage software services. There's also potential to use the technology in virtual storage appliance (VSA)-type systems. Ceph's Reliable Autonomic Distributed Object Store (RADOS) can be, and has been, decoupled from the upper layers, as SUSE Cloud is doing. By migrating features away from the storage nodes, this could change the way systems are built and upset the Big Iron storage applecart by moving the storage sweet spot to COTS hardware.
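That layering is already visible in the client libraries: the block layer, RBD, is just another consumer of RADOS. The sketch below, using the python-rbd and python-rados bindings, creates a block image inside a RADOS pool; the pool and image names are illustrative.

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('rbd')   # the pool backing the block images

    # Create a 4-GB block image; beneath it, everything is RADOS objects.
    rbd.RBD().create(ioctx, 'vm-disk', 4 * 1024**3)

    image = rbd.Image(ioctx, 'vm-disk')
    image.write('boot data', 0)         # write at byte offset 0
    image.close()

    ioctx.close()
    cluster.shutdown()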

The evolution of Ceph, and the growth of the surrounding ecosystem, will move rapidly, and together they will change storage.

 

About the Author

Jim O'Reilly

President

Jim O'Reilly was Vice President of Engineering at Germane Systems, where he created ruggedized servers and storage for the US submarine fleet. He has also held senior management positions at SGI/Rackable and Verari; was CEO at startups Scalant and CDS; headed operations at PC Brand and Metalithic; and led major divisions of Memorex-Telex and NCR, where his team developed the first SCSI ASIC, now in the Smithsonian. Jim is currently a consultant focused on storage and cloud computing.
