What NVMe over Fabrics Means for Data Storage

The last few years have seen Non-Volatile Memory Express (NVMe) completely revolutionize the storage industry. Its wide adoption has driven down flash memory prices. With lower prices and better performance, more enterprises and hyper-scale data centers are migrating to NVMe. The introduction of NVMe over Fabrics (NVMe-oF) promises to accelerate this trend.

The original NVMe base specification defines a protocol for flash storage that uses existing, unmodified PCIe as its local transport. This layered approach matters: NVMe does not create a new electrical or frame layer; instead, it takes advantage of what PCIe already offers. PCIe is a proven, high-speed, interoperable bus technology, but it is not well suited to building a large storage fabric or spanning distances beyond a few meters. Left there, NVMe would be confined to direct-attached storage (DAS), essentially connecting SSDs to the processor inside a server, or perhaps linking all-flash arrays (AFAs) within a rack. NVMe-oF allows things to be taken much further.
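
To make the layering concrete, here is a minimal Python sketch of the 64-byte submission queue entry (SQE) format that every NVMe command uses. The helper name is ours and the field handling is simplified (no fused operations or PRP lists); the point is that this command format is defined independently of the transport carrying it.

```python
import struct

def build_sqe(opcode: int, cid: int, nsid: int,
              prp1: int = 0, prp2: int = 0,
              cdw10: int = 0) -> bytes:
    """Pack a 64-byte NVMe submission queue entry (SQE).

    The same 64-byte command format is used regardless of the
    transport underneath. Offsets follow the NVMe base spec,
    simplified here for illustration.
    """
    cdw0 = opcode | (cid << 16)  # opcode in byte 0, command ID in bits 16-31
    sqe = struct.pack(
        "<II8xQQQIIIIII",
        cdw0,                  # CDW0: opcode, flags, command identifier
        nsid,                  # namespace identifier
        0,                     # metadata pointer (unused in this sketch)
        prp1, prp2,            # data pointers (PRP entries on a PCIe transport)
        cdw10, 0, 0, 0, 0, 0,  # command-specific dwords 10-15
    )
    assert len(sqe) == 64
    return sqe

# Example: an admin Identify Controller command (opcode 0x06, CNS=1 in CDW10)
identify = build_sqe(opcode=0x06, cid=1, nsid=0, cdw10=1)
```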

Connecting storage nodes over a fabric is important because it allows multiple paths to a given storage resource. It also enables concurrent operations against distributed storage and provides a means to manage potential congestion. Further, it allows thousands of drives to be connected in a single pool of storage, since connectivity is no longer limited by the reach of PCIe but can ride on a fabric technology such as RoCE or Fibre Channel.

NVMe-oF describes a means of binding the regular NVMe protocol to a chosen fabric technology: a simple abstraction that lets native NVMe commands be transported over a fabric with minimal processing to map between the fabric transport and PCIe. Product demonstrations have shown that the latency penalty for accessing an NVMe SSD over a fabric, as opposed to a direct PCIe link, can be as low as 10 microseconds.
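
As a rough illustration of that mapping, NVMe-oF moves commands in capsules: the native 64-byte SQE travels unmodified, optionally followed by in-capsule data. This sketch (our own helper, not the spec's wire format, which varies by transport) reuses build_sqe from above:

```python
def build_command_capsule(sqe: bytes, in_capsule_data: bytes = b"") -> bytes:
    """Wrap an unmodified 64-byte NVMe command in a fabric capsule.

    Because the SQE travels as-is, the target needs only minimal
    work to replay it onto its local PCIe-attached drives. Framing
    details (headers, data placement rules) are transport-specific
    and omitted here.
    """
    assert len(sqe) == 64, "NVMe commands are always 64 bytes"
    return sqe + in_capsule_data

capsule = build_command_capsule(identify)
```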

The layered approach means that a binding specification can be created for any fabric technology, although some fabrics may be better suited to certain applications. Today there are bindings for RDMA (RoCE, iWARP, InfiniBand) and Fibre Channel, and work on a binding specification for TCP/IP has begun.
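
Conceptually, every binding fills in the same narrow contract: move capsules between host and controller. A hypothetical sketch of that layering (class and method names are ours, not from any specification or library):

```python
from abc import ABC, abstractmethod

class FabricTransport(ABC):
    """The contract a binding specification fills in: carry command
    capsules to the controller and responses back to the host."""

    @abstractmethod
    def send_capsule(self, capsule: bytes) -> None: ...

    @abstractmethod
    def recv_response(self) -> bytes: ...

class RdmaTransport(FabricTransport):
    """RoCE/iWARP/InfiniBand: capsules move via RDMA send/receive,
    with bulk data typically moved by RDMA read/write."""
    def send_capsule(self, capsule: bytes) -> None: ...
    def recv_response(self) -> bytes: ...

class TcpTransport(FabricTransport):
    """The in-progress TCP/IP binding: capsules framed over an
    ordinary TCP stream."""
    def send_capsule(self, capsule: bytes) -> None: ...
    def recv_response(self) -> bytes: ...
```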

Different products will use this layered capability in different ways. A simple NVMe-oF target consisting of an array of NVMe SSDs may expose all of its drives individually to the NVMe-oF host across the fabric, allowing the host to access and manage each drive on its own. Other solutions may take a more integrated approach, using the drives within the array to create one big pool of storage that is offered to the NVMe-oF initiator. With this approach, drive management can be handled locally within the array, without requiring the attention of the NVMe-oF initiator or any higher-layer software. It also allows the NVMe-oF target to implement and offer NVMe protocol features that the drives within the array may not support themselves.
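
The two exposure models might look like this in a deliberately simplified model (hypothetical names; a real target exports namespaces through NVMe subsystems, not Python objects):

```python
from dataclasses import dataclass

@dataclass
class Drive:
    capacity_gb: int

class NvmeOfTarget:
    def __init__(self, drives: list[Drive], pooled: bool = False):
        self.drives = drives
        self.pooled = pooled

    def exported_namespaces(self) -> list[int]:
        """Capacities (GB) the initiator sees across the fabric."""
        if self.pooled:
            # Integrated approach: one large namespace; drive-level
            # management stays local to the array.
            return [sum(d.capacity_gb for d in self.drives)]
        # Simple approach: every drive exposed individually, managed
        # directly by the initiator.
        return [d.capacity_gb for d in self.drives]

drives = [Drive(3840) for _ in range(24)]
print(NvmeOfTarget(drives).exported_namespaces())               # 24 namespaces
print(NvmeOfTarget(drives, pooled=True).exported_namespaces())  # [92160]
```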

A good example is secure erase. A lower-cost drive may not support the feature, but if that drive is placed in an NVMe-oF AFA target, the AFA can implement secure erase itself and advertise it to the initiator. The NVMe-oF target then handles the operations against the lower-cost drive so that, from the initiator's perspective, the feature is fully supported. This gives implementers a great deal of flexibility to meet customer needs by varying hardware versus software feature implementation, drive cost, and performance.
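
A sketch of how a target might bridge that gap, as seen below. The drive methods are hypothetical stand-ins; a real target would issue Format NVM commands and track their completion state.

```python
def secure_erase_namespace(drive) -> None:
    """Offer secure erase to the initiator even when the backing
    drive cannot perform it natively."""
    if drive.supports_secure_erase:
        # Pass the command straight through to the drive.
        drive.format_nvm(secure_erase=True)
    else:
        # Emulate: the target overwrites the user data itself, then
        # reports success to the initiator, which never needs to know
        # the drive lacked the feature.
        drive.overwrite_all(pattern=b"\x00")
```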

The recent plugfest at UNH-IOL focused on testing simple RoCE and Fibre Channel fabrics. In these tests, a single initiator and target pair were connected over a simple two-switch fabric. UNH-IOL performed NVMe protocol conformance testing, generating storage traffic to ensure data could be transferred error-free. Testing also involved inducing network disruptions to verify that the fabric could recover properly and transactions could resume.
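
The disruption side of such testing reduces to a simple loop, sketched below in purely illustrative form; the UNH-IOL harness is not public, and these helper objects and method names are our own invention.

```python
import time

def run_disruption_test(initiator, fabric, io_seconds: int = 30) -> bool:
    """Drive I/O across the fabric, break a link mid-stream, and
    verify traffic resumes error-free."""
    initiator.start_io(duration=io_seconds)
    time.sleep(io_seconds / 2)
    fabric.fail_link("switch1-switch2")   # induce the disruption
    initiator.wait_for_io()               # transactions should resume
    fabric.restore_link("switch1-switch2")
    return initiator.errors == 0
```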

In the data center, storage supports many different types of applications with an unending variety of workloads. NVMe-oF has been designed for flexible deployment, offering choices in drive cost and feature support, local or remote management, and fabric connectivity. That flexibility will enable wide adoption. No doubt, we’ll continue to see the NVMe ecosystem expand.