NVMe: Lower Prices, More Features Expand Use Cases

Anyone deploying NVMe-oF should examine their existing infrastructure investments, IT roadmaps, and expected workloads to choose the right flavor of NVMe-oF.

David Woolf

May 15, 2019


The wide adoption of Non-Volatile Memory Express (NVMe) over the last few years has revolutionized the storage industry, in no small part due to lower prices and better performance. With the introduction of additional features, such as management, more enterprises and hyperscale data centers are migrating to NVMe. The arrival of NVMe over Fabrics (NVMe-oF) promises to accelerate this trend for enterprises using a variety of infrastructures.

NVMe is architected with a layered approach, which enables NVMe commands and data to be carried over a variety of fabric transport technologies such as RDMA (RoCE, iWARP, InfiniBand), Fibre Channel, and now, TCP/IP.
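To make the layering concrete, here is a minimal Python sketch of the idea behind a fabric transport: a fixed-format command capsule is serialized on one end and carried over an ordinary TCP connection to the other, where it is parsed back into its fields. The 16-byte layout, opcode value, and helper names are invented for illustration; the real NVMe/TCP transport defines its own PDU formats in the NVMe-oF specification.

```python
import socket
import struct
import threading

# Toy capsule layout (NOT the real NVMe/TCP PDU format):
# opcode (1B), pad (1B), command id (2B), namespace id (4B), starting LBA (8B)
CMD_FORMAT = "<BxHIQ"
CMD_SIZE = struct.calcsize(CMD_FORMAT)  # 16 bytes

def pack_command(opcode, cmd_id, nsid, slba):
    """Serialize a toy NVMe-style command into a fixed-size capsule."""
    return struct.pack(CMD_FORMAT, opcode, cmd_id, nsid, slba)

def recv_exact(conn, n):
    """Read exactly n bytes from the connection (TCP is a byte stream)."""
    buf = b""
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed early")
        buf += chunk
    return buf

def serve_one(server, results):
    """Accept one connection, parse one capsule, record its fields."""
    conn, _ = server.accept()
    with conn:
        results.append(struct.unpack(CMD_FORMAT, recv_exact(conn, CMD_SIZE)))

# Carry the capsule over an ordinary TCP connection (loopback here).
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
results = []
t = threading.Thread(target=serve_one, args=(server, results))
t.start()

client = socket.create_connection(server.getsockname())
client.sendall(pack_command(0x02, 7, 1, 4096))  # 0x02 = read, in NVMe's I/O command set
client.close()
t.join()
server.close()

print(results[0])  # → (2, 7, 1, 4096)
```

The point of the sketch is the separation of concerns: the command format is independent of the wire underneath it, which is what lets the same NVMe protocol ride over RDMA, Fibre Channel, or TCP.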

NVMe technologies and use cases

NVMe/FC: Fibre Channel has a long legacy as a reliable storage networking technology and has a home in many enterprise datacenters. While the Fibre Channel (FC) community has consolidated over the years, the technology is still moving forward, with plans for 128G FC. The recent release of the FC-NVMe specification extends the capability of Fibre Channel SANs to carry the NVMe protocol, and therefore to efficiently attach NVMe SSDs.

This is an extremely important point for existing Fibre Channel customers. For enterprise datacenters with investments in Fibre Channel infrastructure, a software upgrade can enable FC-NVMe traffic to be sent alongside FCP traffic (Fibre Channel Protocol, which has its roots in SCSI) on the same network, using the same infrastructure. This extends the life of those infrastructure investments and creates an easy path to upgrading backend storage media to NVMe.

NVMe/RoCE: Another transport technology being leveraged for NVMe-oF is RoCE, or RDMA over Converged Ethernet. RDMA (Remote Direct Memory Access) is a good fit for carrying NVMe: it is designed around direct memory access, which maps well onto NVMe's own model of accessing flash memory. RoCE is a collection of protocols that adds congestion management and robustness on top of Ethernet. However, this capability isn't free: a proper RoCE deployment requires RoCE-capable NICs and switches, which can cost more than standard Ethernet gear.

Most NVMe/RoCE solutions are focused on single-rack deployments that need the lowest possible latency. Keeping data as physically close to the compute resources as possible, with minimal hops in between, is essential to keeping latency low. Those same physical constraints, however, also limit scalability.

There is some debate about whether RoCE or FC will deliver the absolute lowest latency. Each user will need to examine their own workload characteristics, as well as their existing infrastructure, to determine which is the right choice for their deployment.

NVMe/TCP: TCP is the newest transport protocol adopted for NVMe-oF. It is an important addition because it can run on standard datacenter Ethernet switches, a key distinction from RoCE. While TCP fabrics may not offer the ultra-low latency of RoCE fabrics, they have clear advantages in scalability.

One issue that NVMe/TCP is well poised to alleviate is stranded flash. Flash storage can be expensive, so users naturally want to ensure they are getting the best utilization possible. Many early NVMe deployments consisted of servers with NVMe SSDs directly attached via PCIe. It was difficult to share those storage resources between servers while maintaining the low latency that justified the investment in NVMe in the first place. As a result, many users were left with low utilization due to stranded storage.

NVMe/TCP allows storage to be shared across a TCP network at low latency, enabling better sharing of storage resources and eliminating the stranded-storage problem. From a cost perspective, users get much more out of their investment in flash storage, and this will drive NVMe adoption even further in the datacenter.
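As a back-of-the-envelope illustration of the stranded-flash argument, the sketch below compares utilization with direct-attached SSDs against the capacity a shared pool would need to serve the same demand. All of the numbers (SSD size, per-server working sets, headroom policy) are hypothetical and chosen only to show the shape of the math.

```python
# Illustrative numbers only: the SSD size and per-server demand figures
# below are hypothetical, not measurements from any real deployment.
ssd_tb = 4.0                                 # one direct-attached SSD per server
demand_tb = [0.5, 3.5, 1.0, 2.0, 0.8, 3.0]   # each server's working set, in TB

# Direct-attached: every server is capped by its own SSD, and spare
# capacity on one server cannot absorb demand from another (stranded flash).
das_capacity = ssd_tb * len(demand_tb)
das_used = sum(min(d, ssd_tb) for d in demand_tb)
print(f"direct-attached utilization: {das_used / das_capacity:.0%}")

# Pooled over NVMe/TCP: the same SSDs serve the aggregate demand, so the
# pool can be sized near total demand plus headroom (20% is an assumed policy).
pool_capacity = sum(demand_tb) * 1.2
print(f"pooled capacity needed: {pool_capacity:.1f} TB "
      f"vs {das_capacity:.1f} TB direct-attached")
```

With these particular numbers, the direct-attached servers hold 24 TB of flash but use only 10.8 TB (45% utilization), while a shared pool sized at aggregate demand plus headroom needs roughly 13 TB; the gap is the stranded capacity that pooling recovers.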

There is a lot of well-deserved excitement around NVMe/TCP. Its ability to bring the benefits of flash storage to TCP networks will have huge implications for the adoption of NVMe-oF, and for its use in datacenter-scale composed infrastructure systems. However, each NVMe-oF transport has its own strengths and ideal use cases. Anyone deploying NVMe-oF would do well to examine their existing infrastructure investments, IT roadmaps, and expected workloads in order to choose the flavor of NVMe-oF that will work for them.

About the Author(s)

David Woolf

Senior Engineer

David Woolf leads several efforts in the areas of storage and mobile technology at the University of New Hampshire InterOperability Laboratory (UNH-IOL). He is an active participant in a number of industry forums and committees that address conformance and interoperability, including the SAS Plugfest Committee, SATA-IO Logo Workgroup, and the MIPI Alliance Testing Workgroup, where he serves as co-chair. In addition, David is responsible for coordination of the UNH-IOL NVMe Integrators List and plugfests.
