NVDIMM: A New Breed Of Storage

An emerging storage technology promises big performance benefits for data centers and smartphones alike.

Jim O'Reilly

April 29, 2016


Non-Volatile Dual-Inline Memory Modules (NVDIMMs) are a new and very different storage technology. Instead of storing data on a drive or even a flash card, NVDIMMs replace the traditional memory sticks in a server and allow the data to remain in place when power is removed from the system. Mainstream system vendors are joining the NVDIMM party, with IBM, Hewlett-Packard Enterprise, and SuperMicro (with Netlist) all announcing products.

This is achieved by adding flash to the DRAM on the DIMM. When power goes away, whether deliberately or through a failure, the data in the DRAM is copied into NAND flash. When power is restored, that data is instantly available again without reloading it from a drive.

On its own, however, this hardware trick isn't enough. The operating system and compilers, as well as the applications themselves, all need to know that the data is persistent in order to take advantage of the non-volatility. This has led to the invention of two distinct NVDIMM models. Both promise huge performance benefits and are poised to impact everything from the data center to the smartphone.

One type -- NVDIMM-N -- uses a lot of DRAM, about 16GB per DIMM, and matches it with an equal amount of flash. This type of NVDIMM is blindingly fast, since data transfers back and forth at DRAM speed. It's best suited for use cases where the speed of access to persistent data is paramount, but it's limited to around a terabyte of total memory today.

NVDIMM-N is byte addressable. It is, after all, DRAM most of the time! This allows databases to be built in memory, for example, with direct access to records, obviating disk I/O and all the overhead it involves. A memcachedb-style structure would be dramatically faster than even the best solid-state solution, with an update requiring just a register-to-memory instruction instead of a trip through the file stack and interface overhead.
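To make that contrast concrete, here is a minimal sketch in C of what a direct, in-place update might look like, assuming the NVDIMM-N space is exposed to the application as a memory-mapped file. The path /mnt/pmem/records, the record layout, and the table size are illustrative assumptions, not any vendor's API.

/* Sketch: updating a record directly in byte-addressable persistent
 * memory. The update itself is an ordinary store to memory; there is
 * no per-record trip through the file stack. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

struct record {
    uint64_t key;
    uint64_t balance;
    char     name[48];
};

int main(void)
{
    /* Hypothetical file backed by NVDIMM-N space. */
    int fd = open("/mnt/pmem/records", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    size_t len = 1024 * sizeof(struct record);
    struct record *tbl = mmap(NULL, len, PROT_READ | PROT_WRITE,
                              MAP_SHARED, fd, 0);
    if (tbl == MAP_FAILED) { perror("mmap"); return 1; }

    /* The update: a register-to-memory store, not a 4KB block write. */
    tbl[42].balance += 100;

    /* Make the change durable. On a true persistent-memory mapping, a
     * cache-line flush instruction could serve the same purpose. */
    if (msync(tbl, len, MS_SYNC) < 0) perror("msync");

    munmap(tbl, len);
    close(fd);
    return 0;
}

The point of the sketch is granularity: one record changes in place, and durability is a flush rather than a sector rewrite.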

Since this looks like DRAM to the system, using RDMA for redundancy and cluster sharing is a given, with existing designs working just fine. There is, however, a downside to NVDIMM-N. To get byte addressability, the compiler, link editor, and applications all need to be modified. A persistent memory type has to be defined to separate that space from any standard DRAM in the system, and the linker has to bind an application's persistent data to the app when it loads.

In the app itself, recognizing that some memory segments are persistent while others are not requires a way to allocate space in the definition sections of the source code. All of these changes are in their infancy, but the result will be the ability to store data with inline code, rather than today's method of reading a sector, changing part of it (read-modify-write), and then writing 4KB chunks back to the disk. The overhead of I/O disconnects, process swapping, and interrupt handling all goes away. In the interim, NVDIMM-N can be treated as a persistent RAMDisk, which is still exceptionally fast.
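As an illustration of what allocating persistent space in the definition sections of the source code might eventually look like, the sketch below marks one variable as persistent by placing it in a dedicated linker section. The section name pmem_data and the loader support needed to bind that section to NVDIMM-N space are assumptions; no such standard exists today.

/* Illustrative only: distinguishing persistent from volatile data at
 * the point of definition. Requires hypothetical toolchain and loader
 * support to map the "pmem_data" section onto NVDIMM-N space. */
#include <stdint.h>

/* Ordinary DRAM: rebuilt on every run, lost on power failure. */
static uint64_t scratch_counter;

/* Persistent: placed in a dedicated section that the (hypothetical)
 * loader binds to non-volatile memory. */
__attribute__((section("pmem_data")))
static uint64_t committed_counter;

void bump(void)
{
    scratch_counter++;     /* volatile: gone after a power loss */
    committed_counter++;   /* persistent, assuming the store is flushed
                              to the NVDIMM media before power is lost */
}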

Another type of NVDIMM, NVDIMM-F, uses just a small amount of DRAM as a buffer in front of a very large flash store. It uses the DRAM bus to read and write a large flash memory, perhaps as much as several terabytes per DIMM pair. NVDIMM-F is an alternative to a PCIe SSD, with faster transfer rates and lower overhead. It is considerably slower than NVDIMM-N, but its use cases are the same as for solid-state drives, though economics currently confine it to the most demanding cases. NVDIMM-F requires the addition of drivers, but uses the standard I/O stack approach to access data.
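Because NVDIMM-F sits behind a driver and the normal I/O stack, application access looks the same as access to any other block device. Here is a minimal sketch, assuming the module appears as a hypothetical block device at /dev/nvdimmf0.

/* Sketch of the NVDIMM-F model: block-style access through the
 * ordinary I/O stack, just as with an SSD. The device path is
 * hypothetical. */
#define _GNU_SOURCE            /* for O_DIRECT on Linux */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/nvdimmf0", O_RDWR | O_DIRECT);
    if (fd < 0) { perror("open"); return 1; }

    /* O_DIRECT I/O typically requires sector-aligned buffers and sizes. */
    void *buf;
    if (posix_memalign(&buf, 4096, 4096) != 0) { close(fd); return 1; }

    /* Whole 4KB blocks move back and forth -- the familiar
     * read-modify-write granularity of any block device. */
    if (pread(fd, buf, 4096, 0) < 0)  perror("pread");
    if (pwrite(fd, buf, 4096, 0) < 0) perror("pwrite");

    free(buf);
    close(fd);
    return 0;
}

Contrast this with the NVDIMM-N sketch above: the update that needed only a single store there needs a full block read and write here.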

Both types of NVDIMM are available today from most system vendors and directly from distribution. There is a lively community of inventors making the technology even more useful; the community forum can be accessed via SNIA. With the huge performance benefits of the NVDIMM approach, we can expect to see revolutions in everything from battlefield computers to big-data analytics systems. Over time, as prices drop, this technology will invade almost all systems, and the result will be smaller, but more powerful, cloud clusters and even better smartphones.

(Image: Netlist NVDIMM)

Technology is still evolving, however! Micron and Intel are bringing a third NVDIMM type to market, using an alternative to flash. This memory, called 3D XPoint, promises much faster access than flash, approaching just a few DRAM cycles per access. That kind of speed will really accelerate market acceptance of the NVDIMM-N approach, while being a formidable challenge to flash SSDs in its own right.

Also emerging are the versions of Hybrid-Memory-Cube (HMC) packaging developed by vendors such as Intel, AMD and NVIDIA. These all mount the memory complex and CPU together on a silicon module, and in some variations even provide for 3D stacking of the devices. The technology allows very compact packaging, which leads to lower server power and much higher memory bandwidth.

When HMC is combined with NVDIMM technology, it’s easy to see that servers are about to undergo the largest revolution in design in decades. The implications will ripple throughout the systems. The need for fast SSDs will change. Fewer, more powerful systems mean smaller data center footprints. The balance between network and local storage will be revisited, while virtual SANs and hyper-converged systems may become more common. And a lot of programmers will be rewriting apps for much faster operation, leading to even smaller data center footprints!

Learn more about the changing storage landscape in the Storage Track at Interop Las Vegas this spring. Don't miss out! Register now for Interop, May 2-6.

About the Author

Jim O'Reilly

President

Jim O'Reilly was Vice President of Engineering at Germane Systems, where he created ruggedized servers and storage for the US submarine fleet. He has also held senior management positions at SGI/Rackable and Verari; was CEO at startups Scalant and CDS; headed operations at PC Brand and Metalithic; and led major divisions of Memorex-Telex and NCR, where his team developed the first SCSI ASIC, now in the Smithsonian. Jim is currently a consultant focused on storage and cloud computing.
