Vanished: The Line Between Main Memory And Primary Storage

SMART Storage Systems' ULLtraDIMM product, developed with Diablo Technologies, puts flash storage on the memory channel. It's a game changer for the storage industry.

David Hill

August 16, 2013


SMART Storage Systems recently announced the availability of flash storage on the memory channel of a server. Although at first blush, this seems to be only another place in the storage food chain where flash devices can be inserted, it is actually much more. There are a number of immediate applications for this new technology, but it also has profound implications for storage in the future as the distinction between main memory and primary storage vanishes.

But before we explore the implications, let's take a closer look at SMART's product. The company said that its ULLtraDIMM product, which contains its own NAND flash memory, can be plugged into a standard DDR3 DIMM socket, from where it can be accessed directly through a server memory channel. DDR3 is a DRAM interface specification, so the server implicitly assumes that it is accessing DRAM over the memory channel. A DIMM is a dual in-line memory module that typically contains DRAM chips; making flash acceptable to the server required memory channel expertise. That is why SMART (which SanDisk announced in July it is acquiring) partnered with Diablo Technologies.

Diablo's contribution is what it calls Memory Channel Storage (MCS), an interface to flash and other non-volatile memory. SMART adds its Guardian technology, which improves flash performance in several ways, including NAND endurance management (to cope with the physics of flash) and error management. ULLtraDIMM is the first flash memory placed directly on the memory channel, and no faster access to flash is possible than through this approach; that is why SMART calls it the "final frontier" for storage latency. The next closest approach has been PCIe flash storage; other implementations, such as flash arrays, have even higher latencies. Local, persistent storage connected to the memory bus eliminates the arbitration and data contention on the I/O hub that these other implementations have to accept.

Placing flash storage on the memory channel takes advantage of this technology in conjunction with CPU performance. Memory controllers in processors have long been tuned to achieve the absolute lowest latency possible in accessing data for applications. These memory controllers are massively parallel, which is essential for scaling memory bandwidth as the number of memory modules increases. Because the links to the processor are direct, the result is the lowest latency possible -- usually less than 5 microseconds for a write. That's surprisingly close to DRAM, which is inherently faster.

[2D NAND flash is fast reaching its limit. Read how Samsung's new 3D chip holds promise for flash memory's evolution in "The Next Step For Flash: 3D"]

Critically important is that this architecture enables deterministic -- that is, predictable -- latency. Even if the queue depth increases to hundreds or thousands of outstanding requests, flash on the memory channel can still deliver consistent latency for each request. (Naturally, response time may lengthen if the available physical storage is maxed out.)
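To make the notion of deterministic latency concrete, here is a minimal sketch of how one might check latency consistency: time a batch of small synchronous writes and compare the median to the 99th percentile. This is a generic illustration against an ordinary temporary file, not SMART's device or tooling; on memory-channel flash the claim is that the p99/p50 spread stays small even under load.

```python
import os
import tempfile
import time

def sample_write_latencies(n=200, size=4096):
    """Time n small synchronous writes and return per-write latencies in seconds."""
    buf = os.urandom(size)
    fd, path = tempfile.mkstemp()
    latencies = []
    try:
        for _ in range(n):
            t0 = time.perf_counter()
            os.write(fd, buf)
            os.fsync(fd)  # wait for the device, not just the page cache
            latencies.append(time.perf_counter() - t0)
    finally:
        os.close(fd)
        os.unlink(path)
    return latencies

lats = sorted(sample_write_latencies())
p50 = lats[len(lats) // 2]
p99 = lats[int(len(lats) * 0.99)]
print(f"p50={p50 * 1e6:.1f}us  p99={p99 * 1e6:.1f}us  spread={p99 / p50:.1f}x")
```

A deterministic device keeps the spread close to 1x as the request rate climbs; a device behind a contended I/O hub shows a long tail.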

The architecture also has other benefits. DRAM and flash can be blended on the memory channel (in fact, some DRAM is required). Depending upon requirements, the flash devices can be addressed as block storage devices, as if they were hard disk drives (HDDs), or as memory-mapped devices.
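The difference between those two access models can be sketched in a few lines. This uses an ordinary temporary file as a stand-in for the flash device (an assumption for illustration; a real MCS device would be exposed through the vendor's driver): block-style access goes through explicit read/write calls at byte offsets, while memory-mapped access makes the same bytes appear in the address space, so plain loads and stores replace I/O calls.

```python
import mmap
import os
import tempfile

def demo_access_models():
    """Write via block-style calls, observe via a memory map, and vice versa."""
    fd, tmp = tempfile.mkstemp()
    os.ftruncate(fd, 4096)
    try:
        # 1) Block-style access: explicit I/O calls at byte offsets,
        #    the way an HDD or SSD block device is addressed.
        os.pwrite(fd, b"block-style write", 0)

        # 2) Memory-mapped access: the same bytes appear in the address
        #    space, so ordinary loads and stores replace read()/write().
        with mmap.mmap(fd, 4096) as mem:
            seen_by_map = bytes(mem[:17])  # the mapping sees the block write
            mem[0:6] = b"mapped"           # a store, not a write() call
        seen_by_read = os.pread(fd, 6, 0)  # block-style read sees the store
    finally:
        os.close(fd)
        os.unlink(tmp)
    return seen_by_map, seen_by_read

print(demo_access_models())
```

Both views address the same persistent bytes; which one an application uses depends on whether it wants a disk-like device or something closer to main memory.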

Potential Applications

High-performance, demanding applications that require low latency, consistent latency, and scalability would seem like good candidates for flash storage modules attached to the memory channel.

One example is high-frequency trading in financial services. With such an ultra-high-performance application, why not use all DRAM instead of a combination of DRAM and flash? The combination could be significantly less expensive, and the small loss of performance may be well compensated by the lower price.

Other applications include increasing the transactions per second for a database application, increasing the number of virtual machines per node, servicing more virtual desktop users faster, and serving as a platform for big data analytics, as in part of a Hadoop ecosystem.

Personally, I think in-memory computing for real-time business will be a big beneficiary of this approach. For example, SAP HANA has a very powerful, sophisticated technology that currently employs DRAM as if it were primary storage and not just memory. Still, DRAM is much more expensive than flash. If I were SAP or one of the other in-memory computing suppliers, I would be dancing in the streets as Economics 101 price elasticity says that the market potential goes up dramatically if the price decreases.

Game-Changing Technology

So there are a number of use cases today, but the real implication of flash storage on the memory channel is for the storage industry of the future. I am not given to hyperbole, but memory-channel-attached flash storage is a game changer, a sea change, a paradigm shift, and a revolution. The prevailing paradigm today is shared storage as represented by the storage area network (SAN). Decoupling physical servers from the primary storage they use to access data required by their applications has proven highly successful for both cost and management reasons.

MCS, on the other hand, is a back-to-the-future approach with a twist. With direct-attached storage (DAS), HDDs were attached to a physical server through an I/O bus of some kind, not the memory channel itself. Those HDDs held primary storage, which contains the official production copy of the data. When DAS moves to the memory channel, primary storage moves there too, blurring the distinction between memory and primary storage. In addition, no HDDs need apply, as they can never move to the memory channel.

Now, while MCS is useful for many use cases, it is not by itself a complete storage environment in the class of a SAN. That requires a number of new capabilities, such as the ability to turn storage capacity across a large cluster of servers into a shared pool of storage (a virtual SAN). Options such as those from PernixData and EMC's ScaleIO may be the answer. In addition, a number of storage and data management capabilities have to be made available, such as quality of service, multi-tenancy, and data protection.

But why go from a physical SAN to, say, a virtual SAN when not everyone needs the enhanced performance? In his book "The Innovator's Dilemma," Clayton Christensen discusses the concept of "good enough" performance, where performance is any dimension that provides product differentiation. Innovators can introduce disruptions whose good-enough performance meets many customers' needs without commanding a premium price. Memory channel storage seems to go against Christensen's concept, in that its focus is providing more than good-enough performance. However, many high-performance requirements today aren't met cost-effectively, so the required performance cannot yet be had at a "good enough" price.

But note that many organizations may not know the value of additional performance for applications such as real-time business intelligence -- the process of delivering information for insight and possible action about business processes as they happen. They may not be able to predict performance requirements, or how planned or unplanned bursts in demand will be handled. Moreover, extra performance by itself is not bad; what's bad is having to pay a premium for it (if sports cars were less expensive, more people would buy them). As cost approaches parity, the increased performance outweighs whatever premium you might pay.

And as I have discussed in previous posts, flash for primary storage may not be more expensive than a hard disk solution. Once price goes out the door as an issue and management issues are resolved, the move will be to flash, and the ultimate move will be to flash on the memory channel.

Mesabi Musings

Main memory and storage have always been seen as distinct, even though they are closely allied in function and form. The innovation in SMART's ULLtraDIMM form factor module has blurred that distinction.

Main memory can now serve as primary storage if one wants it to, with very low, consistent latency for high-performance applications from financial services to big data. But in the long run, the technology is also disruptive to the storage industry: it is a return to a DAS view of the storage universe, where servers and their storage are even more tightly coupled than ever, as opposed to the shared-storage SAN universe that has proven so popular.

As for now, IT teams should focus on the immediate uses and benefits of the Diablo-SMART innovation, but they should also keep their eyes peeled over the course of the next several years as to how the world of the virtual SAN will impact them.

Neither Diablo Technologies nor SMART Storage Systems is a client of David Hill and the Mesabi Group.
