DRAM is faster and more durable than flash-based storage. But DRAM is also volatile, losing its data the moment it loses power, and it is more expensive than flash. This has made flash the "go-to" high-performance storage option, with DRAM storage options often shelved in favor of flash. But does DRAM still have a role in the storage infrastructure?
The fact is, data center professionals still have a keen interest in DRAM, and their interest is on the rise. The 2011 Storage Switzerland article The Advantages of DRAM-Based SSD has more daily readers now than it did when it first appeared. What's driving the increased interest in DRAM as a storage solution?
First, an increasing number of servers can support much higher RAM capacities than they could a few years ago. Even mid-range server hardware can typically hold more than 1TB of DRAM, and while that 1TB might cost 3 to 5 times more than flash storage, its performance capabilities are very attractive. Also, this storage is directly accessible by the CPU over the memory bus, so there's no storage protocol interconnect to worry about -- in other words, the lowest possible latency.
Second, there are several solutions to volatility and cost challenges that a DRAM-based storage solution presents.
Volatility can be addressed by making sure your servers don't lose power. Battery backup is a common solution for most mission-critical servers. If that's not enough, volatility can be overcome by implementing a capacitor in conjunction with DRAM, as discussed in this article. In these cases, if power fails, the capacitor keeps the DRAM powered long enough for it to dump its contents to flash memory on the same DRAM module.
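The capacitor-backed dump described above can be illustrated with a minimal sketch. This is purely illustrative pseudocode in Python, not any vendor's firmware: the class name, methods, and sizes are all hypothetical, and the "power fail" handler simply models copying volatile DRAM contents into a non-volatile flash region on the same module.

```python
# Hypothetical model of a capacitor-backed DRAM module: on a power-fail
# signal, residual capacitor energy is used to dump DRAM into on-module
# flash; on restore, the saved image is copied back. Names are illustrative.

class NVDIMMModule:
    def __init__(self, size: int):
        self.dram = bytearray(size)   # volatile working memory
        self.flash = bytes(size)      # non-volatile backup region

    def write(self, offset: int, data: bytes) -> None:
        self.dram[offset:offset + len(data)] = data

    def on_power_fail(self) -> None:
        # Runs on capacitor power: dump DRAM to flash before energy runs out.
        self.flash = bytes(self.dram)

    def on_power_restore(self) -> None:
        # Copy the saved image back so no acknowledged writes are lost.
        self.dram = bytearray(self.flash)

module = NVDIMMModule(64)
module.write(0, b"critical write")
module.on_power_fail()         # capacitor-backed dump to flash
module.dram = bytearray(64)    # simulate DRAM losing its contents
module.on_power_restore()
assert module.dram[:14] == b"critical write"
```

The key property modeled here is that the dump happens after the power failure, which is exactly why a hardware lockup (where power never drops) is the harder failure case, as noted below.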
Of course, neither of these solutions will protect you from a hardware lockup. The power-protection circuitry is not likely to kick in because there is still power to the server -- you just can't use it. If DRAM is being used to store writes, data could be lost. Thanks to virtualization, the risk is likely isolated to a single VM -- but some risk remains.
The issue of cost can be overcome by using DRAM space efficiently as a cache, or as a tier within a caching architecture. A number of companies are already leveraging flash in this way. As discussed in this recent study, companies that add deduplication or compression technologies can minimize the cost disadvantage of DRAM. Additionally, deduplication should work even better in DRAM than on flash, since the redundancy check occurs at memory speeds. It should also deliver greater value, since the cost of DRAM is higher.
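To make the dedupe-in-DRAM argument concrete, here is a minimal sketch of inline deduplication in a memory cache. It is not any vendor's implementation; the class and method names are invented for illustration. Each block is fingerprinted, and duplicate blocks are stored only once, so a little expensive DRAM holds far more logical data.

```python
import hashlib

# Illustrative inline-deduplicating DRAM cache: blocks are hashed at
# memory speed, and identical blocks share one physical copy.

class DedupCache:
    def __init__(self):
        self.blocks = {}   # fingerprint -> block data (stored once)
        self.index = {}    # logical address -> fingerprint

    def put(self, addr: int, block: bytes) -> None:
        fp = hashlib.sha256(block).hexdigest()  # the redundancy check
        self.blocks.setdefault(fp, block)       # store unique data once
        self.index[addr] = fp

    def get(self, addr: int) -> bytes:
        return self.blocks[self.index[addr]]

    def unique_bytes(self) -> int:
        # Physical DRAM consumed by block data.
        return sum(len(b) for b in self.blocks.values())

cache = DedupCache()
for addr in range(100):
    cache.put(addr, b"A" * 4096)   # 100 logical copies of the same block
print(cache.unique_bytes())        # 4096 -- one physical copy backs all 100
```

The fingerprint lookup is the operation that runs "at memory speeds" in the DRAM case; on flash, the same check would be gated by storage latency.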
These caching methods also help minimize data exposure caused by DRAM's volatility. If designed correctly, they should be able to flush the cache even if an individual VM locks up.
Therefore, DRAM's role going forward is probably as a tier within a caching technology, where writes are sent to the DRAM area and coalesced prior to being written to the flash area. This will improve application response time and increase flash life expectancy. In fact, most SSDs already use a small amount of DRAM as a buffer area. As discussed in this article, vendors are going to great lengths to ensure that data is correctly written and protected.
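The write-coalescing behavior described above can be sketched in a few lines. This is a simplified illustration, not a real caching product: repeated writes to the same block are absorbed in the DRAM tier, so flash sees one program cycle per dirty block rather than one per application write.

```python
# Minimal sketch of a coalescing DRAM write tier: writes are acknowledged
# at DRAM speed, overwrites to the same block merge in place, and a flush
# issues only one flash write per dirty block.

class CoalescingWriteBuffer:
    def __init__(self):
        self.pending = {}        # block number -> latest data for that block
        self.flash_writes = 0    # writes that actually reached flash

    def write(self, block: int, data: bytes) -> None:
        # Fast path: later writes to the same block replace earlier ones.
        self.pending[block] = data

    def flush(self) -> None:
        # One flash write per dirty block, however many application
        # writes hit that block since the last flush.
        for block in sorted(self.pending):
            self.flash_writes += 1   # stand-in for the actual flash I/O
        self.pending.clear()

buf = CoalescingWriteBuffer()
for i in range(1000):
    buf.write(i % 10, f"update {i}".encode())  # 1,000 writes to 10 blocks
buf.flush()
print(buf.flash_writes)  # 10 -- flash absorbed 10 writes, not 1,000
```

Fewer program cycles is exactly why this pattern extends flash life expectancy, and acknowledging writes from DRAM is what improves application response time.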
DRAM is not dead. Thanks to the memory capacity of modern servers and the intelligence of next-generation caching solutions, DRAM has a significant role to play in the storage infrastructure. It is a role we expect to see expanding over the next few years.