SSDs: A Guide To Using Flash In The Datacenter

With the price of SSDs rapidly declining, it's tempting to put them in every new server or storage system. Not so fast.

Kurt Marko

February 13, 2014


Silicon is the darling of the storage world: Out with spinning disks, in with flash chips. There is a lot to like about solid-state storage. It offers faster I/O, lower latency, lower power consumption, and instant-on from sleep states for lightning-fast access to cold data, all from smaller components easily adapted to a variety of form factors. Indeed, flash memory's minuscule space and power requirements are a key enabler of mobile devices and the reason SSDs are displacing HDDs in most laptops.

But in datacenters, where storage requirements are measured in petabytes, not terabytes, flash must be used opportunistically. Despite the claims of some solid-state proponents, the all-flash datacenter is still years from becoming a reality, as described in InformationWeek's 2014 State of Storage report. But the price per bit differential between flash and disk is narrowing, albeit from a very wide gap, meaning it's rational and economical to use SSDs in more and more applications.

There are four major categories of flash product: server-side PCIe cards, server-side SSDs, hybrid flash-HDD storage arrays, and all-flash systems using either SSDs or proprietary memory cards. The State of Storage survey found that 40% of respondents make use of SSDs in arrays, up 8 points since 2013.

However, all-flash arrays are still a niche, deployed by only 16%, with a mere 3% using them extensively. Thirty-nine percent of respondents use solid state in servers, up 10 points, with the vast majority (83%) opting for SSDs over PCIe adapters. But server deployments are still selective, with almost two-thirds of respondents using solid state in no more than 20% of their servers.

Flash Use Cases

The various product categories lead to several flash usage scenarios: as server-side application caches, as part of hybrid flash-disk storage volumes for a mix of application types, or as a dedicated tier for high-throughput, low-latency applications. Indeed, the InformationWeek survey found that databases are the most common application for solid-state storage.

Given the potential for dramatic performance improvements, it's tempting to move most transactional applications with heavy disk I/O onto all-flash systems. However, this only makes sense if the data set is relatively small and the ratio of data reads to writes is high. Smaller data sets matter because flash still costs 10 to 30 times more per bit than disk, while read-heavy workloads minimize exposure to the other major flaws of flash storage.

Those flaws include asymmetric read versus write performance; performance that degrades as devices fill up (NAND chips write data in pages but erase it in larger blocks, forcing garbage collection); and limited endurance, measured in the number of write cycles a flash cell can reliably deliver, a constraint felt most acutely in the MLC devices used in high-capacity SSDs.
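That endurance limit can be translated into a rough drive lifetime. The sketch below is purely illustrative: the function, the write-amplification factor, and all the numbers are assumptions for the example, not figures from any datasheet.

```python
# Illustrative sketch: estimating SSD lifetime from its endurance rating.
# All numbers are assumed values, not vendor specifications.

def ssd_lifetime_years(capacity_gb, pe_cycles, host_writes_gb_per_day,
                       write_amplification=2.0):
    """Years until the rated program/erase cycles are exhausted.

    write_amplification models the extra NAND writes caused by
    page-based writes combined with block-based erases.
    """
    total_nand_writes_gb = capacity_gb * pe_cycles
    nand_writes_per_day = host_writes_gb_per_day * write_amplification
    return total_nand_writes_gb / nand_writes_per_day / 365.0

# A hypothetical 400 GB MLC drive rated for 3,000 P/E cycles,
# absorbing 200 GB of host writes per day:
years = ssd_lifetime_years(400, 3000, 200)  # roughly 8.2 years
```

The same arithmetic shows why write-heavy workloads erode the economics: doubling the daily write rate or the write amplification halves the drive's useful life.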

The performance of all-flash storage systems like those from Cisco (Whiptail), EMC (XtremIO), IBM (Texas Memory) and Violin, which can top a million IOPS for random reads, can't be beat. And for database applications, where every extra IOPS makes a big difference, or workloads such as VDI that are subject to extreme performance spikes (think Monday morning boot storms), they make great sense.

Hybrid Storage Arrays

However, the sweet spot for flash is the hybrid array, a hot product category where flash acts as a performance accelerant for large pools of disk. Nimble, Tegile and Tintri are some of the storage companies offering hybrid arrays.

Much like a hybrid car, where the electric motor handles demanding stop-and-go driving much more efficiently than a gasoline engine but only over a limited range, hybrid arrays provide just enough flash for the workloads that need it, which Nimble VP of Marketing Radhika Krishnan estimates at between 10% and 40% of capacity. Hybrid designs allow independent scaling of storage capacity and I/O throughput, and hence yield a much lower total cost for workloads needing more than about 700 IOPS per TB of storage.
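Krishnan's 10% to 40% estimate explains the economics: if only a fraction of the data needs flash speed, a hybrid pool costs far less than an all-flash one. A minimal back-of-envelope sketch, with assumed per-TB prices (not vendor quotes) and an assumed flash fraction:

```python
# Back-of-envelope comparison of all-flash vs. hybrid array cost for a
# given pool size. Prices and the flash fraction are illustrative
# assumptions, not figures from any vendor.

FLASH_COST_PER_TB = 2000.0  # assumed $/TB for enterprise flash
DISK_COST_PER_TB = 100.0    # assumed $/TB for nearline disk

def all_flash_cost(capacity_tb):
    """Cost of serving the entire data set from flash."""
    return capacity_tb * FLASH_COST_PER_TB

def hybrid_cost(capacity_tb, flash_fraction=0.2):
    """Cost of a small flash tier fronting a full-capacity disk pool."""
    flash_tb = capacity_tb * flash_fraction
    return flash_tb * FLASH_COST_PER_TB + capacity_tb * DISK_COST_PER_TB

# A hypothetical 100 TB pool with 20% of capacity backed by flash:
all_flash = all_flash_cost(100)  # 200,000
hybrid = hybrid_cost(100)        # 40,000 flash + 10,000 disk = 50,000
```

Even though the hybrid buys the full 100 TB of disk plus the flash tier, it comes in at a quarter of the all-flash price under these assumptions; the gap narrows as the flash fraction grows.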

[Read more about how enterprises and cloud providers are finding SSDs to be an increasingly attractive low-power, faster storage option in "Solid-State Storage: On The Road To Datacenter Domination."]

Software enhancements can make some hybrid systems even more compelling. Many incorporate deduplication of primary storage that can shrink the data footprint by a ratio of 5:1 or more, depending on the data type.

Likewise, some can dynamically vary the amount of flash allocated to a specific volume without disrupting underlying applications, moving workloads from HDD to flash in under a millisecond in the case of Nimble's Cache Accelerated Sequential Layout (CASL) technology. This allows swapping out HDDs for SSDs to maintain a given performance level if an array's application workload becomes much more I/O intensive. But choosing the optimal ratio of flash to disk can be tricky.

In the future, flash allocation decisions will be automated as large pools of disk and flash are dynamically assigned to virtualized cloud-like workloads based on a real-time assessment of their performance needs. Indeed, Google is already doing just this via its Janus flash provisioning system. In a paper presented at last year's USENIX Technical Conference, Google engineers described a system that samples running applications and generates metrics about their cacheability.

"From these metrics, we formulate and solve an optimization problem to determine the flash allocation to workloads that maximizes the total reads sent to the flash tier, subject to operator-set priorities and bounds on flash write rates," they wrote.

Using test results from production workloads across several datacenters, the engineers found that the algorithm improved the flash hit rate by 47% to 76% over a scheme using fixed flash allocations in an unpartitioned tier. In one test, the workloads averaged about 3.33 PB, and flash usage fluctuated between 45 TB and 100 TB over the two-day test span. By varying the flash allocation between 60 TB and 100 TB, the algorithm kept flash read rates within about 10% to 20% of 30,000 IOPS, with a flash hit rate of just over 23%. That roughly 35:1 ratio of total capacity to flash demonstrates the kind of return on investment hybrid designs can provide.
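Janus's exact optimization is laid out in the paper, but the core idea, spending each marginal unit of flash where it buys the most cached reads, can be sketched with a simple greedy loop. The workload names and diminishing-returns cacheability curves below are invented for illustration; the real system derives such curves by sampling live traffic.

```python
# Simplified, illustrative sketch of Janus-style flash provisioning:
# divide a shared flash pool among workloads to maximize total reads
# served from flash. Not Google's actual algorithm.
import math

def allocate_flash(workloads, flash_total_tb, step_tb=1.0):
    """Greedy allocation: repeatedly grant one step of flash to the
    workload with the highest marginal flash read rate.

    workloads: dict of name -> hit_rate(alloc_tb) function returning
    the reads/sec served from flash at a given allocation.
    """
    alloc = {name: 0.0 for name in workloads}
    remaining = flash_total_tb
    while remaining >= step_tb:
        # Marginal benefit of one more step for each workload.
        def gain(name):
            f = workloads[name]
            return f(alloc[name] + step_tb) - f(alloc[name])
        best = max(workloads, key=gain)
        if gain(best) <= 0:
            break  # no workload benefits from more flash
        alloc[best] += step_tb
        remaining -= step_tb
    return alloc

# Hypothetical cacheability curves (reads/sec as a function of flash TB):
workloads = {
    "oltp":  lambda tb: 30000 * (1 - math.exp(-tb / 5.0)),
    "batch": lambda tb: 2000 * (1 - math.exp(-tb / 10.0)),
}
alloc = allocate_flash(workloads, flash_total_tb=40)
```

With these curves, the cache-friendly "oltp" workload captures most of the pool while "batch" still gets a share, mirroring the paper's point that a fixed, unpartitioned allocation leaves hit rate on the table.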

Application Requirements Dictate Choices

For applications with well-characterized working sets that are pinned to a fixed set of machines, server-side flash makes a lot of sense, as long as you can size the cache to produce a high hit rate without breaking the bank. Virtualized workloads that often migrate between systems are generally better served by a hybrid array, which makes it easier to match flash capacity to the workload.

Another option, at least for file-based applications, is to front traditional NAS arrays with flash appliances such as Avere's FXT series that both virtualize the namespace and handle data placement to legacy NAS filers. Like hybrid arrays, this scheme allows independent optimization of flash and disk capacities.

As applications are rewritten for distributed cloud stacks, Google-like scenarios with applications using local storage and a centralized cloud controller dynamically managing storage pools that span physical servers will become more common for enterprise applications. In the meantime, software from the likes of PernixData and QLogic that can pool server-side flash and provide cache coherency across many servers is an intriguing alternative for applications running on VMware or traditional SANs.

Flash vendors promising all-flash performance at a hard disk price are no longer selling just a dream. Through judicious use of the latest storage hardware and software, enterprises can come close to having it all.
