Solid-state storage is marching through the datacenter, displacing disks in everything from servers to standalone storage arrays. The reasons are clear: significantly lower power, faster access times -- particularly for reads -- and, most importantly, price points that make SSDs a technically feasible and fiscally preferable alternative to mechanical disks for a growing number of applications.
SSDs are certainly becoming more common in enterprise datacenters, according to InformationWeek's 2014 State of Enterprise Storage Survey, which found 40% of respondents using SSDs in disk arrays, up eight points from last year, and 39% deploying SSDs in servers, up 10 points. Deployments are still broad but not deep: nearly two-thirds of respondents outfit 20% or fewer of their servers with SSDs, and just 48% have SSDs in more than 20% of their storage arrays.
While enterprise SSD use is clearly on the rise, the big driver of SSD adoption is cloud service providers (CSPs) like Apple, Facebook, Google and Microsoft using SSDs in ways few in the industry would have predicted. Early solid-state deployments focused on high-end, transaction-heavy applications where their I/O throughput meant one or two SSDs could replace a shelf full of expensive 15K rpm SAS HDDs. Today, rapid price erosion -- particularly for consumer-grade flash memory -- means CSPs are now turning to SSDs for bulk data storage and caching -- what Kevin Dibelius, director of enterprise storage at Micron, calls "read often, write few" applications.
Read-dominant applications are a good fit for cheaper consumer-grade drives since they don't exacerbate the most significant shortcoming of NAND flash devices: durability. As Nimble Storage Marketing VP Radhika Krishnan points out, there's an inherent tradeoff between flash capacity and reliability. Higher density is achieved using tighter process geometries, multi-level memory cells and less error correction data, all of which make the device less durable and reliable. This doesn't bother CSPs since they are increasingly using flash for cold, archival storage on highly redundant and distributed file systems, where a drive or even system failure isn't catastrophic.
The result is a dramatic change in flash requirements. In the past, when high-IOPS, transaction-oriented workloads were the predominant SSD application, devices were typically specified to achieve 10 drive fills per day for five years, Dibelius said. That's 10 complete writes of every memory cell, every day, for five years -- almost 20,000 write cycles. Today, customers often need products good for only one fill per day, or less -- specs that are in line with consumer-grade MLC drives, he said. Indeed, he said MLC is appropriate for about 90% of Micron's new customer inquiries.
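The endurance arithmetic above is simple enough to sketch. The functions and drive capacity below are illustrative, not Micron's spec sheet; "drive fills per day" is often quoted as DWPD (drive writes per day):

```python
def write_cycles(dwpd, years):
    """Complete fills of every memory cell over the rated lifetime."""
    return dwpd * 365 * years

def tbw(dwpd, years, capacity_tb):
    """Total terabytes written over the rated lifetime."""
    return write_cycles(dwpd, years) * capacity_tb

# Legacy transaction-oriented spec: 10 fills/day for 5 years.
print(write_cycles(10, 5))   # 18250 -- "almost 20,000" write cycles

# Read-dominant cloud spec: 1 fill/day for 5 years.
print(write_cycles(1, 5))    # 1825 cycles, in consumer-MLC territory
```

The order-of-magnitude drop in required endurance is exactly what lets consumer-grade MLC, with its few-thousand-cycle cells, serve these "read often, write few" workloads.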
In fact, Dibelius notes that some CSPs have even asked to buy off-spec NAND chips -- those that have failed Micron's QA testing -- so they can roll their own flash memory storage systems at an even lower price. They can do this because their cloud infrastructure is sufficiently redundant that a high rate of drive failures doesn't compromise data integrity. Although Micron hasn't yet sold any of these testing-room rejects out of concern over the long-term customer support implications, it's clear that for some flash buyers, price and capacity are far more important than performance, reliability and write endurance.
Even with MLC SSDs crashing through the $1/GB barrier, there's still quite a price and capacity gap between flash and hard disks. However, some flash advocates, such as John Scaramuzzo, SVP and GM of SanDisk's enterprise group, argue that solid-state memory technology has so outpaced the evolution of magnetic hard disks that the gap is rapidly closing. HDD manufacturers resorting to increasingly abstruse and expensive techniques like shingled magnetic recording and helium-filled drives illustrate Scaramuzzo's point that HDDs "are running out of gas."
[Read why Howard Marks thinks spinning disks still will be the better bargain through 2020 in "SSDs Cheaper Than Hard Drives? Not In This Decade."]
Meanwhile, NAND flash technology marches on. Scaramuzzo predicts 4 and 8 TB drives by year end and 16 TB next year. Dibelius sets the bar even higher, claiming that Micron's new 16 nm process technology and 16-die stacks should allow the company to achieve capacities of 25 TB before needing to move on to the next so-called "1y" (sub-16 nm) process node in a year or two. Of course, these will be MLC devices, but the upshot is that more and more bulk storage applications will become feasible -- and actually preferable -- to run on flash systems.
In the near-term, hybrid flash-HDD systems like those from Avere Systems, Nimble, Tegile and most of the major storage vendors can deliver all-flash performance with hard-disk economics. They do this by dynamically adjusting the size of solid state caches and storage partitions in ways that are transparent and non-disruptive to applications.
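The core mechanism behind these hybrid arrays -- serving hot blocks from a flash tier while cold data stays on disk -- can be sketched in a few lines. This is a toy LRU read cache under assumed names (`HybridStore`, `flash_blocks`), not any vendor's actual implementation, and it omits the dynamic partition resizing the real products do:

```python
from collections import OrderedDict

class HybridStore:
    """Toy hybrid store: a small flash LRU cache in front of an HDD tier."""

    def __init__(self, flash_blocks):
        self.flash = OrderedDict()      # block_id -> data, in LRU order
        self.flash_blocks = flash_blocks
        self.hdd = {}                   # dict stands in for the disk tier

    def write(self, block_id, data):
        self.hdd[block_id] = data       # writes land on the HDD tier
        self.flash.pop(block_id, None)  # invalidate any stale cached copy

    def read(self, block_id):
        if block_id in self.flash:      # flash hit: fast path
            self.flash.move_to_end(block_id)
            return self.flash[block_id]
        data = self.hdd[block_id]       # flash miss: read from disk...
        self.flash[block_id] = data     # ...and promote the block to flash
        if len(self.flash) > self.flash_blocks:
            self.flash.popitem(last=False)  # evict the coldest block
        return data
```

Because promotion and eviction happen entirely inside the `read` path, applications see one address space and never know which tier served them -- the transparency the hybrid vendors advertise.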
Much as flash has gradually displaced hard disks in most consumer devices, it is marching through the datacenter and taking a growing slice of the storage pie. Although still a relatively small share of total storage, the emergence of flash as a viable bulk cold-storage medium for CSPs, coupled with the rapid pace of technology improvement, means that predominantly- or all-flash datacenters will be a reality for many organizations within this decade.