Flash Memory: Blurring The Line Between SLC and MLC

The flash market has become more complex with SSDs that can switch flash blocks from multi-level cell to single-level cell and back.

Howard Marks

November 6, 2014


Just a few years ago, most steely-eyed storage guys had a pretty simplistic view of flash memory. Single-level cell (SLC) flash, worth well more than its weight in gold, was fast and endurant enough for crushing enterprise workloads, while everything else was suitable for laptop SSDs at best.

Most of us have evolved beyond that simplistic view, but I still hear otherwise knowledgeable storage folks admit that they don't really know what today's flash market looks like.

In part, that's because the flash foundries have blurred the line between SLC and the more cost-effective multi-level cell (MLC). Today's SSDs can actually switch the flash blocks from MLC, or even triple-level cell (TLC), to SLC and back again.

As it erases each block of 512 KB to 4 MB, the SSD's flash controller can set that block to act as SLC, storing one bit as two charge levels in the cell, or as MLC, which uses four charge levels to store two bits per cell. Of course, the flash controller has to manage the change in capacity as it switches blocks between modes.
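To make the capacity bookkeeping concrete, here is a minimal sketch of what a controller has to track when it flips an erased block between modes. The block size, class names, and API are all hypothetical, not taken from any real firmware; the point is simply that an SLC block holds half the bits of the same block in MLC mode.

```python
# Hypothetical sketch of per-block mode bookkeeping. All sizes and
# names are illustrative, not from any real SSD firmware.

BLOCK_CELLS = 2 * 1024 * 1024          # cells per block (illustrative)
BITS_PER_CELL = {"SLC": 1, "MLC": 2}   # 2 charge levels vs. 4

class FlashBlock:
    def __init__(self):
        self.mode = "MLC"

    def capacity_bits(self):
        return BLOCK_CELLS * BITS_PER_CELL[self.mode]

    def set_mode_after_erase(self, mode):
        # Mode can only change when the block is erased.
        self.mode = mode

def total_capacity_bytes(blocks):
    return sum(b.capacity_bits() for b in blocks) // 8

blocks = [FlashBlock() for _ in range(4)]
before = total_capacity_bytes(blocks)
blocks[0].set_mode_after_erase("SLC")  # halves that block's capacity
after = total_capacity_bytes(blocks)
```

With these illustrative numbers, switching one block to SLC costs the drive exactly half that block's MLC capacity, which the controller's free-space accounting must absorb.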

In SLC mode, these chips aren't quite as fast or endurant as pure SLC chips, which cost six times as much as MLC, and most will only allow some percentage of their flash to be in SLC mode at a time. Micron, for example, allows up to 50% SLC.

SSD controllers can use this capability in several interesting ways. At the most basic level, they can set 10% or so of their capacity as SLC and use it as a cache. That would boost performance and endurance; the very hottest data would hammer the SLC section, not the lower-endurance MLC.
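The caching policy just described can be sketched in a few lines. This is an assumed decision rule, not any vendor's actual algorithm: hot writes go to SLC-mode blocks until a fixed budget (here 10%, per the example above) is exhausted, and everything else lands in MLC.

```python
# Illustrative SLC-cache placement policy (invented, not a real design):
# hot data fills a capped SLC region; cold data goes to dense MLC.

SLC_FRACTION = 0.10  # reserve ~10% of blocks for SLC mode

def choose_mode(total_blocks, slc_blocks_in_use, is_hot):
    """Pick the mode for the next erased block being allocated."""
    budget = int(total_blocks * SLC_FRACTION)
    if is_hot and slc_blocks_in_use < budget:
        return "SLC"  # the hottest data hammers the high-endurance region
    return "MLC"      # cold data goes to cheaper, denser blocks
```

For example, with 100 blocks and 5 already in SLC mode, a hot write still gets an SLC block; once 10 blocks are in SLC mode, even hot writes fall through to MLC.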

More sophisticated controllers could integrate mode switching into their garbage collection process. That process could consolidate data from SLC blocks to MLC blocks, setting the new empty blocks to SLC or MLC based on load and the amount of data to be consolidated.

If the SSD has sufficient overprovisioned space remaining, it could also switch blocks from MLC to SLC as the electrons that become trapped in the tunnel oxide layer make it difficult to differentiate between the four charge levels needed to store two bits. Since the 1 and 0 states are further apart in an SLC cell, a small base charge that won't erase is less significant.
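The wear-based decision above can be sketched as a simple rule applied each time garbage collection erases a block. The thresholds and names here are invented for illustration; real controllers use far more elaborate wear and read-margin statistics.

```python
# Hedged sketch of wear-based mode demotion during garbage collection.
# Thresholds are invented; real firmware tracks much richer statistics.

MLC_WEAR_LIMIT = 3000   # erase cycles after which 4-level reads get risky
MIN_SPARE_BLOCKS = 16   # over-provisioning floor (illustrative)

def mode_after_gc_erase(erase_count, spare_blocks):
    if erase_count > MLC_WEAR_LIMIT and spare_blocks > MIN_SPARE_BLOCKS:
        # Two widely spaced charge levels tolerate the trapped base
        # charge that would blur four levels.
        return "SLC"
    return "MLC"
```

Note the over-provisioning check: retiring a worn block into SLC mode halves its capacity, so the controller can only afford it while spare space remains.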

With the flash foundries blurring the line between MLC and SLC, I'm hearing the term "consumer-grade MLC." People use this term to differentiate MLC from the eMLC or HET (High Endurance Technology) variant. My problem with the term is twofold. First, it implies that eMLC is "enterprise-grade" and generally more reliable than the lowly "consumer-grade" flash. Second, MLC is actually faster at writes than eMLC.

In reality, chip foundries don't decide which chips will become MLC or eMLC until the final programming stage. The chips destined to become eMLC are programmed to use lower write -- and especially erase -- voltages for a longer period of time than those destined to be ordinary MLC.

The flash foundries cherry pick the best chips to receive the enterprise programming, but they also cherry pick the flash for any enterprise SSD. Even if we count client SSDs, enterprise IT takes a tiny fraction of the MLC and TLC flash created each year. Most of the flash -- 80% or so -- goes in thumb drives, memory cards, tablets, and other consumer electronics applications. When chip foundries bin chips for performance or endurance, even the most modest enterprise SSD is getting parts from the top bins.

Anyone interested in how SSDs behave as they reach, and possibly exceed, their endurance limits should read The Tech Report's SSD Endurance Experiment series of articles. The experiment involves taking six 240GB laptop SSDs and writing data to them, ultimately to the point of failure. Almost all the drives survived the writing of 700 TB (roughly 3,000 full-drive writes). However, after 1.5 PB -- roughly 6,000 full-drive writes -- only two drives remained.
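The drive-write figures above are simple arithmetic: terabytes written divided by drive capacity, using decimal units for a 240 GB drive.

```python
# Back-of-the-envelope check of the full-drive-write counts for a
# 240 GB drive (decimal units).
capacity_tb = 0.24
writes_700tb = 700 / capacity_tb     # just under 3,000 full-drive writes
writes_1500tb = 1500 / capacity_tb   # 6,250 full-drive writes
```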

The sample size is too small to draw many conclusions, except that all the drives delivered more endurance than a client computer is likely to need by a large margin.

About the Author(s)

Howard Marks

Network Computing Blogger

Howard Marks is founder and chief scientist at DeepStorage LLC, a storage consultancy and independent test lab based in Santa Fe, N.M., concentrating on storage and data center networking. In more than 25 years of consulting, Marks has designed and implemented storage systems, networks, management systems and Internet strategies at organizations including American Express, J.P. Morgan, Borden Foods, U.S. Tobacco, BBDO Worldwide, Foxwoods Resort Casino and the State University of New York at Purchase. The testing at DeepStorage Labs is informed by that real-world experience.

He has been a frequent contributor to Network Computing and InformationWeek since 1999 and a speaker at industry conferences including Comnet, PC Expo, Interop and Microsoft's TechEd since 1990. He is the author of Networking Windows and co-author of Windows NT Unleashed (Sams).

He is co-host, with Ray Lucchesi, of the monthly Greybeards on Storage podcast, where the voices of experience discuss the latest issues in the storage world with industry leaders. You can find the podcast at: http://www.deepstorage.net/NEW/GBoS
