6 Reasons SSDs Will Take Over the Data Center

  • The first samples of flash-based SSDs surfaced 12 years ago, but only now does the technology appear poised to supplant hard drives in the data center, at least for primary storage. Why has it taken so long? After all, flash drives are as much as 1,000x faster than hard-disk drives for random I/O.

    Part of the answer is a misunderstanding that overlooked whole systems and focused instead on storage elements and CPUs. This led the industry to fixate on cost per terabyte, when the real focus should have been the total cost of a solution with or without flash. Simply put, most systems are I/O bound, and using flash inevitably means needing fewer systems for the same workload. That typically offsets the cost difference, as the rough model below illustrates.
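
    As a back-of-the-envelope sketch of that offset (every figure here is a hypothetical assumption, not vendor pricing), the arithmetic looks like this:

```python
# Hypothetical TCO sketch: if a workload is I/O bound, faster storage lets
# each server do more work, so fewer servers are needed overall.
# All numbers below are illustrative assumptions.

servers_needed_hdd = 10        # servers required when I/O bound on HDDs
speedup_factor = 4             # assumed per-server throughput gain with SSDs
server_cost = 8_000            # assumed cost per server, USD
hdd_storage_cost = 2_000       # assumed per-server storage cost with HDDs
ssd_storage_cost = 6_000       # assumed per-server storage cost with SSDs

servers_needed_ssd = -(-servers_needed_hdd // speedup_factor)  # ceiling divide

total_hdd = servers_needed_hdd * (server_cost + hdd_storage_cost)
total_ssd = servers_needed_ssd * (server_cost + ssd_storage_cost)

print(f"HDD cluster: {servers_needed_hdd} servers, ${total_hdd:,}")  # $100,000
print(f"SSD cluster: {servers_needed_ssd} servers, ${total_ssd:,}")  # $42,000
```

    Even with per-server storage assumed at three times the price, the smaller SSD cluster costs far less in total.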

    The turning point in the storage industry came with all-flash arrays: simple drop-in devices that instantly and dramatically boosted SAN performance. This has evolved into a model of two-tier storage, with SSDs as the primary tier and a slower, but cheaper, secondary tier of HDDs.

    Applying the new flash model to servers delivers much higher performance, just as SSD price points are dropping below enterprise hard drive prices. With favorable economics and much better performance, SSDs are now the preferred choice for primary tier storage.

    We are now seeing the rise of Non-Volatile Memory Express (NVMe), which aims to replace SAS and SATA as the primary storage interface. NVMe is a very fast, low-overhead protocol that can handle millions of IOPS, far more than its predecessors. In the last year, NVMe pricing has come close to SAS drive prices, making the solution even more attractive. This year, we'll see most server motherboards supporting NVMe ports, likely as SATA Express, which also supports SATA drives.

    NVMe is internal to servers, but a new NVMe over Fabrics (NVMe-oF) approach extends the NVMe protocol from a server out to arrays of NVMe drives and to all-flash and other storage appliances, complementing, among other things, the new hyper-converged infrastructure (HCI) model for cluster design.

    The story isn’t all about performance, though. Vendors have promised to produce SSDs with 32TB and 64TB capacities this year. That's far larger than the biggest HDD, which is currently just 16TB and stuck at a dead end at least until HAMR is worked out.

    The brutal reality, however, is that solid state opens up form factor options that hard disk drives can’t achieve. Large HDDs will need to stay in the 3.5-inch form factor. We already have 32TB SSDs in a 2.5-inch size, plus new form factors such as M.2 and the "ruler" (an elongated M.2) that allow a lot of capacity in a small appliance. Intel and Samsung are talking petabyte-sized storage in 1U boxes.

    The secondary storage market is slow and cheap, making it a tougher market for SSDs to break into. The rise of 3D NAND and new quad-level cell (QLC) flash devices will close much of the price gap, while the huge capacity per drive will offset what remains by reducing the number of appliances needed.

    Solid-state drives have a secret weapon in the battle for the secondary tier. Deduplication and compression become feasible because of the extra bandwidth in the whole storage structure, effectively multiplying capacity by factors of 5X to 10X. This lowers the cost of QLC flash solutions below that of HDDs in price per available terabyte.
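
    To see why, here is a minimal price-per-usable-terabyte sketch; the $/TB figures are illustrative assumptions, not quoted market prices:

```python
# Sketch of price per *available* terabyte once data reduction applies.
# Prices below are illustrative assumptions.

qlc_price_per_tb = 90      # assumed raw $/TB for QLC flash
hdd_price_per_tb = 30      # assumed raw $/TB for nearline HDD

for reduction in (5, 10):  # the 5X-10X range cited above
    effective = qlc_price_per_tb / reduction
    print(f"{reduction}x reduction: QLC effective ${effective:.0f}/TB "
          f"vs HDD raw ${hdd_price_per_tb}/TB")
# At 5x, $90/TB flash already lands at $18 per usable TB, below raw HDD cost.
```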

    In the end, perhaps in just three or four years flash and SSDs will take over the data center and kill hard drives off for all but the most conservative and stubborn users. On the next pages, I drill down into how SSDs will dominate data center storage.

    (Image: Timofeev Vladimir/Shutterstock)

  • System performance

    In the 37 years since the introduction of the x86 architecture, CPU power has increased by a huge factor, doubling every couple of years. In that same timeframe, hard drives improved by only about 3X in random access performance. Storage arrays brought some parallel access across data stripes, but nowhere near the CPU improvement.

    Along comes the SSD, where a single drive is way faster than even a large HDD array. This has led us to rethink systems, with the conclusion that relieving I/O starvation means we need many fewer servers for the same workload. For example, NVMe SSDs are so fast that they can support in-memory databases running as much as 100X faster than traditional HDD-based systems.

    We are already seeing the results of this new view. Traditional storage array sales are dropping while vendors are replacing the enterprise SAS drives used in servers with flash. It’s fair to say that fast HDDs are a dying breed.

    (Image: jules2000/Shutterstock)

  • Flash die pricing

    The shift to 3D NAND was not smooth, which stalled the price decline last year. With those production problems now history, 3D NAND is a solid technology. Prices are beginning to fall again, though the gap is still roughly three to one in favor of slow HDDs.

    This gap provides some relief to HDD manufacturers, but storage tiering also factors into the TCO of a complete cluster, which shifts the debate toward SSDs even for bulk storage.

    This year, we'll see many new flash foundries coming on stream. Combined with the move to 3D NAND, die stacking, and QLC technology, they will produce a new generation of large-capacity, read-mostly storage that will take over the secondary storage space as a cheaper and more compact option than any HDD solution. Again, fewer storage appliances will be needed for any given capacity.

    (Image: Kichigin/Shutterstock)

  • Storage tiering

    With the storage model evolving from large numbers of parallel HDDs towards fast SSDs, the concept of tiering storage between fast primary storage and slow secondary storage has taken off. HDDs met the need for slow bulk storage quite well, but rapid growth in SSD capacity and the advent of lower cost flash such as 3D NAND and QLC cells signal a shift to flash-based secondary storage.

    Add in compression and deduplication, both facilitated by high SSD bandwidth, and the effective size of secondary storage expands by 5X to 10X in most use cases. Applying dedupe and compression at fast primary storage saves on transmission costs and bandwidth to secondary networked storage, and drives massive savings in acquisition costs for the secondary tier, as the sketch below suggests.
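
    A quick sketch of the transmission-side savings (the daily volume and reduction ratio are assumptions for illustration):

```python
# How data reduction before transmission shrinks the sustained network load
# to secondary storage. Figures are illustrative assumptions.

daily_new_data_tb = 50      # assumed data moved to the secondary tier per day
reduction_ratio = 5         # low end of the 5X-10X range above
seconds_per_day = 86_400

raw_gbit_s = daily_new_data_tb * 8_000 / seconds_per_day   # TB -> gigabits
reduced_gbit_s = raw_gbit_s / reduction_ratio

print(f"Without reduction: {raw_gbit_s:.1f} Gbit/s sustained")   # ~4.6 Gbit/s
print(f"With 5x reduction: {reduced_gbit_s:.1f} Gbit/s sustained")  # ~0.9 Gbit/s
```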

    (Image: Godsgirl_madi/Pixabay)

  • New storage software

    SSDs have a performance advantage, and we now see startups with creative ideas on “mining” secondary data tiers. This is an example of the software-defined storage approach that will lead us to chain data services together to achieve results. The low access latency of the SSD becomes important here as data flows through the chains.

    Object storage is evolving to the SDS model and we already can see stellar performance in object stores using SSDs today. I predict that file and block I/O will rapidly become just access protocols to an underlying object store, achieving a unified and simpler storage model.

    (Image: MaLija/Shutterstock)

  • NVMe over Ethernet

    Along with much faster SSD performance, cluster fabrics have jumped in bandwidth. In 2010, 1 Gigabit Ethernet was considered hot. Today, 400GbE backbones are hitting the shelves. More importantly, RDMA support is now common, freeing up a great deal of CPU time otherwise spent moving data.

    The SCSI protocol is ending its 30-year dominance of storage as NVMe, a much more efficient protocol, replaces it. SSDs using this protocol achieve millions of IOPS, which enables big data analytics, among other applications.
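
    The queuing model explains most of that headroom. The AHCI and NVMe limits below come from their public specifications; the SAS figure is a typical device queue depth, an assumption rather than a hard spec limit:

```python
# Why NVMe scales to millions of IOPS: vastly more outstanding commands.
# AHCI and NVMe limits are per their public specs; the SAS depth is a
# typical device value, used here as an assumption.

protocols = {
    # name: (queues, commands per queue)
    "SATA/AHCI": (1, 32),
    "SAS":       (1, 254),
    "NVMe":      (65_535, 65_536),
}

for name, (queues, depth) in protocols.items():
    print(f"{name:>9}: {queues:>6} queue(s) x {depth:>6} commands "
          f"= {queues * depth:,} outstanding I/Os")
```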

    We now see NVMe tailored to operate over Ethernet. This will enable direct connection of SSDs to the cluster fabric, adding a new dimension of speed and connectivity to hyperconverged infrastructure.

    (Image: Toria/Shutterstock)

  • New form factors

    Unlike HDDs, SSDs are free of disk size constraints. Already, there are 32TB 2.5-inch SSDs, and this year suppliers will sample 64TB drives in the same form factor. HDDs are topped out until HAMR technology arrives in 2019-2020. Even then, large-capacity HDDs will be 3.5-inch.

    This means that servers and appliances built with SSDs can pack much more capacity into the same space. For example, a 2U server can mount 24 SSDs (768TB at 32TB each) today, compared with 12 HDDs for around 180TB.

    There’s more! The M.2 SSD form factor is even more compact, leading to new appliances built around “ruler” drives (elongated M.2 units) with 32TB of raw capacity each. That’s a whole storage farm in a 1U box, and it's sampling now! Compare that with traditional arrays or even today's server-based nodes and you can see the savings in appliance cost and footprint. And with dedupe and compression turned on, 5PB per rack unit is even possible, as the sketch below shows.
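
    The density arithmetic behind those numbers is simple; the drives-per-chassis counts are assumptions for illustration:

```python
# Rack-density sketch for the figures quoted above.
# Drive counts per chassis are illustrative assumptions.

ssd_2u = 24 * 32       # 24 x 32TB 2.5-inch SSDs in 2U = 768 TB
hdd_2u = 12 * 15       # 12 x ~15TB 3.5-inch HDDs in 2U ~ 180 TB
ruler_1u = 32 * 32     # assumed 32 x 32TB "ruler" drives in 1U ~ 1 PB raw

print(f"2U SSD server: {ssd_2u} TB vs 2U HDD server: {hdd_2u} TB")
print(f"1U ruler box:  {ruler_1u} TB raw, ~{ruler_1u * 5 / 1000:.0f} PB "
      f"effective at 5x dedupe/compression")
```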

    (Image source: Intel)