Network Computing is part of the Informa Tech Division of Informa PLC


8 Ways Data Center Storage Will Change in 2018

  • The storage industry was on a roller coaster in 2017, with the decline of traditional SAN gear offset by enterprise interest in hyperconverged infrastructure, software-only solutions, and solid-state drives. We have seen enterprises shift from hard disks to solid-state as the boost in performance with SSDs transforms data center storage.

    2018 will build on these trends and also add some new items to the storage roadmap. SSD is still evolving rapidly on four fronts: core technology, performance, capacity, and price. NVMe has already boosted flash IOPS and GB per second into the stratosphere, and we stand on the brink of mainstream adoption of NVMe over Ethernet, with broad implications for how storage systems are configured going forward.

    Vendors are shipping 32TB SSDs, leaving the largest HDD far behind at 16TB. With 3D die technology hitting its stride, we should see 50TB and 100TB drives in 2018, especially if 4-bit storage cells hit their goals. Much of the supply shortage in flash die is behind us, and prices should begin to drop again, though demand may grow faster than expected and slow the price drop.

    Outside of the drives themselves, RAID arrays are in trouble. With an inherent performance bottleneck in the controller design, handling more than a few SSDs is a real challenge. Meanwhile, small storage appliances, which are essentially inexpensive commercial off-the-shelf servers, meet the needs of object stores and hyperconverged nodes. This migration is fueled by startups like Excelero, which connect drives directly to the cluster fabric at RDMA speeds using NVMe over Ethernet.

    A look at recent results reflects the industry's shift to COTS. With the exception of NetApp, traditional storage vendors are experiencing single-digit revenue growth, while original design manufacturers, which supply huge volumes of COTS to cloud providers, are collectively seeing growth of 44%. Behind that growth is the increasing availability of unbundled storage software. The combination of cheap storage platforms and low-cost software is rapidly commoditizing the storage market. This trend will accelerate in 2018 as software-defined storage (SDS) begins to shape the market.

    SDS is a broad concept, but inherently unbundles control and service software from hardware platforms. The concept has been very successful in networking and in cloud servers, so extending it to storage is not only logical, but required. We’ll see more SDS solutions and competition in 2018 than we’ve had in any year of the last decade.

    NVMe will continue to replace SAS and SATA as the interface for enterprise drives. Over and above the savings in CPU overhead that it brings, NVMe supports new form-factor drives. We can expect 32TB+ SSDs in a 2.5 inch size in 2018, as well as servers using M.2 storage variants.

    This has massive implications. Intel has showcased an M.2 “ruler” blade drive with 33+ TB capacities that can be mounted in a 1U server with 32 slots. That gives us a 1-petabyte, ultra-fast 1U storage solution. Other vendors are talking up similar densities, signaling an important trend. Storage boxes will get smaller, hold huge capacities, and, due to SSD speed, outperform acres of HDD arrays. You’ll be able to go to the CIO and say, "I really can shrink the data center!"

    There’s more, though! High-performance SSDs enable deduplication and compression of data as an invisible background job. The services involved use the excess bandwidth of the storage drives. For most commercial use cases, the effective capacity is multiplied 5X or more compared with raw capacity. Compression also reduces the number of small appliances needed, making SSD storage much cheaper overall than hard drives.
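    The arithmetic behind that multiplier can be sketched in a few lines. The 5X reduction factor and drive sizes below are illustrative assumptions drawn from the figures in this article, not vendor benchmarks:

```python
import math

def drives_needed(target_tb, raw_tb_per_drive, reduction_factor=1.0):
    """Drives required to present target_tb of usable capacity, once
    inline dedupe/compression multiplies each drive's raw space."""
    effective_per_drive = raw_tb_per_drive * reduction_factor
    return math.ceil(target_tb / effective_per_drive)

# 1 PB usable: 32TB SSDs with the article's ~5X reduction, vs. 16TB HDDs raw.
print(drives_needed(1000, 32, 5))   # 7 SSDs
print(drives_needed(1000, 16, 1))   # 63 HDDs
```

    With the assumed 5X reduction, a petabyte of usable capacity needs only a handful of 32TB SSDs, versus dozens of raw hard drives — which is where the appliance-count savings come from.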

    Let’s delve into the details of all these storage trends we can expect to see in the data center this year.

    (Image: Olga Salt/Shutterstock)

  • SSD performance

    Top-end SSDs can already achieve 10GB per second throughput and millions of random IOPS. The performance differences between enterprise and commodity SSDs lie mainly in the parallelism of the internal access paths and the efficiency of NVMe interfaces. 

    NVMe uses PCIe connections that are electrically almost identical to SATA, and NVMe drives cost roughly the same. This is fueling a migration to NVMe that has already crushed SAS as an SSD interface and could extend across all SSD usage in 2018-2019. Already, M.2 form-factor SSDs with NVMe/PCIe interfaces are on the market at roughly the same price as SATA drives. Adoption of NVMe will speed up all SSDs, though very top-end drives will still have an edge due to parallelism.

    (Image: jules2000/Shutterstock)

  • SSD capacity increases

    At the Flash Memory Summit in September, vendors announced plans for 100TB 2.5” drives. We might see these in 2018, but capacity more likely will settle at 50 or 64TB, twice the size of today’s largest SSD. With the reduction of space, power and especially box count these large capacities represent, the TCO of storage firmly shifts over to SSDs across the board, even at the bulk level. Add in the roughly 5X boost in effective capacity represented by compression, and 32+ TB SSDs hammer HDDs even for archiving.

    Quad-level cell (QLC) flash will further boost capacity, though write wear is much more pronounced because the stored voltage levels in each cell are closer together. Until error-correction and signal-processing improvements due in 2018 really take hold, QLC will likely be used only for cold data, directly challenging bulk 3.5-inch SATA drives.

    How much capacity per drive is enough? That’s a tough question, since the ultra-large drives change the rules of usage. We could have debates about the bandwidth per terabyte, but that won’t matter with cold storage. On the other hand, large capacity implies many NAND die, which means more parallelism and performance, so these drives could act as primary drives, too.
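    To see why the bandwidth-per-terabyte debate fades for cold storage but matters for primary use, divide a fixed interface speed across growing capacities. The ~10GB/s figure is the top-end throughput cited earlier; the drive sizes are illustrative:

```python
# Fixed top-end interface speed (GB/s) spread across ever-larger drives.
INTERFACE_GBPS = 10

for capacity_tb in (4, 32, 100):
    gbps_per_tb = INTERFACE_GBPS / capacity_tb
    print(f"{capacity_tb:3d} TB drive: {gbps_per_tb:.2f} GB/s per TB")
```

    The per-terabyte figure shrinks 25-fold from a 4TB to a 100TB drive, which is irrelevant for an archive but a real consideration for a shared primary tier.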

    Sharing architectures such as drive-native NVMe over Ethernet will impact which SSD interface is chosen. We might see these drives with 40GbE or faster ports, allowing high-bandwidth sharing.

    (Image: jules2000/Shutterstock)

  • RAID decline accelerates

    RAID is succumbing to the inevitability of all-flash arrays and SSD storage appliances. These devices are just too fast for the array architectures. With the capacity gains in SSDs and their 2.5-inch form factor, the box count needed for a given amount of storage is much lower with SSDs, too.

    The SAN paradigm has served the industry well for decades, so the decline has, to date, been slow despite the performance advantages of all-flash arrays over RAID arrays. This could be due to enterprise reluctance to tinker with something that's worked, as well as interest in getting a return on the undepreciated portions of expensive arrays. My sense is that in 2018 there will be a sea change in that attitude and a more concerted shift away from RAID toward hyperconverged infrastructure and compact storage appliances. The benefits of erasure coding over RAID will reinforce this, as will the use of compression.
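    The erasure-coding advantage is easiest to see in miniature. The sketch below uses a single XOR parity shard, RAID-5 style, to rebuild one lost shard; real object stores use Reed-Solomon codes that tolerate several simultaneous losses with less capacity overhead than mirroring:

```python
def xor_bytes(a, b):
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def make_parity(shards):
    """Compute a single parity shard as the XOR of all data shards."""
    parity = bytes(len(shards[0]))
    for s in shards:
        parity = xor_bytes(parity, s)
    return parity

def rebuild(surviving_shards, parity):
    """Recover the one missing data shard from the survivors plus parity."""
    missing = parity
    for s in surviving_shards:
        missing = xor_bytes(missing, s)
    return missing

data = [b"blk0", b"blk1", b"blk2"]
parity = make_parity(data)

# Simulate losing shard 1 and rebuilding it from the rest.
recovered = rebuild([data[0], data[2]], parity)
assert recovered == b"blk1"
```

    The appeal over classic RAID is that the shards can live on different nodes of a cluster, so a rebuild draws on many drives in parallel instead of hammering one controller.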

    (Image: Alex Munt/Shutterstock)

  • The rise of COTS

    Commercial off-the-shelf components are here to stay and in fact are growing rapidly in both server and storage market shares. With economic and performance benefits for the buyer, it’s hard to favor proprietary hardware platforms. The impact of the COTS approach will noticeably strengthen in 2018. The numbers support this trend: Revenue for COTS-based ODMs grew 44% in 3Q17, while most suppliers were flat.

    Compounding COTS growth will be the rapid expansion of storage software options, from both established vendors and startups. COTS platforms are a better fit for concepts like SDS and HCI.

    COTS gear is easy to integrate because interfaces and architectures fit the Lego model of full interchangeability. This means buyers don’t need large integration teams, especially with the ready availability of third-party fulfillment and integration.

    (Image: Pilar Alcaro/Shutterstock)

  • Software-defined storage

    Software-defined storage has been hyped for a while with its original value-proposition of agile, scalable and low-cost virtualized storage services. Last year, the technology began maturing as vendors announced quite a few offerings, especially among object storage stacks. This trend will accelerate in 2018, rendering hardware an interchangeable commodity and migrating the storage value to software and solutions integration. The move towards hybrid clouds supports the growth of SDS.

    Storage administration will change, becoming more automated, much like cloud servers. This will impact data center personnel, and admins would be wise to acquire the new skills needed.

    (Image: Evannovostro/Shutterstock)

  • NVMe over Ethernet

    NVMe over Ethernet is just entering the market, but its impact on the fast-growing HCI segment is so clear that adoption is likely to be strong in 2018. The idea of sharing all storage in a system cluster has been with us a while, but performance over the fabric has always been much slower than local drives, limiting interest somewhat. NVMe over Ethernet solves that problem.

    There is some discussion of NVMe over Fibre Channel, and of course NVMe runs on InfiniBand, but the Ethernet solutions already available are much cheaper and shipping in quantity. Sharing a common fabric with the server cluster is a game-winner for Ethernet, so we can expect the other transports to be niche players.

    Expect NVMe over Ethernet SSDs in the second half of 2018, allowing much more architectural flexibility: server nodes sharing drives with the cluster, and dedicated boxes of drives, all directly connected to the fabric. With the advent of ARM64 CPUs like the latest Snapdragons, we should plan for these drives to be intelligent, with object-storage code, big-data crunching and the like onboard.

    (Image: doctor-a/Pixabay)

  • Ultra-dense packaging and data compression

    SSD vendors are exploring packaging much smaller than the 2.5-inch drive footprint. M.2 is already common, and is being carried to new lengths, literally, with elongated storage blades. These use so little power that high packing density is achievable; we can therefore expect Intel to deliver on its plan for a 32-blade 1U appliance with a petabyte of SSD storage.

    Couple that with compression, which coincidentally increases network throughput by the same factor as data is compressed, and a storage appliance could effectively support 5PB in just 1U. It will mean goodbye storage farm, as all those magnificent RAID cabinets become obsolete. With vendors other than Intel planning to move in the same direction, storage and systems are going to be very different by 2019.
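    A quick back-of-the-envelope check on those density claims, using the ruler-blade figures quoted earlier (32 slots of 33TB per 1U) and the article's assumed 5X compression gain:

```python
# Density figures from the text; the 5X reduction is the article's
# assumed compression/dedupe gain, not a measured result.
SLOTS_PER_1U = 32
TB_PER_BLADE = 33
REDUCTION = 5

raw_pb = SLOTS_PER_1U * TB_PER_BLADE / 1000   # raw petabytes per 1U
effective_pb = raw_pb * REDUCTION             # effective petabytes per 1U

print(f"{raw_pb:.2f} PB raw, {effective_pb:.2f} PB effective per 1U")
```

    Roughly 1PB raw and 5PB effective per rack unit, which is the basis for the claim that a single shelf can replace whole rows of RAID cabinets.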

    (Image: OpenClipart-Vectors/Pixabay)


  • NVDIMMs and persistent memory

    The idea of fast-access persistent memory on the CPU’s memory bus is a paradigm-shifting concept that has now firmly arrived on our doorstep. There are short-term implications for this year, and longer-term trends, but what is certain is that IT shops ignore the trend at their peril.

    In the short-term, through 2018, NVDIMMs using flash are faster than SSDs using flash, by about a 2X factor. Optane will take that to 4X or better. This assumes that NVDIMMs emulate block-I/O devices and look like a drive, so that the OS file stack is invoked and 4KB blocks are moved around.

    However, all NVDIMMs are potentially byte-addressable. NAND-based solutions have SRAM or DRAM buffers that are backed up on power failure, while Optane can be written directly. All of this opens up the prospect of moving I/O from the file approach to single register-to-memory CPU instructions, addressing the persistent space as part of DRAM.

    Many changes in the ecosystem are required to make this work. The OS needs to know that some memory is persistent, as do compilers, link editors, and system tools. The hardest part is that all apps using byte-mode will need extensive rewriting of their file I/O. Some use cases, such as databases, will be early adopters, since the changes can be masked from users. It will take time for byte-mode to become mainstream, though benefits such as a 1,000X boost in I/O performance will create a lot of pressure to do so.
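    The programming model byte-mode enables can be previewed today with an ordinary memory-mapped file. The sketch below is only an analogy: real persistent-memory code maps an NVDIMM through a DAX-aware filesystem and uses cache-flush primitives (for example via the PMDK libraries), but the shape of the code — plain stores instead of read/write calls — is the same:

```python
import mmap
import os
import tempfile

# Stand-in for a region of persistent memory: an ordinary 4KB file.
path = os.path.join(tempfile.mkdtemp(), "pmem.bin")
with open(path, "wb") as f:
    f.truncate(4096)

with open(path, "r+b") as f:
    view = mmap.mmap(f.fileno(), 4096)
    view[0:5] = b"hello"   # a plain memory store -- no write() syscall
    view.flush()           # force the update to the backing medium
    view.close()

# The bytes survive the mapping being torn down, like a power cycle.
with open(path, "rb") as f:
    assert f.read(5) == b"hello"
```

    The point is the middle section: the application updates persistent state with ordinary assignments, and the OS file stack with its 4KB block moves never enters the picture.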

    (Image source: HPE)