Data storage will look very different in the future as new technologies gain traction.
Computer storage is evolving at a pace we haven’t seen in decades. New products are offering much more performance, capacity and features while costing much less. These are exciting times!
The good news (and the bad) is that future storage won’t look much like today’s “leading-edge” solutions. The whole concept of networked storage using arrays or appliances is under siege by the hyperconverged approach to delivering a storage pool.
New super-high-capacity solid-state drives will effectively doom the hard drive within a couple of years, offering capacities 10 times those of today's HDDs while hitting performance levels equal to a whole RAID array. These SSDs are also getting smaller. Toshiba and Samsung recently unveiled plans for 100 TB 2.5-inch SSDs, while super-fast M.2 drives with a footprint of just a couple of square inches promise to shrink the size of servers.
No one is getting excited about SANs today. The focus is on NVMe over Ethernet, which brings a new level of performance for connecting both storage appliances and hyperconverged storage to servers. Ethernet can connect all of the storage access modes (object, block and file), and carry other traffic, creating an attractive opportunity for a single fabric type in the datacenter. With 40 Gigabit Ethernet solutions in general release and 100 GbE in evaluation testing, Fibre Channel is left far behind.
At the same time, object storage is coming of age. Originally relegated to backup use, object storage is seeing adoption driven by the growth of unstructured data, which raises performance questions for that class of appliance. With much better support for SSDs and better tuning, a variety of products look poised for mainstream use as universal storage supporting all three access modes described above.
We are migrating away from the old style of 60-drive cabinets to more compact boxes in which a smaller number of SSDs matches controller and network performance. The logical endpoint of this evolution is the Ethernet drive with object storage on board.
Software-defined storage is poised to change both the structure of hardware and the vendor base. The idea of open source code on commodity platforms isn't new -- witness the Linux revolution -- and we'll see a lot more of it, with added services from startups as an alternative to traditional monolithic storage approaches.
Let’s take a deeper look at these trends that are shaping the future of enterprise data storage.
Bigger and better SSDs
At the recent Flash Memory Summit, the major SSD vendors previewed a massive jump in capacity and performance in their product lines. Seagate has a 60 TB 3.5” product in the pipeline, while Toshiba and Samsung are both talking up 100 TB 2.5” drives. These are way bigger than the 12 TB HDDs we might see next year.
Other technologies unveiled at the summit included 10 gigabyte-per-second NVMe drives, which more than double the throughput of current top-line SSDs. One version, from Seagate, comes in the compact M.2 format, raising the possibility of major system-size shrinkage.
Falling SSD prices
In order to eat into the bulk SATA HDD market, the 60 and 100 TB SSDs must be priced close to SATA drives. This means that 3D NAND technology needs to rise to the occasion and deliver more capacity per die, while wafer capacity for 3D NAND flash has to increase dramatically.
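To make "priced close to SATA drives" concrete, here is a quick sketch using assumed (not quoted) prices, showing what a 100 TB SSD would have to cost to match a bulk HDD on dollars per gigabyte:

```python
# Hypothetical prices for illustration only -- not vendor quotes.
hdd_capacity_gb = 8 * 1000              # assume an 8 TB bulk SATA HDD
hdd_price_usd = 250.0                   # assumed street price
hdd_cost_per_gb = hdd_price_usd / hdd_capacity_gb

ssd_capacity_gb = 100 * 1000            # the 100 TB SSDs discussed above
parity_price_usd = hdd_cost_per_gb * ssd_capacity_gb
print(f"HDD: ${hdd_cost_per_gb:.4f}/GB")
print(f"100 TB SSD at $/GB parity: ${parity_price_usd:,.0f}")
```

At those assumed numbers, a 100 TB SSD would need to land near $3,125 to match the HDD per gigabyte, which is why per-die capacity and wafer output both have to climb so steeply.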
A new approach to 3D NAND stacking, in which multiple 48- or 64-layer structures are built on top of one another using a "coarse-fine" stacking scheme, overcomes the problems with vertical alignment.
At the same time, vendors are repurposing their foundries from DRAM to 3D flash and substantial new foundry capacity should go on stream in 2017.
A single storage fabric
In the storage industry, there is incredible turmoil over how new storage devices will connect. SAS and SATA have the ultimately fatal problem of needing a SCSI software stack, which can't keep up with fast SSDs. Fibre Channel is evolving too slowly compared with Ethernet and InfiniBand solutions using RDMA, and with Mellanox hedging its bets and now a leader in Ethernet as well, it looks like Ethernet with RDMA is going to win.
The great news is that we've figured out how to run the ultra-sleek NVMe stack over Ethernet/RDMA. Performance is stellar, and this is the likely path for high-performance storage even in the near term.
Longer term, we likely will see Ethernet as the connection for bulk storage drives as well as for appliances and hyperconverged systems. The single-fabric approach opens up much more agility in configuration and use, and it fits the software-defined infrastructure paradigm much better than any alternative.
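A back-of-envelope check, using the line rates and drive speeds cited earlier and ignoring protocol overhead, shows why the fabric needs to be this fast:

```python
# Back-of-envelope link math (line rates only; protocol overhead ignored).
gbe_link_gbits = 100                    # 100 GbE line rate, in gigabits/s
link_gbytes = gbe_link_gbits / 8        # ~12.5 GB/s ceiling per link
nvme_drive_gbytes = 10                  # the 10 GB/s NVMe drives noted earlier
drives_to_saturate = link_gbytes / nvme_drive_gbytes
print(f"{link_gbytes:.1f} GB/s per 100 GbE link")
print(f"{drives_to_saturate:.2f} such drives saturate one link")
```

A single fast NVMe drive comes close to filling a 100 GbE link on its own, so consolidating storage onto the Ethernet fabric only works because link speeds are climbing in step with the drives.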
Object storage comes of age
Red Hat is bringing strong resources to Ceph development, aiming to streamline its performance and add features. This has recently yielded a significant boost in SSD appliance performance, for example. Competitors such as Scality, DDN, and Caringo are also making great strides.
Object storage is actually already mainstream. The use of truly commodity hardware makes the access method very attractive for big data applications, and the advent of block and file portals to the object storage pool will bring the storage types together on future appliances.
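To make the access-mode distinction concrete, here is a minimal, purely illustrative sketch of object-store semantics -- a flat key namespace, whole-object PUT/GET, and metadata carried with each object. It mimics no particular vendor's API:

```python
class ObjectStore:
    """Toy in-memory model of object-storage semantics (illustrative only)."""

    def __init__(self):
        self._objects = {}              # flat namespace: key -> (data, metadata)

    def put(self, key, data, metadata=None):
        # Objects are replaced whole; there is no in-place or partial update.
        self._objects[key] = (bytes(data), dict(metadata or {}))

    def get(self, key):
        return self._objects[key][0]

    def head(self, key):
        # Metadata travels with the object, unlike a POSIX file system.
        return self._objects[key][1]

store = ObjectStore()
store.put("backups/2016-09/db.tar", b"tar-bytes",
          {"content-type": "application/x-tar"})
print(store.head("backups/2016-09/db.tar")["content-type"])
```

The block and file "portals" mentioned above are essentially translation layers that map block or file operations onto this kind of whole-object interface.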
Object storage can scale massively -- just ask CERN or AWS. Combine the approach with 100 TB SSDs and we are looking at petabyte appliances in 2U or less of rack space, so the floor space these consume will actually be tiny.
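The 2U claim is simple arithmetic; assuming a typical (not vendor-specific) count of 24 front-loading 2.5-inch bays in a 2U chassis:

```python
# Assumed chassis geometry: 24 x 2.5" bays in 2U is a common layout.
drives_per_2u = 24
drive_capacity_tb = 100                 # the 100 TB SSDs discussed above
raw_capacity_pb = drives_per_2u * drive_capacity_tb / 1000
print(f"{raw_capacity_pb:.1f} PB raw per 2U")
```

Even a half-populated chassis clears the petabyte mark with room to spare.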
NVDIMMs change the game
Non-Volatile Dual-Inline Memory Modules (NVDIMMs) change system architectures profoundly, since some of the technologies about to hit the market allow direct updates from single CPU instructions, just as if the non-volatile memory were DRAM. That eliminates the need for any file stack in the OS. The implications for in-memory databases, for instance, are far-reaching, boosting performance by huge factors.
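The programming model is easiest to see by analogy with memory-mapped files: the application updates storage with ordinary loads and stores rather than read()/write() calls through a file stack. The sketch below uses an ordinary mmap'd file as a stand-in for an NVDIMM region; real persistent-memory code would use a direct-access (DAX) mapping and explicit flush instructions instead:

```python
import mmap
import os
import tempfile

# A memory-mapped file stands in for the NVDIMM region in this sketch.
path = os.path.join(tempfile.mkdtemp(), "pmem.bin")
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)             # size the "persistent" region

with open(path, "r+b") as f:
    region = mmap.mmap(f.fileno(), 4096)
    region[0:5] = b"hello"              # an ordinary store -- no write() syscall
    region.flush()                      # analogous to a persistence barrier
    region.close()

with open(path, "rb") as f:
    recovered = f.read(5)               # the update survived the unmap
print(recovered)
```

The point of the analogy is the absence of any file I/O on the update path: persistence comes from where the bytes live, not from a storage stack.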
The downside is that we are only just getting to grips with how best to use NVDIMMs. It will take a few years to figure this out, but the impact is probably the most significant of any technology around.
Intel/Micron's new 3D XPoint memory has been delayed into 2017 by controller problems, but it is widely expected to be hundreds of times faster than current flash technology. We may see NVDIMM technology as stacked CPUs and memory on small modules, reducing power consumption and increasing speed.
(Image: Netlist NVDIMM)
Software-defined storage
With most of the hype behind us, we are now seeing storage services software that meets the objective of living in virtual machines or containers on servers while managing the actual storage devices. This trend parallels the virtualization of servers and networking we see in the cloud, and it should boost the agility and robustness of storage operations.
The software-defined storage approach allows services to scale as required while unbundling them from the actual hardware. We can expect robust competition in this space, with many startups entering the fray and expectations of fast-paced innovation and lower storage costs.