Network Computing is part of the Informa Tech Division of Informa PLC


Will SATA Express Dethrone SAS?

SAS and SATA have reigned for years as the dominant interfaces on disk drives. SAS is based on the SCSI protocol and SATA on the ATA command set; both work well for spinning disks but carry too much overhead for server-class solid-state drives. With SSDs reaching a million IOPS, an alternative was needed, and Non-Volatile Memory Express (NVMe) was created.

NVMe is a queue-based protocol that borrows concepts from Remote Direct Memory Access (RDMA). It is much more efficient and sharply reduces system interrupts. But to support these memory-mapped queues, the traditional serial link needed to be replaced by a direct connection to PCIe.
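To make the queue-based idea concrete, here is a minimal sketch (not real driver code) of an NVMe-style submission/completion queue pair. The class and field names are illustrative assumptions; real NVMe queues are ring buffers in host memory with hardware doorbell registers.

```python
from collections import deque

class NvmeQueuePair:
    """Simplified model of an NVMe submission/completion queue pair.

    Real NVMe queues are rings in host memory; the host notifies the
    device by writing a doorbell register over PCIe. This sketch models
    only the queue semantics, not the hardware."""

    def __init__(self, depth=64):
        self.depth = depth
        self.sq = deque()   # submission queue: host -> device
        self.cq = deque()   # completion queue: device -> host

    def submit(self, command):
        # Host places a command in the submission queue; in hardware it
        # would then ring the doorbell to signal new entries.
        if len(self.sq) >= self.depth:
            raise RuntimeError("submission queue full")
        self.sq.append(command)

    def device_process(self):
        # Device drains the submission queue and posts completions in a
        # batch -- a single interrupt can then cover the whole batch.
        completed = 0
        while self.sq:
            cmd = self.sq.popleft()
            self.cq.append({"cid": cmd["cid"], "status": 0})
            completed += 1
        return completed

qp = NvmeQueuePair(depth=8)
for cid in range(4):
    qp.submit({"cid": cid, "opcode": "READ"})
print(qp.device_process())  # -> 4 commands completed, one notification
```

The key point of the design is that the host and device communicate through shared-memory queues rather than a command/response handshake per I/O, which is what makes the deep parallelism of SSDs usable.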

The obvious question: What do you change on a motherboard to support NVMe over PCIe? The industry has come up with an elegant solution, based on the fact that PCIe and SAS/SATA are electrically very similar, all using the same differential signaling method. The chipset on the motherboard can now deliver either PCIe or SATA over the same connector. The combination is called SATA Express, and both SSDs and motherboards are hitting the market.

As a result, high-performance PCIe SSDs and SATA bulk storage drives can be flexibly interchanged on servers and in storage appliances. The physical drive caddies serve either purpose, giving sys admins much more flexibility than if caddies were dedicated to each interface.

On the software side, NVMe removes most of the traditional storage stack, which has seven or eight layers of address translation and redirection. That alone makes NVMe very attractive as a high-end storage protocol, but it also consolidates interrupts, which sharply reduces the context-switching burden those interrupts impose on the host.
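A toy calculation shows why consolidating interrupts matters. This is an illustration of the arithmetic, not NVMe's actual coalescing policy; the function name and batch sizes are assumptions for the example.

```python
def interrupts_needed(commands, batch_size):
    """Interrupts required when completions are coalesced into batches.

    With batch_size == 1 (one interrupt per command, as in legacy
    AHCI-style handling), every I/O pays the full context-switch cost.
    """
    return -(-commands // batch_size)  # ceiling division

legacy = interrupts_needed(100_000, 1)    # one interrupt per I/O
batched = interrupts_needed(100_000, 32)  # completions coalesced
print(legacy, batched)  # -> 100000 3125
```

At a million IOPS, cutting interrupts by a factor of 32 is the difference between a host that spends its cycles on application work and one that spends them servicing interrupt handlers.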

Where does SAS sit in this debate? It is being dethroned as the top drive interface, and the INCITS T10 committee that looks after SCSI-based products is fighting back with a combined SAS-PCIe approach called SCSI-over-PCIe, or SOP. The SCSI Trade Association has coined the term "SCSI Express," but this appears to be a marketing concept more than a product at this time, while SOP is still in development.

The problem for SAS is that the industry is rebuilding its storage tiering around bulk SATA drives combined with fast SSDs. SATA Express fits that model much better than SOP, because it pairs NVMe's high efficiency with the de facto bulk-drive interface.

However, SAS still has some purpose. It provides interface expanders in JBODs that allow large numbers of drives to be connected to a host or storage controller. This infrastructure has been in place for many years, and it isn't likely to be supplanted soon by SATA Express. The threat to SAS in this role comes from the move toward Ethernet-connected modular storage appliances, which better match bandwidth and compute power to the performance of the underlying drives.

Ironically, Ethernet also is a potential threat to the SATA interface of bulk drives. Both Seagate and WD/HGST have released Ethernet drives, and even though it is too early to tell how these are being accepted, there are signs that direct connection to the storage network is attracting a lot of interest.

One could envision a future with SATA Express providing the high-end SSD interface and Ethernet connecting bulk drives. Ethernet switch merchant silicon will replace the SAS expanders, and SATA drives will follow SAS into the sunset. This change, if it happens at all, will take years to complete, given the conservative nature of the industry.

Most likely, we'll eventually see an interface that can support NVMe/PCIe, SATA, and SAS on the motherboard, leaving it to market forces to determine the winners and losers.

A final note: NVMe is still evolving. Western Digital's HGST has a proprietary version (DC Express) that it claims can run much faster still, by reducing queue-handling overhead. One thing is certain: Storage interfaces are going to get much faster, but predicting the winners is tricky.