
Fast SSD Storage, Slow Networks

Systems built with solid state disks (SSDs) represent the cutting edge in performance and are the "go-to" option for data centers looking to solve performance problems. So these systems should be coupled with the absolute cutting edge in storage network performance, too, right?

In some cases, they do need a high-performance network. But in many cases, the server--or the application running on that server--cannot take full advantage of a high-speed network, even though it can take advantage of SSDs.

While the options for high-speed networking are increasing and getting more affordable, upgrading a storage network infrastructure is still a considerable investment. Not only is there the cost of physical components like switches, cards, and cabling; there's also the time required to reconnect each server.

[ For more advice on affordable SSD storage, see How To Choose Best SSD For Midsize Data Centers. ]

This combined cost can be difficult to justify, especially if the server or application can't take full advantage of the new network. But in many cases, SSD can make a significant performance difference, even on a slow, un-optimized network. We proved this in our labs during a recent test drive.

If you don't have the budget to upgrade your network infrastructure, or if your servers can't take advantage of that upgrade, you have several options. One is to install a system designed to take advantage of SSD anyway. As I mentioned, some database applications see a significant performance increase simply from adding SSD, without upgrading the network.

Another option is to add SSD to the server itself, using drive form-factor SSDs or PCIe SSDs. As I discussed in a recent article, one solution is to install PCIe SSDs in the most performance-sensitive hosts in your environment and then leverage that flash for memory swap space and as a read cache.

The storage network is still needed to save new or changed data (writes), but that's all it's needed for. Reads are served at high speed directly from the host's PCIe bus, so the storage network is essentially clear for write traffic alone. The result is improved performance for both reads and writes.
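The sketch below illustrates this pattern in miniature: reads are answered from a local PCIe flash cache when possible, while every write still crosses the storage network so shared storage stays authoritative. The block-level interface and the NetworkBackend stand-in are assumptions for illustration, not any vendor's actual API.

```python
# Minimal sketch of a write-through read cache on local PCIe flash.
# Assumption: a simple block read/write interface; NetworkBackend is a stand-in
# for shared storage reached over the (slower) storage network.

class NetworkBackend:
    """Stand-in for shared storage reached over the network."""
    def __init__(self):
        self.blocks = {}

    def read(self, block):
        return self.blocks.get(block, b"\x00" * 4096)

    def write(self, block, data):
        self.blocks[block] = data


class PcieReadCache:
    """Write-through read cache held on a host's local PCIe flash."""
    def __init__(self, backend, capacity_blocks=1_000_000):
        self.backend = backend
        self.capacity = capacity_blocks
        self.cache = {}   # block number -> data kept on local flash

    def read(self, block):
        if block in self.cache:            # cache hit: served off the PCIe bus
            return self.cache[block]
        data = self.backend.read(block)    # cache miss: one trip over the network
        if len(self.cache) < self.capacity:
            self.cache[block] = data
        return data

    def write(self, block, data):
        # Writes always go to shared storage so other hosts see them;
        # the local copy is refreshed so later reads stay off the network.
        self.backend.write(block, data)
        self.cache[block] = data


cache = PcieReadCache(NetworkBackend())
cache.write(7, b"hot data")
assert cache.read(7) == b"hot data"        # served locally, no network read
```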

In some cases, you might need shared read and write flash performance. This is more economical if flash capacity can be shared across hosts via a network, but you still don't necessarily need to upgrade that network. As I discussed in another recent article, some SSD systems now include built-in networking and can support direct-attached 1GbE connections. Simply trunk 2-4 connections per host to provide the performance those hosts need at a price the data center can afford.
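To get a feel for the numbers, here is a rough back-of-the-envelope calculation of what 2-4 trunked 1GbE links deliver once reads are largely absorbed by flash. The protocol-efficiency factor and per-host write rate are assumed figures for illustration, not benchmark results.

```python
# Rough estimate: can 2-4 trunked 1GbE links carry a host's write traffic?
# The efficiency factor and write load below are assumptions, not measurements.

GBE_LINK_MB_S = 1000 / 8            # one 1GbE link ~ 125 MB/s of raw bandwidth
PROTOCOL_EFFICIENCY = 0.85          # rough allowance for iSCSI/TCP overhead

def trunk_throughput_mb_s(links):
    """Usable throughput of a trunk of 1GbE links, in MB/s."""
    return links * GBE_LINK_MB_S * PROTOCOL_EFFICIENCY

write_load_mb_s = 150               # assumed per-host write rate after read offload

for links in (2, 3, 4):
    capacity = trunk_throughput_mb_s(links)
    verdict = "enough" if capacity >= write_load_mb_s else "short"
    print(f"{links} x 1GbE ~ {capacity:.0f} MB/s -> {verdict} for "
          f"{write_load_mb_s} MB/s of writes")
```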

If you don't have the budget or the applications to justify a high-speed network upgrade, enhancing your existing infrastructure can be an affordable way to increase performance.
