Software-defined storage could turn out to be as big a disruption to the storage market as Fibre Channel or flash. If the market accepts it, software-defined storage could be bad for storage specialists like EMC and NetApp. Server vendors like HP, IBM and Dell will sell more disk drives, but if software-defined storage really saves users money as promised, those sales won't make up for the losses in their storage divisions.
However, there is one class of vendor that will benefit: the disk drive specialists. Let's examine the impact of software-defined storage and why companies such as Seagate, Western Digital, and Toshiba will profit if it succeeds in the marketplace.
If used in the right places, today’s software-defined storage technologies can be significantly less expensive to buy and operate than traditional systems. Even a small remote office with two or three virtualization hosts will frequently have a storage array and storage network that cost $20,000 or more.
While New York, London and Tokyo continuously jockey for position on the list of most expensive real estate in the world, the drive slot in your enterprise array is always some of the most expensive real estate in your data center. Once we amortize the cost of the controllers and software upgrades across all the slots in a typical array, the total cost of a slot in a VMAX, Compellent or NetApp FAS is $1,200 to $1,500 or more.
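The per-slot arithmetic above is easy to reproduce. A minimal sketch, using hypothetical numbers (the amortized controller/software figure and shelf counts are illustrative assumptions, not pricing for any specific array):

```python
# Back-of-the-envelope cost per drive slot in a dual-controller array.
# All dollar figures and shelf counts are hypothetical, for illustration only.
controllers_and_software = 150_000  # amortized controllers, licenses, support ($)
shelves = 4
slots_per_shelf = 24
total_slots = shelves * slots_per_shelf  # 96 slots

cost_per_slot = controllers_and_software / total_slots
print(f"Cost per drive slot: ${cost_per_slot:,.0f}")
```

With these assumed numbers the result lands in the same $1,200-$1,500-plus range quoted above; a smaller array with fewer shelves amortizes the same controller cost over fewer slots, so its per-slot cost is even higher.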
By comparison, server drive slots are almost free. Walk around the average data center and you’ll see 1U and 2U servers with four to 24 drive bays using at most a mirrored pair of disks to boot from. Replacing that pair of 146-Gbyte drives with a pair of 32-Gbyte SATA disk on module (DOM) devices that tuck inside the server will free up that pair of drive slots and speed up the boot process for about $150. In the DeepStorage lab, we go a step further and boot our servers to VMware ESXi from USB thumb drives.
Even if you’re paying your server vendor’s markup on disk drives, unless your software vendor charges $10,000 per host for its virtual SAN software, software-defined storage will save you money.
However, while software-defined storage can save you some money, you will end up buying more disk drives because software-defined storage isn’t terribly space efficient. If you have an external disk array, you’re probably using some form of parity RAID to protect the data in your capacity tier. Overhead for parity will range from 8% to 33%, depending on the layout of your RAID sets.
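That 8%-33% range falls directly out of the RAID set geometry. A quick sketch (the specific 2+1 and 22+2 layouts are illustrative examples chosen to hit the ends of the range):

```python
def parity_overhead(data_drives, parity_drives):
    """Fraction of raw capacity consumed by parity in one RAID set."""
    return parity_drives / (data_drives + parity_drives)

# A narrow RAID-5 set (2 data + 1 parity) gives up a third of its raw capacity:
print(f"RAID-5 2+1:  {parity_overhead(2, 1):.0%}")
# A wide RAID-6 set (22 data + 2 parity) keeps overhead near 8%:
print(f"RAID-6 22+2: {parity_overhead(22, 2):.1%}")
```

Wider sets cost less capacity but take longer to rebuild and expose you to more drives per failure domain, which is why real arrays sit somewhere between these extremes.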
[Traditional disk drives may lack the flash of flash drives, but Seagate recently showed there's still room for innovation with spinning disks. Read about the new disk drives the company launched this summer in Seagate Proves Disk Drives Aren't Dead.]
A typical external storage array, with the exception of some single controller models at the low end, has essentially no single point of failure. Redundant power supplies feed redundant controllers that are connected through a passive backplane for each shelf to drives with redundant connections. Servers, on the other hand, have all sorts of unique components from the motherboard chipset to the single SAS/SATA controller.
This means that to achieve a reasonable level of availability, basically anything beyond three nines (99.9%), a software-defined storage solution will need to protect data against not just disk drive failures but also server node failures. The only practical way to do that is to replicate the data to another node, creating two or more full copies of the data. Some systems add even more overhead, requiring local RAID on each node on top of the replication.
I know some of you are thinking: Why not use parity RAID or erasure codes across all the nodes? Those work well for sequential I/O, but for small random writes, every update forces a read-modify-write cycle across multiple nodes, so the overhead of erasure coding would be enormous.
More replicas mean our data centers will need more disk drives to hold the same amount of data, which brings us back to how software-defined storage will be good for Western Digital, Toshiba, and of course, Seagate.
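The capacity gap between replication and parity RAID is worth putting in concrete terms. A minimal sketch, with a hypothetical 100 TB of usable capacity and an assumed 14+2 RAID-6 layout for comparison:

```python
def raw_capacity_needed(usable_tb, copies):
    """Raw disk capacity required to hold usable_tb with N full replicas."""
    return usable_tb * copies

usable = 100  # TB of usable capacity (hypothetical)
for copies in (2, 3):
    print(f"{copies}-way replication: {raw_capacity_needed(usable, copies)} TB raw")

# Versus a parity-protected array using a 14+2 RAID-6 layout:
raid6_raw = usable * 16 / 14
print(f"RAID-6 14+2 array:   {raid6_raw:.0f} TB raw")
```

Where the array needs roughly 114 TB of raw disk for 100 TB usable, a two-way replicated software-defined system needs 200 TB and a three-way system 300 TB: nearly two to three times as many spindles for the same data.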
Howard Marks is founder and chief scientist at DeepStorage LLC, a storage consultancy and independent test lab based in Santa Fe, N.M., concentrating on storage and data center networking. In more than 25 years of consulting, Marks has designed and implemented storage ...