Why Software-Defined Storage Is Good For Seagate

Some vendors won't fare well if software-defined storage takes off, but disk drive specialists will reap benefits. Here's how.

Howard Marks

September 5, 2013


Software-defined storage could turn out to be as big a disruption to the storage market as Fibre Channel or flash. If the market accepts it, software-defined storage could be bad for storage specialists like EMC and NetApp. Server vendors like HP, IBM and Dell will sell more disk drives, but if software-defined storage really saves users money as promised, that extra drive revenue won't make up for the losses in their storage divisions.

However, there is one class of vendor that will benefit: the disk drive specialists. Let's examine the impact of software-defined storage and why companies such as Seagate, Western Digital and Toshiba will profit if it succeeds in the marketplace.

If used in the right places, today’s software-defined storage technologies can be significantly less expensive to buy and operate than traditional systems. Even a small remote office with two or three virtualization hosts will frequently have a storage array and storage network that cost $20,000 or more.

While New York, London and Tokyo continuously jockey for position on the list of most expensive real estate in the world, the drive slot in your enterprise array is always some of the most expensive real estate in your data center. Once we amortize the cost of the controllers and software upgrades across all the slots in a typical array, the total cost of a slot in a VMAX, Compellent or NetApp FAS is $1,200 to $1,500 or more.
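To see where a number like that comes from, here's a back-of-the-envelope sketch. The array and shelf prices below are illustrative assumptions, not quotes from any vendor's price list; only the resulting per-slot range matches the figures above.

```python
# Amortize array controller and software cost across every drive slot.
# All prices here are illustrative assumptions, not actual vendor pricing.

array_base_cost = 250_000    # dual controllers, software licenses, support (assumed)
shelf_cost = 5_000           # per 24-slot expansion shelf (assumed)
shelves = 8
slots_per_shelf = 24

total_cost = array_base_cost + shelf_cost * shelves
total_slots = shelves * slots_per_shelf

print(f"Cost per drive slot: ${total_cost / total_slots:,.0f}")
# -> Cost per drive slot: $1,510 -- squarely in the $1,200-$1,500+ range
```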

By comparison, server drive slots are almost free. Walk around the average data center and you'll see 1U and 2U servers with four to 24 drive bays using at most a mirrored pair of disks to boot from. Replacing that pair of 146-Gbyte drives with a pair of 32-Gbyte SATA disk-on-module (DOM) devices that tuck inside the server will free up that pair of drive slots and speed up the boot process for about $150. In the DeepStorage lab, we go a step further and boot our servers to VMware ESXi from USB thumb drives.

Even if you’re paying your server vendor’s markup on disk drives, unless your software vendor charges $10,000 per host for its virtual SAN software, software-defined storage will save you money.
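To make that concrete, here's a rough break-even sketch for the small-office scenario above. The $20,000 array figure and the $10,000-per-host software ceiling come from the article; the per-drive and per-license prices are assumptions for illustration.

```python
# Break-even sketch: external array vs. software-defined storage (SDS)
# for a small remote office. Drive and license prices are assumed.

hosts = 3
array_and_san_cost = 20_000      # external array plus storage network (from above)
drives_per_host = 6
server_drive_cost = 400          # per drive, including server vendor markup (assumed)
vsan_license_per_host = 2_500    # virtual SAN software license (assumed)

sds_cost = hosts * (drives_per_host * server_drive_cost + vsan_license_per_host)
print(f"Array: ${array_and_san_cost:,}   SDS: ${sds_cost:,}")
# -> Array: $20,000   SDS: $14,700 -- SDS wins unless licenses approach $10K/host
```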

However, while software-defined storage can save you some money, you will end up buying more disk drives, because software-defined storage isn't terribly space efficient. If you have an external disk array, you're probably using some form of parity RAID to protect the data in your capacity tier. Overhead for parity will range from 8% to 33%, depending on the layout of your RAID sets.
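That parity overhead is simply the fraction of each RAID set devoted to parity drives. A quick sketch, using example layouts that span the 8%-to-33% range (the layouts are generic examples, not any particular vendor's defaults):

```python
def parity_overhead(data_drives: int, parity_drives: int) -> float:
    """Fraction of raw capacity consumed by parity in one RAID set."""
    return parity_drives / (data_drives + parity_drives)

# Example layouts spanning the range cited above.
for data, parity, label in [(2, 1, "RAID 5, 2+1"),
                            (4, 1, "RAID 5, 4+1"),
                            (6, 2, "RAID 6, 6+2"),
                            (22, 2, "RAID 6, 22+2")]:
    print(f"{label}: {parity_overhead(data, parity):.0%} parity overhead")
# -> RAID 5, 2+1: 33% ... RAID 6, 22+2: 8%
```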

[Traditional disk drives may lack the flash of flash drives, but Seagate recently showed there's still room for innovation with spinning disks. Read about the new disk drives the company launched this summer in Seagate Proves Disk Drives Aren't Dead.]

A typical external storage array, with the exception of some single-controller models at the low end, has essentially no single point of failure. Redundant power supplies feed redundant controllers that are connected, through a passive backplane in each shelf, to drives with redundant connections. Servers, on the other hand, have all sorts of unique components, from the motherboard chipset to the single SAS/SATA controller.

This means that to achieve a reasonable level of availability, basically anything beyond three nines, a software-defined storage solution will need to protect data against not just disk drive failures but also server node failures. The only practical way to do that is to replicate the data to another node, creating two or more full copies of the data. Some systems add even more overhead by requiring local RAID as well.

[Chart: data protection model comparison]
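To put numbers on those protection models, here's a sketch of the raw disk capacity needed per usable terabyte. The models are the generic ones described above, not any specific product's data layout:

```python
def raw_per_usable(copies: int, local_raid_overhead: float = 0.0) -> float:
    """Raw TB of disk needed to store one usable TB of data.

    copies: full replicas kept across nodes (e.g., 2- or 3-way replication)
    local_raid_overhead: parity fraction inside each node, if any
    """
    return copies / (1.0 - local_raid_overhead)

print(f"RAID 6 (6+2) array:         {raw_per_usable(1, 2/8):.2f}x")
print(f"2-way replication:          {raw_per_usable(2):.2f}x")
print(f"3-way replication:          {raw_per_usable(3):.2f}x")
print(f"2 replicas + local RAID 5:  {raw_per_usable(2, 1/5):.2f}x")
# -> 1.33x, 2.00x, 3.00x and 2.50x raw capacity per usable TB
```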

I know some of you are thinking: Why not use parity RAID or erasure codes across all the nodes? Those work well for sequential I/O, but the overhead of erasure coding with small random writes would be enormous, as the sketch below shows.
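The problem is that updating a block smaller than a full stripe forces a read-modify-write of the old data and every parity fragment, and across nodes each of those disk I/Os is also a network round trip. Here's the standard read-modify-write accounting (a simplification; real systems batch and cache):

```python
def small_write_ios(parity_fragments: int) -> int:
    """Disk I/Os for one sub-stripe write under read-modify-write:
    read old data and old parity, then write new data and new parity."""
    return 2 * (1 + parity_fragments)

for m, scheme in [(1, "RAID 5"), (2, "RAID 6"), (4, "10+4 erasure code")]:
    print(f"{scheme}: {small_write_ios(m)} I/Os per small random write")
# -> RAID 5: 4, RAID 6: 6, 10+4 erasure code: 10 I/Os per small write,
#    each one a network hop when the fragments live on different nodes
```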

More replicas mean our data centers will need more disk drives to hold the same amount of data, which brings us back to how software-defined storage will be good for Western Digital, Toshiba, and of course, Seagate.


About the Author

Howard Marks

Network Computing Blogger

Howard Marks is founder and chief scientist at DeepStorage LLC, a storage consultancy and independent test lab based in Santa Fe, N.M., concentrating on storage and data center networking. In more than 25 years of consulting, Marks has designed and implemented storage systems, networks, management systems and Internet strategies at organizations including American Express, J.P. Morgan, Borden Foods, U.S. Tobacco, BBDO Worldwide, Foxwoods Resort Casino and the State University of New York at Purchase. The testing at DeepStorage Labs is informed by that real-world experience.

He has been a frequent contributor to Network Computing and InformationWeek since 1999 and a speaker at industry conferences including Comnet, PC Expo, Interop and Microsoft's TechEd since 1990. He is the author of Networking Windows and co-author of Windows NT Unleashed (Sams).

He is co-host, with Ray Lucchesi, of the monthly Greybeards on Storage podcast, where the voices of experience discuss the latest issues in the storage world with industry leaders. You can find the podcast at http://www.deepstorage.net/NEW/GBoS
