Software-Defined Storage Vs. Software-Delivered Storage
The difference between a software-defined storage product and one that's software delivered has more to do with a vendor's go-to-market strategy than with technical features.
February 25, 2015
Software-defined storage is a perfect example, like cloud, of how we in the IT industry can take a useful term and stretch it to cover so many unrelated use cases that it becomes as meaningless as whatchamacallit or thingamabob. When the vendor that three short years ago bragged about how its hardware was the best thing since 5 ¼” floppy disks starts calling the diskless version of its old hardware storage system “software defined,” you know just about anything goes.
Until recently, storage vendors like BlueArc and 3PAR had to use custom ASICs to deliver high-performance storage. Intel, empowered by the invisible hand of Moore's Law, has packed enough power into its Xeon processors to let today's storage designers skip the ASIC and do all their work in software.
Even storage giant EMC is down to one custom ASIC across its entire product line. Of course, merely using an x86 processor doesn't make a product software-defined storage; I reserve the term for storage systems that run on industry-standard x86 servers. Most of the latest generation of storage vendors, including Pure Storage, Nimble, and NexGen, use essentially off-the-shelf hardware, and so can legitimately call their products software-defined storage.
The orthodox among you may insist that to be software defined, a storage product has to be sold as software. A few extremists go so far as to insist that only software products that turn servers into a hyper-converged cluster deserve to be called software defined.
I do differentiate between software-defined storage appliances and software-delivered storage, but I see the difference more as a go-to-market decision by the vendor than as an important technical distinction. Vendors choose to bring their cool storage software to market wrapped in Xeon processors and tin as an appliance for three basic reasons:
Higher margins
Reduced testing and support costs
Hardware-based enhancements
A quick look at the prices of storage systems and of the servers that comprise them makes it clear that most of the value in a storage appliance is in the software. For example, Tintri's T820 is a hybrid storage system in a storage bridge bay cabinet (two x86 servers with shared access to a SAS backplane) with an MSRP of $74,000. I could buy roughly equivalent hardware for around $20,000, which puts a value of over $50,000 on Tintri's software. Since customers won't spend $35,000 or more for software alone, wrapping that software in tin lets Tintri charge more for it.
As the entire population of Redmond will be glad to tell you, building system-level software that will run on a wide assortment of hardware is hard. At least part of the Macintosh's supposed stability advantage over Windows comes from the sheer number of hardware configurations Windows has to support.
By selling appliances, a vendor vastly reduces the number of drivers it must tweak and hardware configurations it must test. That both reduces costs and brings its product to market faster. It also eliminates support calls from customers who chose inappropriate hardware, who can't manage to assemble the parts kit they ordered from Newegg, or who, like the poor guy on Reddit, discovered that VSAN and Dell's H310 SAS HBA didn't work together even though the H310 was on the VSAN hardware compatibility list.
Shipping hardware also allows a vendor to include custom, or at least uncommon, hardware in a model I call x86+. Many storage systems, including the Tintri T820, add non-volatile RAM (NVRAM) to use as a write cache. Vendors can buy NVRAM DIMMs or PCIe NVRAM cards from OEM vendors like Curtiss-Wright (yes, the same Curtiss-Wright that made the engine for the Spirit of St. Louis), Marvell, and Netlist. These devices add a power-failure detection circuit and flash to standard DRAM, along with an ultracapacitor that powers the process of dumping the DRAM's contents to the flash in the event of a power failure.
While NVRAM solutions are available off the shelf, they require integration, along with BIOS and motherboard support, beyond what an enterprise customer could be expected to manage, giving an appliance that uses NVRAM a performance advantage over a software-only solution.
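To make that mechanism a little more concrete, here's a minimal sketch in C of the power-loss flow an NVRAM DIMM implements. It's a toy simulation, and every name in it (nvram_write, on_power_fail, and so on) is hypothetical; real devices do this work in hardware and firmware, not in host code.

/*
 * Hypothetical sketch of an NVRAM DIMM's power-loss flow.
 * Real devices implement this in hardware/firmware.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define NVRAM_SIZE (64 * 1024)    /* toy capacity: 64 KB */

static uint8_t dram[NVRAM_SIZE];  /* fast, volatile write cache */
static uint8_t flash[NVRAM_SIZE]; /* persistent backing store */

/* Writes land in DRAM at full speed; flash is never in the hot path. */
void nvram_write(size_t offset, const void *buf, size_t len)
{
    memcpy(&dram[offset], buf, len);
}

/* The power-failure detection circuit triggers this when input power
 * drops; the ultracapacitor keeps the DIMM alive just long enough to
 * stream the DRAM's contents into flash. */
void on_power_fail(void)
{
    memcpy(flash, dram, NVRAM_SIZE);
}

/* On the next boot, the controller restores the saved image so the
 * storage software finds its write cache intact. */
void on_power_restore(void)
{
    memcpy(dram, flash, NVRAM_SIZE);
}

int main(void)
{
    nvram_write(0, "dirty block", 12);
    on_power_fail();              /* lights out */
    memset(dram, 0, NVRAM_SIZE);  /* DRAM contents are lost... */
    on_power_restore();           /* ...but come back from flash */
    printf("%s\n", (char *)dram); /* prints "dirty block" */
    return 0;
}

The appliance vendor controls the BIOS, the motherboard, and the driver stack, so it can guarantee this dance actually happens; that's exactly the integration work an enterprise customer can't be expected to do on its own.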
The latest x86+ trick is for vendors to stick any custom hardware they use onto a PCIe card. They can limit their hardware engineering to that one card, take advantage of server technology upgrades, and, of course, call their product software-defined storage, though I cringe just a little when they do. The best example is SimpliVity's card, which holds both its NVRAM and its deduplication offload engine.
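For a sense of what a deduplication offload engine actually accelerates, here's a toy sketch in C of the inline dedupe loop a storage system would otherwise run in software: fingerprint each incoming block and write it only if that fingerprint hasn't been seen before. All names here are hypothetical, and FNV-1a stands in for the strong cryptographic hash a real system would use, just to keep the example self-contained.

/*
 * Toy illustration of inline block deduplication: the per-block
 * hashing and lookup that a dedupe offload card takes off the CPU.
 */
#include <stdint.h>
#include <stdio.h>

#define BLOCK_SIZE 8
#define TABLE_SIZE 1024

static uint64_t seen[TABLE_SIZE]; /* fingerprints of stored blocks */

/* FNV-1a: a simple, non-cryptographic stand-in for the hash a real
 * offload engine would compute in silicon. */
static uint64_t fingerprint(const uint8_t *block)
{
    uint64_t h = 1469598103934665603ULL;
    for (size_t i = 0; i < BLOCK_SIZE; i++) {
        h ^= block[i];
        h *= 1099511628211ULL;
    }
    return h;
}

/* Returns 1 if the block is new and must be written, 0 if it's a
 * duplicate and only a reference needs updating. (Toy open
 * addressing; assumes the table never fills.) */
static int dedupe_write(const uint8_t *block)
{
    uint64_t fp = fingerprint(block);
    size_t slot = fp % TABLE_SIZE;
    while (seen[slot] != 0) {
        if (seen[slot] == fp)
            return 0;             /* duplicate: skip the write */
        slot = (slot + 1) % TABLE_SIZE;
    }
    seen[slot] = fp;
    return 1;                     /* new data: write it */
}

int main(void)
{
    uint8_t a[BLOCK_SIZE] = "blockA!";
    uint8_t b[BLOCK_SIZE] = "blockB!";
    printf("%d %d %d\n", dedupe_write(a), dedupe_write(b),
           dedupe_write(a));      /* prints "1 1 0" */
    return 0;
}

Hashing every block at wire speed eats CPU cycles, which is precisely why pushing the fingerprinting onto a PCIe card is attractive.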
Whether it's wrapped in tin or downloaded over the Internet, software is what defines all the cool storage today. Your choice of product for your data center, though, should be based on more than just the vendor's go-to-market model.
Attend the live workshop, Making Cloud Storage Work for Your Organization, given by storage guru Howard Marks at Interop Las Vegas this spring. Don't miss out! Register now for Interop, April 27 to May 1, and receive $200 off.