Is Software-Defined Anything Worth the Price?

"Software-defined" and "converged" are the flavors of the day, but companies might be better off spending their data center dollars on cold, hard, ultra-fast gear.

Jim O'Reilly

August 1, 2013


The last year has seen a dictionary of new buzzwords in high tech. We have "converged" everything, from servers and storage to networks. Then we have "software-defined," which has spawned a load of acronyms under the umbrella "SDx." Like -aaS, these acronyms seem to multiply daily, but it's hard to tie down real definitions.

Sometimes such activity is a sign of real innovation, or at least a need for it, but there's a sense that this may be a move by the larger established vendors to protect territory.

Let's try converged storage. The idea is to use storage processor cores as virtualized server engines, and you might ask, why not? The cores are really x86 CPU cores, so the virtualization code should run OK.

But structurally it is the same as DAS, where storage is hooked onto a server. The description could just as easily have been "servers with attached storage" running "storageware inside a VM." These are products we understand well, and they are evolving nicely to handle SSDs and big data. Heck, they even have clustered versions.
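To make that concrete, here is a minimal sketch of the "storageware inside a VM" picture, assuming nothing more exotic than a plain x86 server running KVM/QEMU, a hypothetical appliance image (storageware.qcow2), and a locally attached data disk (/dev/sdb):

```python
# Minimal sketch: "storageware inside a VM" on a server with direct-attached storage.
# The appliance image, disk path, and sizing are hypothetical placeholders.
import subprocess

APPLIANCE_IMAGE = "storageware.qcow2"   # hypothetical storage-appliance boot image
DATA_DISK = "/dev/sdb"                  # hypothetical locally attached (DAS) data disk

cmd = [
    "qemu-system-x86_64",
    "-enable-kvm",                      # use hardware virtualization on the x86 cores
    "-m", "8192",                       # give the storage VM 8 GB of RAM
    "-smp", "4",                        # ...and four of the server's CPU cores
    "-drive", f"file={APPLIANCE_IMAGE},format=qcow2,if=virtio",
    "-drive", f"file={DATA_DISK},format=raw,if=virtio",  # pass the DAS disk through
    "-display", "none",
]

subprocess.run(cmd, check=True)
```

Strip away the branding and that is the whole architecture: server, attached disks, storage software in a VM.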

The strongest proponents of the idea make big storage arrays that are feeling the challenge of SSDs, cheap bulk drives, and cloud storage. They may succeed in morphing into full systems, but server companies know best how to make servers.

The same is true of the networking field. Adding value to already expensive switches and routers has led to the term "converged" network. Same issues... Same conclusion!

This is where we really started to see a burst of new buzzwords, though. Terms like "software-defined storage," "software-defined networks," and "software-defined data centers" are being bandied about. Add "data planes" and "control planes." And life is now much more complex.

A good reality test is to ask the pundits and experts for a definition of the terms. We think we understand converged (well, almost... see below), but questions about software-defined typically generate an explosion of five-syllable words that don't actually say much.

I've only tracked down one real problem addressed by SDN, and that's the issue of massive network scale-out. There are addressing and security issues in large-scale setups that strain current approaches in areas such as table sizes, security zoning, and crash recovery. But the very few cloud service providers (CSPs) facing the problem seem to have reached solutions without creating a new class of networking gear.
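To get a feel for the scale problem, here's a back-of-envelope sketch; every number in it is an illustrative assumption rather than a figure from any particular CSP or switch vendor:

```python
# Back-of-envelope sketch of the scale-out problem. All numbers are
# illustrative assumptions, not measurements.
racks = 2000                 # racks in a large cloud data center
servers_per_rack = 40
vms_per_server = 20          # each VM brings its own MAC/IP

total_endpoints = racks * servers_per_rack * vms_per_server
print(f"Endpoints in a flat L2 domain: {total_endpoints:,}")      # 1,600,000

# Assume a top-of-rack switch holds on the order of 100K MAC entries;
# a flat network blows past that long before the data center is full.
tor_mac_table = 100_000
print(f"Table overflow factor: {total_endpoints / tor_mac_table:.0f}x")

# With per-tenant overlays, a ToR only needs to learn the endpoints of
# the tenants actually present in its rack.
tenants_per_rack = 50
endpoints_per_tenant = 500
print(f"Entries per ToR with overlays: {tenants_per_rack * endpoints_per_tenant:,}")
```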

There are claims for other benefits, including "bottleneck identification," "cross-vendor management," "path optimization," "on-demand security apps," and so on. But these mainly seem to be offered as an alternative to pure bandwidth. I'm reminded of a BBC Top Gear episode where a 625-HP muscle car took on a 500-HP supercar with more computers than a Boeing 787. Guess which won!

In networking (including storage interfaces), we are moving rapidly from 1 GE and 8-Gbit/s FC to multiple-link 40/100 GE or InfiniBand. Issues like bottlenecks should soon be a thing of the past.
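The raw-bandwidth argument is easy to put in rough numbers. Here's a quick calculation of the best-case time to move a 1-TB data set at line rate, ignoring protocol overhead:

```python
# Best-case time to move 1 TB at line rate, ignoring protocol overhead.
TB_BITS = 10**12 * 8              # one terabyte, in bits

links_gbps = {
    "1 GE": 1,
    "8 Gbit/s FC": 8,
    "40 GE": 40,
    "100 GE": 100,
}

for name, gbps in links_gbps.items():
    seconds = TB_BITS / (gbps * 10**9)
    print(f"{name:>12}: {seconds / 60:6.1f} minutes")
```

Going from 1 GE to 100 GE turns a two-hour transfer into a minute-and-a-bit one, with no clever software in the path.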

Is the SDx complexity worth the expense? Or should those precious dollars just be spent on much faster gear? There are some things in this buzzword explosion that really have value.

Converged-protocol storage, meaning boxes that can run multiple protocols, is timely. It ends arguments about block I/O versus NAS. It also means the demise of Fibre Channel SANs over the next few years, as Ethernet wins the connection battle on sheer horsepower.
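As a toy illustration of the converged-protocol idea (the class names are invented for this sketch; a real box would speak iSCSI or FC on one side and NFS or SMB on the other), here is one backing store exposed through both a block-style and a file-style front end:

```python
# Toy illustration of a converged-protocol front end: one backing store,
# two access styles. Names are invented for illustration only.
class BackingStore:
    def __init__(self, size: int):
        self.blocks = bytearray(size)

class BlockFrontEnd:
    """Block I/O view: read/write by byte offset, like a LUN."""
    def __init__(self, store: BackingStore):
        self.store = store
    def write(self, offset: int, data: bytes):
        self.store.blocks[offset:offset + len(data)] = data
    def read(self, offset: int, length: int) -> bytes:
        return bytes(self.store.blocks[offset:offset + length])

class FileFrontEnd:
    """NAS view: named files mapped onto regions of the same store."""
    def __init__(self, store: BackingStore):
        self.store, self.index, self.next_free = store, {}, 0
    def write_file(self, name: str, data: bytes):
        self.index[name] = (self.next_free, len(data))
        self.store.blocks[self.next_free:self.next_free + len(data)] = data
        self.next_free += len(data)
    def read_file(self, name: str) -> bytes:
        offset, length = self.index[name]
        return bytes(self.store.blocks[offset:offset + length])

store = BackingStore(1 << 20)
nas, san = FileFrontEnd(store), BlockFrontEnd(store)
nas.write_file("report.txt", b"same bytes, two protocols")
print(san.read(0, 25))   # the block view sees what the file view wrote
```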

When you dig through the "data planes" and so on in EMC's new ViPR, it's a converged-protocol front end, capable of delivering all the protocols and interfacing with most of the storage boxes ever made. But there's a nagging problem.

The basic spec reads just like Ceph running on a white-box server, and that will be way, way cheaper than ViPR. So, for differentiation, ViPR will be loaded up with software features that cost an arm and a leg, while Ceph will just have raw, cheap horsepower.
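For a sense of what that raw horsepower looks like in practice, here is a minimal object write and read through Ceph's Python librados bindings; the config path and pool name are assumptions about your particular cluster:

```python
# Minimal object write/read against a Ceph cluster via the librados Python
# bindings. The conffile path and pool name ("rbd") are assumptions about
# your setup; the pool must already exist.
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    ioctx = cluster.open_ioctx("rbd")            # assumed pool name
    try:
        ioctx.write_full("hello-object", b"stored on white-box hardware")
        print(ioctx.read("hello-object"))
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```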

You might just want to take a careful look at an open-source muscle system instead of complex code.

About the Author

Jim O'Reilly

President

Jim O'Reilly was Vice President of Engineering at Germane Systems, where he created ruggedized servers and storage for the US submarine fleet. He has also held senior management positions at SGI/Rackable and Verari; was CEO at startups Scalant and CDS; headed operations at PC Brand and Metalithic; and led major divisions of Memorex-Telex and NCR, where his team developed the first SCSI ASIC, now in the Smithsonian. Jim is currently a consultant focused on storage and cloud computing.
