The Big Enterprise Storage Lie

The storage industry has lived off the deception that a dual-controller design is the best architecture, but it won't be able to fool storage buyers much longer.

Jeramiah Dooley

December 16, 2015


Some days, the enterprise storage industry looks a lot like professional wrestling. It’s usually exciting, especially when people are foaming at the mouth and hitting each other with folding chairs. There’s action (acquisitions), drama (layoffs), comedy (IPOs) and misdirection (press releases), along with more compelling characters than you can count.

I've watched as storage consumers of all sizes, first on the service provider side, and now in the enterprise market, have finally started realizing what pro wrestling fans have always known: It’s all fake. They have spent years participating in a farce that wasn’t for their benefit, but at their expense, and 2016 is the year when storage buyers start pushing back.

The lie that sustains the existing storage market is simple: that a dual-controller design is the right architecture for storage at any layer of today’s marketplace.

For all its simplicity, the deception is hidden behind many layers of misdirection. Vendors argue over who has the best data reduction features, even though there is no appreciable difference in the underlying math. Vendors argue over whether you should have all flash or some combination of media. They fight over disruptive upgrades, the definition of “forklift,” and whether storage is managed as LUNs or volumes or VMs. They come off the top rope, elbows flying, over marketing, benchmarks, and block sizes, even though the vast majority of customers don’t care about any of it.

With all the noise being generated, you’d think these issues would be the battleground on which storage is being fought, but you’d be very wrong. After all, it’s easy to believe that all that action is real, when the actors are working so hard to deceive you.

Let me put it clearly: Anyone who tries to sell you a dual-controller array as a platform for delivering storage services in 2016 is lying to you. Here’s what you need to know about a dual-controller architecture:

  • Does not provide the computing resources necessary to compete in the race for more and better data services.

  • Cannot cope with the metadata avalanche being triggered by the explosion in NAND capacity.

  • Lacks the scalability to let customers drive the meaningful, increasingly critical consolidation of their operational environments. What good does it do to rail against silos, only to have your storage product fit neatly inside one?

  • Carries significant risk, especially during controller upgrades. It doesn’t matter whether the cost of the controllers is included in the support contract; no marketing program can pretty up the fact that someone will be inside a data center rack, unplugging and moving things, while all of the I/O is carried by a single controller with no redundancy.

  • Cannot provide linear scale, even if you put lots of dual-controller arrays side by side.

The lie used to cover up that last point is one of the most brazen. Think about it from a different perspective: Back in 2005, IT bought bespoke servers for every workload running in the data center. Some were large, some were small, but each formed an immovable boundary around the two resources the servers were there to provide: CPU and RAM. Because we couldn’t move workloads between servers, and couldn’t move resources from silo to silo, we ended up with a massively inefficient environment in which customers had to overbuy, sizing every application they ran for its peak usage.

This was the status quo until server virtualization showed us a better way: What if we took those independent silos of CPU and RAM and created a scalable pool? What if we changed how we bought servers to account for this scale-out model of consumption? What if, once pooled, we allowed those resources to be allocated (into VMs) and guaranteed (reservations), and we gave customers the ability to balance running workloads across them (resource scheduling)?
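To see why pooling changed the economics, here is a minimal sketch in Python. The workload names and demand numbers are invented for illustration, not drawn from any real environment or hypervisor API: each siloed server has to be sized for its own workload’s peak, while a shared pool only has to cover the worst combined hour, and reservations can still guarantee each workload a floor.

# Illustrative sketch only (hypothetical workloads and numbers):
# compare sizing fixed per-workload silos against sizing one shared pool.

# Hourly CPU demand (in cores) for three workloads whose peaks don't coincide.
demand = {
    "web":     [4, 4, 16, 16, 4, 4],    # peaks midday
    "batch":   [2, 2, 2, 2, 14, 14],    # peaks overnight
    "reports": [10, 10, 2, 2, 2, 2],    # peaks in the morning
}

# Siloed model (circa 2005): every server is sized for its own workload's peak.
siloed_cores = sum(max(hours) for hours in demand.values())

# Pooled model: the cluster only has to cover the aggregate peak across all hours.
aggregate = [sum(hour) for hour in zip(*demand.values())]
pooled_cores = max(aggregate)

# A "reservation" guarantees a floor for a workload without walling off a whole box.
reservations = {"web": 4, "batch": 2, "reports": 2}
assert sum(reservations.values()) <= pooled_cores

print(f"cores bought as silos: {siloed_cores}")   # 16 + 14 + 10 = 40
print(f"cores bought as a pool: {pooled_cores}")  # worst single hour = 20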

IT organizations all over the world won this battle. They changed how vendors sold x86 servers, how the resources those servers contained were managed and how those resources were consumed, forever.

Unfortunately, no one ever looked at storage with the same critical eye. No one ever forced storage to change. After all, a storage array is just two resources as well: performance and capacity. Why is this? Is storage fundamentally different from compute? Is managing performance and capacity harder than managing other resources? Or did we all just fall victim to a fantastic con? Did we see the lights and hear the entrance music and forget that it’s all an act, and that at the end of the day we can decide what happens by voting with our wallets?
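To make that framing concrete, here is an equally minimal sketch, again in Python and again with invented array names and headroom figures: in the siloed model a new workload has to fit entirely inside one dual-controller array, while in a scale-out pool only the cluster’s aggregate capacity and performance headroom matter.

# Illustrative sketch only (hypothetical names and numbers): model storage the same
# way virtualization modeled compute, as a pool of two resources, capacity (TB)
# and performance (IOPS), instead of per-array silos.

arrays = [
    {"name": "array-1", "free_tb": 20, "free_iops": 15_000},
    {"name": "array-2", "free_tb": 25, "free_iops": 12_000},
    {"name": "array-3", "free_tb": 15, "free_iops": 18_000},
]

new_workload = {"tb": 30, "iops": 25_000}

# Siloed model: the workload must fit entirely inside one dual-controller array.
fits_in_a_silo = any(
    a["free_tb"] >= new_workload["tb"] and a["free_iops"] >= new_workload["iops"]
    for a in arrays
)

# Pooled (scale-out) model: the cluster's aggregate headroom is what matters,
# because data and load can be distributed across nodes.
pool_tb = sum(a["free_tb"] for a in arrays)       # 60 TB free
pool_iops = sum(a["free_iops"] for a in arrays)   # 45,000 IOPS free
fits_in_the_pool = pool_tb >= new_workload["tb"] and pool_iops >= new_workload["iops"]

print(f"fits in any single array: {fits_in_a_silo}")      # False
print(f"fits in the pooled cluster: {fits_in_the_pool}")  # True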

Don’t get me wrong, I love a good farce as much as the next person, but the show is almost over in storage. I can only imagine how angry longtime legacy storage customers will be when they find out the magnitude of the lie they’ve been sold.

About the Author

Jeramiah Dooley

Cloud Architect, SolidFire
