SDS And Flash: Better Together

Software-defined storage and flash complement each other in the data center.

Andy Walls

March 3, 2016

4 Min Read

Software-defined storage (SDS) is the delivery of enterprise data storage services and functionality “decoupled” from the underlying hardware. In many SDS use cases, enterprises can download a storage tiering capability or a copy data management (CDM) solution, for example, load it onto an available server or even a virtual machine, and turn it on.

It might appear that by its very nature SDS isn’t inclined to establish strong relationships with its hardware partners, or even care much about them. Both SDS and flash are changing traditional storage architectures, but with outwardly nothing in common, you might think of SDS and flash as strange storage bedfellows. This is far from true.

In fact, flash-based and SDS systems do more than complement each other; they can actually enhance, improve, and multiply the advantages each brings to the data center.

Rising together

Flash and SDS adoption pathways provide the obvious first example of how these future-facing technologies are working together.

It’s more than coincidence that today both are experiencing accelerating adoption rates. Before SDS, deploying flash storage was sometimes challenging. For example, manually moving data sets to and from flash storage can be slow and labor-intensive, and that overhead chips away at the intrinsic cost-related value of flash. But add intelligent, automatic SDS tiering functionality and data quickly moves to the most advantageous storage medium, based on activity levels or other policies you choose. Suddenly the benefits of flash are magnified. Your applications get the performance they crave when needed, and your budget gets a break when disk or tape works fine.
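The tiering policy described above can be sketched in a few lines. This is a toy illustration only: the access-rate thresholds, tier names, and data set names below are hypothetical assumptions, not taken from any particular SDS product.

```python
# Toy illustration of policy-based tiering: place each data set on the
# fastest tier its activity level justifies. Thresholds are hypothetical.

def choose_tier(accesses_per_day: int) -> str:
    """Map a data set's recent activity level to a storage tier."""
    if accesses_per_day >= 1000:
        return "flash"   # hot data: low latency pays off
    elif accesses_per_day >= 10:
        return "disk"    # warm data: capacity-oriented
    else:
        return "tape"    # cold data: cheapest per gigabyte

# Hypothetical data sets with their observed access rates
datasets = {"orders_db": 50000, "archive_logs": 3, "hr_reports": 40}
placement = {name: choose_tier(rate) for name, rate in datasets.items()}
print(placement)
# → {'orders_db': 'flash', 'archive_logs': 'tape', 'hr_reports': 'disk'}
```

A real SDS tiering engine would of course act on block- or file-level heat maps rather than whole data sets, and would move data continuously as activity changes; the point here is only that the placement decision is policy, not hardware.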

What’s more, you can combine tiering and virtualization software to extend the useful life and value of existing storage systems, then make a modest investment in flash, and your overall storage becomes both faster and less expensive.  

Total cost of ownership (TCO) analyses that include operational expenses and better application performance favor flash. But where the traditional “dollars per gigabyte” measuring stick prevails, the cost of flash remains a complex and contentious issue. That benchmark is turned on its head when an SDS implementation includes data capacity reduction technologies such as deduplication and/or compression. These can multiply the usable flash storage capacity without adding cost, making flash attractive, even through the lens of $/GB. The fact that the performance improves and latency drops is an added bonus.
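The $/GB argument above reduces to simple arithmetic: data reduction divides the raw price by the reduction ratio. The prices and the 4:1 ratio below are illustrative assumptions for the sketch, not market quotes.

```python
# Back-of-the-envelope effect of data reduction on flash $/GB.
# All prices and ratios here are illustrative assumptions.

def effective_cost_per_gb(raw_cost_per_gb: float, reduction_ratio: float) -> float:
    """Cost per usable gigabyte after deduplication/compression."""
    return raw_cost_per_gb / reduction_ratio

flash_raw = 0.50   # assumed raw flash price, $/GB
disk_raw = 0.15    # assumed raw disk price, $/GB

# With an assumed 4:1 combined dedupe + compression ratio on flash:
flash_effective = effective_cost_per_gb(flash_raw, 4.0)
print(flash_effective)  # → 0.125, below the assumed raw disk price
```

Under these assumed numbers, flash lands below disk even on the raw $/GB yardstick, which is the inversion the article describes.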

Helping each other

Don’t get the idea that this is a one-sided relationship, however. SDS solutions ranging from file and object storage systems to data protection and archiving functions rely heavily on the collection and management of “data about data,” or metadata. Large enterprise-grade file systems that move information around the globe between regional business offices or science labs use file metadata to ensure data integrity and track dozens of file attributes such as location and last update. These file systems store their metadata in separate databases, which can become a bottleneck for overall system performance. But move the metadata stores to flash and that bottleneck can be eliminated.

In almost every data center, the number of copies of individual data sets is proliferating. Copies are continually made and kept for regulatory, data protection, and disaster recovery purposes, as well as for application development and testing. Every copy consumes storage capacity, to the point where more capacity is dedicated to storing copies than to the production data sets themselves. Managing copy data and making it more efficient and less costly is a capability now available as an SDS solution. When flash is added to the copy data management mix, the entire function -- from provisioning new use cases to hunting down stale old data copies across vast, heterogeneous storage systems -- can be significantly enhanced.
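One task in that mix -- hunting down stale copies -- can be sketched as a retention scan. Everything here is a hypothetical illustration: the record format, the snapshot names, and the 90-day retention window are assumptions, not a real CDM product's behavior.

```python
# Toy sketch of one copy-data-management task: flag data copies that
# have outlived a retention window. Records and threshold are hypothetical.
from datetime import datetime, timedelta

RETENTION = timedelta(days=90)

# Hypothetical catalog of copies gathered from heterogeneous systems
copies = [
    {"name": "orders_db.snap1", "created": datetime(2015, 1, 10)},
    {"name": "orders_db.snap2", "created": datetime(2016, 2, 20)},
]

now = datetime(2016, 3, 3)  # fixed date for a reproducible example
stale = [c["name"] for c in copies if now - c["created"] > RETENTION]
print(stale)  # → ['orders_db.snap1']
```

In practice the catalog would be built by crawling metadata across many storage systems, which is exactly the kind of metadata-heavy workload that benefits from sitting on flash.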

Stepping toward the future

Using these and many other SDS-plus-flash architectures, data centers become naturally positioned for a step toward the future and hybrid cloud. Hybrid cloud models keep active data sets on-premises on high-performance, highly efficient flash storage and move less active data off-premises into extremely cost-effective, very flexible cloud storage. SDS facilitates hybrid cloud architectures without the headaches and high cost of starting from scratch.

Some may have wrongly assumed that SDS keeps hardware at a cool distance, leaving flash alone to build its empire. Quite the contrary is true. These two technologies are kindling a genuinely complementary relationship. Even more, there’s a growing synergy between flash and SDS that produces better solutions than either does alone, creating much more value for businesses that deploy them together.

About the Author

Andy Walls

CTO & Chief Architect for Flash Systems, IBM

Andy Walls has worked for IBM for his entire 33-year career. He was appointed a Distinguished Engineer in 2006 and has been located in San Jose, California, since 1987. In April 2014, Andy was appointed an IBM Fellow, IBM’s most prestigious technical honor. He has worked in storage for most of his career and is an industry-recognized expert in storage systems architecture and NAND flash storage for the enterprise. He is currently the CTO and Chief Architect for IBM FlashSystem and is leading the work on developing the next generation of high-performance storage systems. He is known as an innovator and holds over 70 patents. He graduated from UC Santa Barbara with a BSEE in 1981.
