The adoption of flash-based solid state disk (SSD) is well underway in the enterprise, and it is quickly becoming the go-to option for solving storage performance problems. For the most part, use of SSD has been limited to databases, VDI and Web 2.0 environments. Caching technologies have further helped adoption in the virtualized server market, but the reality of a pure solid state data center still seems far down the road. Software-defined storage may be the key to getting there.
As I've discussed in previous columns, software-defined storage removes storage controller functions such as volume management, RAID and snapshots from physical storage hardware and places them into either a standalone or a virtual appliance. This allows hardware from virtually any vendor to be combined with a common set of services. Further, I believe we will eventually get to a point where even the software components can be interchanged. Imagine mixing NAS capabilities from one vendor with block services from another, and object services from yet another.
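As a loose illustration of that separation -- every class and vendor name here is hypothetical, not any real product's API -- the controller functions can be modeled as interchangeable software layers stacked over generic hardware:

```python
# Hypothetical sketch: storage services as interchangeable software layers
# over commodity hardware. All names are illustrative, not a real API.

class BlockDevice:
    """Generic hardware from any vendor: just raw capacity."""
    def __init__(self, vendor, capacity_gb):
        self.vendor = vendor
        self.capacity_gb = capacity_gb

class Service:
    """A controller function (volume mgmt, RAID, snapshots) as software."""
    def __init__(self, name, provider):
        self.name = name
        self.provider = provider

class SDSAppliance:
    """Standalone or virtual appliance combining devices and services."""
    def __init__(self):
        self.devices = []
        self.services = []
    def add_device(self, dev):
        self.devices.append(dev)
    def add_service(self, svc):
        self.services.append(svc)
    def describe(self):
        hw = ", ".join(d.vendor for d in self.devices)
        sw = ", ".join(f"{s.name} ({s.provider})" for s in self.services)
        return f"hardware: [{hw}] services: [{sw}]"

appliance = SDSAppliance()
appliance.add_device(BlockDevice("VendorA", 2000))
appliance.add_device(BlockDevice("VendorB", 4000))
# Mix software components from different vendors, as imagined above:
appliance.add_service(Service("NAS", "VendorX"))
appliance.add_service(Service("block", "VendorY"))
appliance.add_service(Service("object", "VendorZ"))
print(appliance.describe())
```

The point of the sketch is only that the hardware list and the service list vary independently, which is exactly the decoupling software-defined storage promises.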
It is the mixture of hardware -- and potentially, software -- that should be a catalyst for the growth of flash-based SSD appliances.
[ For more on software-defined storage, see Software-Defined Storage Vs. Traditional Storage Systems. ]
The Flash Paradox
Flash presents a paradox: Its latency is so low that adding any capability beyond the very basics can measurably impact performance. That means you can either build a lean and mean flash appliance with maximum performance but virtually no storage services, or a more enterprise-focused system loaded with features that drag down performance.
The legacy vendors have struggled with this problem. While their hybrid systems that mix flash with disk deliver a performance improvement, they are not in the same league as dedicated flash appliances. At the same time, startups that can deliver maximum performance lack features the enterprise may deem vital.
The Software-Defined Difference
Software-defined storage solves much of this performance problem for flash appliance vendors, enabling them to pair their hardware with software that delivers the features enterprises need. If an application would be adversely affected by the latency of a heavy storage feature set, the appliance can be segmented: a portion of its storage is allocated directly to that application, and the rest goes to the software-defined architecture for workloads less sensitive to the extra latency.
Ideally, software-defined storage will mature so that it is no longer an all-or-nothing proposition. For example, suppose you want to leverage the replication or mirroring feature of an SDS solution, but not its thin provisioning and snapshots. You should be able to compose a lightweight version of SDS on a per-storage-device basis.
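One way to picture that per-device granularity -- purely hypothetical, not any real product's configuration format -- is a feature map in which each storage device enables only the services it needs:

```python
# Hypothetical per-device SDS feature selection: enable only the services
# each device needs, rather than an all-or-nothing feature set.

ALL_FEATURES = {"replication", "mirroring", "thin_provisioning", "snapshots"}

def configure_device(name, enabled):
    """Return a config enabling a subset of SDS features for one device."""
    unknown = set(enabled) - ALL_FEATURES
    if unknown:
        raise ValueError(f"unknown features: {unknown}")
    return {"device": name, "features": sorted(enabled)}

# A latency-sensitive flash appliance keeps only replication and mirroring,
# skipping thin provisioning and snapshots, per the example in the text:
lean = configure_device("flash-appliance-1", {"replication", "mirroring"})

# A capacity tier can afford the full feature set:
full = configure_device("disk-array-1", ALL_FEATURES)
print(lean)
print(full)
```

A lean device pays the latency cost of only two services, while the capacity tier carries all four -- the "lightweight SDS per device" idea in miniature.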
Enabling Server-Side SSD
Going a step further -- and potentially crossing into hyperscale architecture -- you can move storage software and hardware into the servers themselves, essentially converging compute, network and storage. Software-defined solutions that run in this mode typically aggregate the storage devices from all the servers into a single clustered file system. With 2.5" SSDs now approaching 2 TB in capacity, an all-flash, high-performance, high-capacity converged environment built on SDS is a reality.
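To put rough numbers on that claim -- a hypothetical configuration, assuming 2 TB 2.5" SSDs and a simple two-copy replication scheme, not a sizing guide:

```python
# Back-of-the-envelope capacity for a converged, all-flash SDS cluster.
# All figures are illustrative assumptions.

servers = 16           # nodes in the cluster
drives_per_server = 8  # 2.5" SSD bays used per node
drive_tb = 2           # per-drive capacity, per the ~2 TB figure above
replicas = 2           # copies kept by the clustered file system (assumed)

raw_tb = servers * drives_per_server * drive_tb
usable_tb = raw_tb // replicas
print(f"raw: {raw_tb} TB, usable at {replicas}x replication: {usable_tb} TB")
# raw: 256 TB, usable at 2x replication: 128 TB
```

Even after halving raw capacity for data protection, a modest 16-node cluster lands well into the hundred-terabyte range, all on flash.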
Software-Defined Hardware Pricing
In either of these situations, the common practice of charging a premium for hardware because of its bundled, value-added software should go away. This is especially good news in the SSD market, where markups can be prohibitive. In the server-side SDS model, the cost of a flash deployment could come down dramatically.
Software-defined storage does open up another can of worms: The industry has debated for some time where and how flash management functions should take place. Should they run as software, potentially on the server host, or in hardware via an FPGA (field-programmable gate array) or dedicated silicon? We'll dive into this topic in my next column.