
4 Ways SSDs Are Failing Your Applications…and What You Can Do About It

(Image source: Pixabay)

It's no secret that SSDs (solid-state drives) provide some solid perks: low latency, high IOPS (input/output operations per second), and high throughput. SSDs have accelerated the performance of many applications over the years, particularly when housed inside the server, close to the CPU (central processing unit), which has historically been starved of data to process by slower spinning hard drives and by network latency.

However, SSDs have struggled to keep up with the growth and evolution of the modern data center. Today's data centers demand significantly higher performance, capacity, and endurance. Time and time again, commodity SSDs have fallen tragically short in these critical areas.

Let's examine why this is the case and what you can do about it.

1) Capacity

The fixed size of today's storage devices adds complexity and cost as workloads evolve: when they grow or change, the only option is to add more drives. If additional drives won't fit in the server, the next step is to add more servers, and if that's not doable, you're out of luck. Now, as workloads expand their reach toward the edge, they are accelerating data generation (especially sensor data), pushing storage infrastructure to its limits.

2) Performance

From a workload perspective, the goal is to push more data through more compute without adding nodes and the associated licensing and hardware costs. In today's server architectures, the CPU does too much storage processing; as a result, application latency suffers because the CPU can't keep up. Low latency is critical to the user experience of modern web and mobile applications. Reserving CPU resources for analytics creates more value for the business, which means storage processing must be offloaded to embedded processors on the drive itself.

3) Maintenance

Flash is a consumable medium, especially in the enterprise, where workloads wear drives out faster than the server refresh cycle. High-write environments compound the maintenance burden: drives must be physically replaced as their performance degrades. This process is not only time-consuming and uneconomical, but it also increases the data center's carbon footprint.

4) Economics and footprint

Adding more components to the data center comes with several challenges. Data center space is at a premium, power and cooling are limited, and supply chain constraints are causing significant delays. Even if a company has the resources to scale out, it can't do so indefinitely, since physical space is finite. The problem only compounds as time goes on and workloads demand more from their storage infrastructure.

What you can do about it

While domain-specific co-processing is a proven approach in other server subsystems, for storage it has been a slow journey to offload the CPU. We've seen this with GPUs (graphics processing units) and specialized processors for AI acceleration. Networking is another example: putting a processor on the network interface improves bandwidth utilization; it's just another way to accelerate the entire infrastructure. In the context of flash storage, onboard processing enables what are called accelerated SSDs, or Computational Storage.

Moving processing to a local device makes it possible to offload the server CPU, optimizing resources and reducing overall cost. By adding an embedded processor to every drive, companies can scale in, optimizing server architectures for modern applications and reclaiming the full potential of flash technology. By doing this, you can expect these critical benefits out of the box:

Capacity: Onboard processing performs transparent, in-line compression without compromise. If you're currently consuming CPU cycles to save on storage space, you can get the storage savings you need without wasting valuable CPU cycles on compression. Turn off host compression, let the drives take care of it, and let the CPU focus on creating value for your critical applications.
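To make the trade-off concrete, here is a rough Python sketch of the idea. The buffer size, data pattern, and compression level are arbitrary assumptions, not measurements of any particular drive; the "pass-through" path simply models handing raw blocks to a drive that compresses in-line.

import os
import time
import zlib

# Build ~64 MiB from a repeating 1 KiB pattern (arbitrary, compressible test data).
payload = os.urandom(1024) * (64 * 1024)

start = time.perf_counter()
compressed = zlib.compress(payload, 6)     # host-side compression burns CPU cycles
host_seconds = time.perf_counter() - start

start = time.perf_counter()
raw = memoryview(payload)                  # with on-drive compression, the host just writes raw blocks
passthrough_seconds = time.perf_counter() - start

print(f"Host compression: {host_seconds:.3f}s, ratio {len(payload) / len(compressed):.2f}:1")
print(f"Pass-through:     {passthrough_seconds:.6f}s of host CPU")

The point of the sketch is only that the CPU seconds spent in the first path are freed up entirely when the drive handles compression transparently.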

Endurance: Drives last longer because they compress data before writing it rather than writing it raw. With compression, companies can better align their server refresh cycles with drive refresh cycles because the drives simply last longer.
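For a back-of-the-envelope sense of the effect, the short calculation below uses assumed figures (a hypothetical rated TBW, daily write rate, and 2:1 compression ratio), not any vendor's specifications.

# Illustrative endurance math: if the drive compresses data 2:1 before writing
# to NAND, the same rated NAND endurance absorbs roughly twice the host writes.
rated_tbw = 3500            # hypothetical rated terabytes written
compression_ratio = 2.0     # assumed average workload compressibility
daily_host_writes_tb = 2.5  # hypothetical host write rate per day

effective_tbw = rated_tbw * compression_ratio
lifetime_years = effective_tbw / daily_host_writes_tb / 365

print(f"Effective host TBW: {effective_tbw:.0f} TB")
print(f"Estimated drive lifetime: {lifetime_years:.1f} years")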

Performance: When data is compressed and endurance extended, you might expect performance to suffer, but the opposite is true: by writing less, you leave more cycles available for reads. In real-world environments, this lets companies enjoy higher performance and lower latency, helping them exceed application SLAs (service level agreements).
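A rough illustration with made-up bandwidth figures shows why: if the drive's internal NAND bandwidth is fixed, shrinking the physical write stream leaves more of that bandwidth to serve reads.

# All numbers below are assumptions for illustration, not drive specifications.
nand_bandwidth_gbps = 6.0    # hypothetical internal NAND bandwidth
host_write_gbps = 2.0        # incoming write stream from the host
compression_ratio = 2.0      # assumed 2:1 in-line compression

physical_write_gbps = host_write_gbps / compression_ratio
read_headroom_gbps = nand_bandwidth_gbps - physical_write_gbps

print(f"Bandwidth left for reads: {read_headroom_gbps:.1f} GB/s "
      f"(vs. {nand_bandwidth_gbps - host_write_gbps:.1f} GB/s without compression)")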

Efficiency and Savings: Companies must use every component in their infrastructure as efficiently as possible. A smaller footprint means lower power and cooling requirements and more work per watt. Most importantly, compressing data right at the storage tier lowers the cost per GB stored. Essentially, the data itself becomes a source of optimization in the infrastructure.
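The cost math is simple; the figures below are assumptions chosen for illustration, not real pricing.

# Effective cost per GB with in-line compression at the drive.
drive_price_usd = 800        # hypothetical price of a 7.68 TB drive
raw_capacity_gb = 7680
compression_ratio = 2.0      # assumed workload compressibility

raw_cost_per_gb = drive_price_usd / raw_capacity_gb
effective_cost_per_gb = raw_cost_per_gb / compression_ratio

print(f"Raw:       ${raw_cost_per_gb:.3f}/GB")
print(f"Effective: ${effective_cost_per_gb:.3f}/GB after 2:1 compression")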

Commodity SSDs continue to struggle to keep up with the demands of the modern data center. Thankfully, by adopting accelerated SSDs (or Computational Storage), it's possible to overcome these challenges and reap the benefits of increased capacity, endurance, performance, efficiency, and savings.

Dr. Hao Zhong is co-founder and CEO of ScaleFlux.
