Performance Trends Favor Solid-State Storage
But several issues need to be resolved before SSDs gain a prominent place in many storage infrastructures
December 6, 2008
Improving storage performance is a complex task that involves defining and implementing standards, orchestrating test lab and vendor activity, working with leading-edge customers, and then presenting the market with systems that combine value, affordability, and future scalability. The process usually takes time and money. A potential disruption to this established process is solid-state technology.
Solid-state drives (SSDs) package memory and semiconductor technology in a disk drive form factor, and they use common interfaces and command structures to remove some traditional storage media bottlenecks. Nobody questions the ability of SSDs to substantially boost performance. But several issues still need to be resolved -- ranging from cost to reliability to standards -- before SSDs gain a prominent place in many storage infrastructures.
"When we look at storage media, there is a desire to improve disk drive performance, and SSDs are a solution. But the market still has to accept price points that go along with reliability and performance," says Wayne Adams, chairman of the Storage Networking Industry Association (SNIA) , who works at EMC Corp. (NYSE: EMC). "Today, SSDs are costly and are therefore restricted to certain niches of the market that require high performance computing, such as telcos, the military, banking, and trading. But over the next 24 months, SSD prices will drop into the midrange, which is being fueled in part by the many SSDs that are being used in consumer devices like laptops and cameras. A number of storage vendors already have product roadmaps that incorporate SSDs, although it is in a different flavor when you are talking about a data center versus the consumer side."
This "different flavor" is in part driven by a shift to more unstructured storage (e.g., files), which creates an environment where more storage systems and applications will demand highly intelligent systems management and richer, more intelligent applications to manage data.
"Along with this trend, we see a second trend, which is a specification of storage media for certain tasks," says Jon Affeld, senior director of product management and development for storage systems vendor BlueArc Corp. Affeld sees a future system architecture where the SSD data center role is steadily broadened from being a "front end" that jumpstarts performance for an array of hard drives, to a dedicated tier of storage in a single system footprint that contains within it both HPC SSD and lower performance, terabyte-sized SATA (serial advanced technology attachment) drives that are ideal for inexpensive data archiving.SNIA concurs. "Given its low latency and the other properties of SSD, we're likely to see a deployment of tiers of SSD both internally and externally in storage," says Vincent Franceschini, SNIA vice chairman, who works at Hitachi Data Systems (HDS) . "SSD is used as a front-end buffer, but also as a low latency source that you know you have as a tier of storage."
If future storage systems in the data center are to be deployed with tiered storage, it will be up to the management systems to automatically determine which data to send where -- with one possible objective being to keep everything within a single system footprint so as to avoid sending data over a LAN. In a deployment of this nature, archived and general data payloads may well reside on SATA, while metadata goes to SSD.
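To make the idea concrete, a placement policy of this kind could be sketched as below. This is a minimal illustration under the article's assumptions (metadata and hot data to SSD, bulk payloads to SATA); the names Tier and place() are hypothetical, not any vendor's actual management API.

```python
# Hypothetical sketch of an automated tiering decision, assuming a
# management layer that classifies I/O by payload type.
from enum import Enum

class Tier(Enum):
    SSD = "ssd"     # low-latency tier: metadata, hot data
    SATA = "sata"   # high-capacity tier: archives, bulk payloads

def place(is_metadata, recently_accessed):
    """Route metadata and hot data to SSD; send bulk/archived data to SATA."""
    if is_metadata or recently_accessed:
        return Tier.SSD
    return Tier.SATA

# Example: a file's metadata lands on SSD, its cold payload on SATA.
print(place(is_metadata=True, recently_accessed=False))   # Tier.SSD
print(place(is_metadata=False, recently_accessed=False))  # Tier.SATA
```

Keeping the decision inside the system footprint, as above, is what allows the data to move between tiers without crossing the LAN.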
BlueArc's Affeld also sees economic potential for SSD when it is employed as a "rotating" cache for popular files that change in identity from day to day. One example is when an Internet service provider serves up fee-based content. "When a new movie gets released, there is high demand," he notes. "It makes sense to pull the movie off of storage archives and to hold it in SSD cache, on a high-performance computing tier." A second example is in the process of film-making itself, where in a step called rendering, two-dimensional images are transformed into 3D movie simulations that film editors, artists, directors, and others work with as they produce a viewer-ready product.
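A "rotating" cache of this kind can be approximated with a least-recently-used (LRU) eviction policy. The sketch below is illustrative only; SSDCache and fetch_from_archive() are hypothetical names, and the article does not specify the actual eviction scheme.

```python
# Minimal sketch of a rotating SSD cache for popular content, assuming
# LRU eviction. Real content-delivery caches are far more elaborate.
from collections import OrderedDict

def fetch_from_archive(title):
    """Placeholder for pulling a title off the storage archive tier."""
    return f"<contents of {title}>".encode()

class SSDCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self._items = OrderedDict()   # insertion order tracks recency

    def get(self, title):
        if title in self._items:
            self._items.move_to_end(title)   # mark as recently popular
            return self._items[title]
        data = fetch_from_archive(title)     # cache miss: go to archive
        self._items[title] = data
        if len(self._items) > self.capacity:
            self._items.popitem(last=False)  # evict the least popular title
        return data

cache = SSDCache(capacity=2)
cache.get("new_release")   # pulled from archive, now held on SSD
cache.get("new_release")   # served from the SSD cache
```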
Of course, not all SSDs are created equal. It is ultimately up to customers to determine which systems bring the breakthrough performance that they are both looking for and willing to pay for.
BlueArc is one vendor that works with several different SSD suppliers and has had to confront this question of best-in-class performance versus the flexibility to choose competitively among products from a variety of vendors. "We do support a number of SSD solutions, but we prefer to qualify the product that we support," says Affeld. "We have cross-support agreements with all of our vendors and look for strict conformance to industry standards."

A second SSD issue is the limited number of write/erase cycles an SSD can sustain before it wears out. "Improvements continue to be made in material science and electronics," notes SNIA's Adams, "and a stopgap algorithm is now in place to overcome that."
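The "stopgap algorithm" Adams mentions is presumably wear leveling, which spreads writes across flash blocks so that no single block exhausts its erase budget prematurely. A toy sketch of the block-selection step, with a hypothetical interface, follows:

```python
# Toy sketch of wear leveling: always write to the least-worn block.
# The block counts and interface are hypothetical, for illustration only.
def pick_block(erase_counts):
    """Return the index of the least-worn block to write to next."""
    return min(range(len(erase_counts)), key=lambda i: erase_counts[i])

erase_counts = [120, 95, 130, 96]   # erase cycles consumed per block
target = pick_block(erase_counts)   # block 1, the lowest wear count
erase_counts[target] += 1           # record the new erase cycle
```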
Meanwhile, SNIA expects to see significant evolution in the storage market over the next 24 months. It will come down to media that are faster and denser, and then to protocols and packaging.
"One of the issues that might be explored is whether the I/O standard areas need to be modified," says SNIA's Franceschini. "There will also be exploration from the standards architecture standpoint on when changes can adapt the environment to SSDs. It's too early to tell which direction this will go. One of things that need to be addressed is ensuring that there is compatibility with existing technologies because new technologies like SSDs are likely to be embedded into what already is deployed."