As solid state storage fills up and pre-erased cells become scarce, the drive must first erase old cells before it can write new data, and performance degrades. Some systems run cleanup routines during idle time to reclaim unused cells ahead of demand, maintaining performance. The better these routines are, and the more spare cells they have to work with, the more consistent performance will stay. The second performance issue may lie in the storage systems that the solid state storage is installed into. Legacy storage systems were designed around high-latency mechanical drives; these systems need to be re-engineered or replaced with systems designed for low-latency solid state storage.
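The benefit of idle-time cleanup can be shown with a toy model. Everything here is illustrative (the costs, class, and method names are invented for this sketch; real SSD firmware is far more complex): the point is simply that when no pre-erased block is available, the slow erase lands on the write path.

```python
# Toy model of erase-before-write cost in flash storage.
# All names and costs are illustrative, not any vendor's firmware.

ERASE_COST = 10   # erasing a block is much slower...
WRITE_COST = 1    # ...than programming one

class FlashSim:
    def __init__(self, blocks):
        self.free = blocks      # pre-erased blocks ready for writes
        self.dirty = 0          # stale blocks awaiting erase
        self.time = 0           # accumulated "latency"

    def write(self):
        if self.free == 0:          # no erased block available:
            self.dirty -= 1         # must erase inline, on the write path
            self.free += 1
            self.time += ERASE_COST
        self.free -= 1
        self.dirty += 1
        self.time += WRITE_COST

    def idle_gc(self, n):
        # Background cleanup: erase stale blocks while the drive is idle,
        # so later writes find pre-erased cells and skip the erase penalty.
        while n > 0 and self.dirty > 0:
            self.dirty -= 1
            self.free += 1
            n -= 1

busy = FlashSim(blocks=4)
for _ in range(20):
    busy.write()                # no idle time: erases hit the write path

tidy = FlashSim(blocks=4)
for _ in range(20):
    tidy.write()
    tidy.idle_gc(1)             # idle-time cleanup keeps free blocks stocked

# busy.time == 180, tidy.time == 20
```

In this toy run the drive with idle-time cleanup pays only the write cost, while the drive without it pays the erase cost on almost every write once its spare cells run out.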
Reliability is a top concern for solid state storage manufacturers and, not surprisingly, for their potential customers. Most solid state storage vendors have taken great strides to address wear leveling, essentially ensuring that all the cells in a module are written to evenly. The general consensus at the Flash Memory Summit was that you can expect three to five years out of a solid state storage device, similar to hard disk technology. The real issue is how to protect the data in these systems when a drive fails.
The write overhead of traditional RAID can be an issue here, and the cost of mirroring a large set of solid state disks could be prohibitive. There is a need for a data protection method designed specifically for solid state storage. New technologies and processes are also coming to market that will increase flash reliability; look for vendors to move beyond basic ECC correction.
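The wear leveling mentioned above can be sketched as a toy allocator that always directs the next write to the least-worn block, so erase/program cycles spread evenly across the module. This is a hypothetical illustration only; real flash translation layers also track erase counts persistently, relocate static data, and map logical to physical addresses.

```python
# Toy wear-leveling allocator: route each write to the least-worn block.
# Illustrative sketch; real controllers are far more sophisticated.
import heapq

class WearLeveler:
    def __init__(self, num_blocks):
        # Min-heap of (erase_count, block_id): least-worn block pops first.
        self.heap = [(0, b) for b in range(num_blocks)]
        heapq.heapify(self.heap)

    def write(self, data):
        wear, block = heapq.heappop(self.heap)   # pick least-worn block
        # ... erasing and programming `data` into `block` would happen here ...
        heapq.heappush(self.heap, (wear + 1, block))
        return block

wl = WearLeveler(num_blocks=4)
for i in range(12):
    wl.write(f"payload-{i}")

counts = sorted(w for w, _ in wl.heap)
# After 12 writes over 4 blocks, wear is perfectly even: [3, 3, 3, 3]
```

Because every cell tolerates only a finite number of erase cycles, spreading the wear this way is what lets vendors quote module lifetimes in years rather than in rewrites of a single hot block.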
Assuming that performance and reliability are at least reasonable, the most important factor is price. Why did most SAN infrastructures upgrade from 2Gb Fibre Channel to 4Gb FC? Because it became the same price or cheaper than 2Gb. The same will hold true for solid state storage: the moment you can buy an SSD array for the same price as a standard mechanical drive array, you will.
Reaching a price point that is acceptable to the data center is going to take more than a plummet in flash prices. A key issue is density: how much solid state memory can fit into a single unit? Again, this may be a challenge for legacy systems that use the drive form factor. SSDs are not as space efficient as a system built solely from memory modules, because legacy shelves were designed for hard drives, where extra space is needed to handle vibration and heat. The performance and reliability advantages that individual solid state vendors develop will grow the pool of early adopters and help those companies establish a foothold in the market.
Another factor is time and the realities of the IT infrastructure. Despite our goal of a dynamic, flexible data center, the reality is that the infrastructure changes quite slowly. Flash storage has been available in business-ready packaging for only a little over three years; enterprise-ready packaging is just now arriving. Given enough time, improved reliability, systems better designed to take advantage of solid state storage, and the continued decline in solid state prices will lead to a data center where most of the storage no longer spins. It just won't be next year or the year after that.