The VMware Write Cache Challenge, Solved
Server-side write caching can help improve storage network performance. Here's what you need to know.
January 31, 2013
Implementing solid state disk (SSD) in a VMware host server has become a popular option to maximize the return on a solid state investment. Server-side SSD eliminates the potential bottleneck of the storage network and places high-performance storage directly in the hosts that need it most.
Caching is the most common function to add to server-side SSD to automate the use of this high-performance tier. A caching layer automatically copies the most active portion of data from SAN or local DAS storage into the SSD in the server. This means that subsequent reads come from the SSD in the server, not from mechanical hard drives on the other side of the storage network.
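The promotion-and-eviction cycle described above can be sketched in a few lines. This is a minimal, illustrative model only: the dictionaries standing in for the SAN and the server SSD, the `ReadCache` name, and the tiny capacity are all assumptions for demonstration, not any vendor's implementation.

```python
from collections import OrderedDict

class ReadCache:
    """Toy server-side read cache. Hot blocks are promoted into a fast
    tier (standing in for server SSD); misses fall through to the slow
    backing store (standing in for SAN or DAS storage)."""

    def __init__(self, backing_store, capacity=4):
        self.backing = backing_store   # dict: block id -> data (the "SAN")
        self.cache = OrderedDict()     # LRU order, oldest entry first
        self.capacity = capacity
        self.hits = self.misses = 0

    def read(self, block):
        if block in self.cache:
            self.hits += 1
            self.cache.move_to_end(block)   # refresh LRU position
            return self.cache[block]
        self.misses += 1
        data = self.backing[block]          # slow path: across the storage network
        self.cache[block] = data            # promote into the SSD tier
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict the coldest block
        return data
```

The first read of a block is a miss served across the storage network; repeated reads of the same block are then served from the fast tier, which is the effect server-side caching is after.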
Most caching solutions that operate in the VMware environment are read-only, meaning that the write portion of I/O drops through the cache and onto the intended storage medium -- in most cases, a shared hard disk. The good news is that most virtualized server environments are slightly more read-heavy; a 60% read mix is common. Also, when those writes do occur, they essentially get unfettered access to the storage network, so, in theory, write performance should improve even with read-only caching.
Still, given those numbers, 40% of I/O is a significant amount of writes, and writes are generally slower. As discussed in "Can You Trust VDI Benchmarks?" most virtual desktop infrastructures (VDI) become more write-heavy once the initial boot storm is complete. In other words, write performance is important to both virtual server and virtual desktop environments.
How Caches Write
To understand the challenge with write caching, it's helpful to compare the processes. Without caching, when an application needs to write data, that application sends the write to the locally or SAN-attached storage. The application then waits for an acknowledgement from the storage device that the data has been safely written before moving on to the next operation.
A read cache, like no caching at all, can pass the write directly to the hard disk. Alternatively, a read cache can capture the write while simultaneously writing it to the hard drive. This alternative is known as write-through caching, and it has the advantage of pre-caching popular data: the theory is that what was most recently written is most likely to be read next. This technique helps get the right data into the cache sooner, improving read hit rates, but it does not help with write performance. In both cases, though, the data is safe on the hard disk and the cache is always in sync with it.
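A write-through cache can be sketched the same way: the write lands on the disk and in the cache before it is acknowledged, so the two are always in sync. The class name and dictionary-backed "disk" here are illustrative assumptions, not a real product's API.

```python
class WriteThroughCache:
    """Sketch of write-through caching: every write goes to the backing
    disk AND the cache before the write is acknowledged, so the cache
    is always in sync with the disk and recent writes are pre-cached."""

    def __init__(self, backing_store):
        self.backing = backing_store   # dict standing in for the hard disk
        self.cache = {}                # dict standing in for server SSD

    def write(self, block, data):
        self.backing[block] = data     # slow, durable write happens first
        self.cache[block] = data       # pre-cache the recently written data
        return "ack"                   # acknowledged only after the disk has it

    def read(self, block):
        # recently written data is served from the SSD tier, not the disk
        return self.cache.get(block, self.backing.get(block))
```

Note that `write` still waits on the disk, which is why write-through caching improves read behavior but not write latency.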
The Write Risk
In both of these read-cache implementation methods, there is little risk of data loss if the cache fails. VMware operations do change how a read cache must be repopulated, but no data should be at risk. Write caching, known as write-back caching, stores the application's write request in the cache and acknowledges a successful write before that data is stored on the hard disk. There is no latency waiting for the hard disk to receive the data.
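The write-back behavior, and the window of risk it creates, can be sketched as follows. This is a minimal model under the same assumptions as above (hypothetical class, dictionaries standing in for SSD and disk); the set of dirty, unflushed blocks is exactly the data that would be lost in a cache or server failure.

```python
class WriteBackCache:
    """Sketch of write-back caching: the write is acknowledged as soon
    as it lands in the cache; the disk is updated later by a flush.
    Any block still in `dirty` is at risk if the cache or server fails."""

    def __init__(self, backing_store):
        self.backing = backing_store   # dict standing in for the hard disk
        self.cache = {}                # dict standing in for server SSD
        self.dirty = set()             # blocks acknowledged but not yet on disk

    def write(self, block, data):
        self.cache[block] = data
        self.dirty.add(block)
        return "ack"                   # acknowledged BEFORE the disk write

    def flush(self):
        # destage dirty blocks to the hard disk in the background
        for block in list(self.dirty):
            self.backing[block] = self.cache[block]
            self.dirty.discard(block)
```

Immediately after `write` returns its acknowledgement, the data exists only in the cache; only after `flush` runs is it safe on disk, which is the gap the next section discusses.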
While there can be tremendous performance advantages to server-side write caching, there is risk as well. If the cache or the server fails before the data reaches the disk, that data is lost. In a VMware environment, there is also the question of how to handle virtual machine migration. In my next column I'll discuss some of the ways that vendors are working around the write risk to provide high performance in a reliable fashion.