SSD Onslaught Spotlights Defrag Debate
October 7, 2009
Let's stipulate that the proliferation of solid-state drives is a fact, and that, based on their high performance, SSDs are in the process of rapidly moving from a high-priced consumer curiosity into an increasingly popular data-center storage technology. So now the big questions revolve around dealing with a whole new set of disk "gotchas."
For moving platters, things have long been well understood and we're all comfortable. You figure out your RAID implementation and then select a defragmentation technology, either via scheduling in Windows or using an external defrag package. Of course, the main worry with conventional storage remains head crashes, but then again that's why we've got RAID redundancy.
With SSDs -- and I don't mean to be flip, but play word association and see if I'm not correct -- the dominant concern shifts from head crashes to cost. As in, I know this stuff will be killer, but, boy, does this speed still come at a price. OK, I know the reason we're able to stipulate that SSDs are beginning to proliferate is that these prices are rapidly dropping. Indeed, Ryan Petersen, CEO of memory and SSD maker OCZ Technology, was recently quoted as predicting that SSDs would catch up to conventional hard drives in price and capacity by late 2011 or early 2012.
For now, and for some time, though, one will want to be selective about SSD usage. Indeed, as NetworkComputing blogger George Crump recently noted, you want to link your SSD usage to those apps that can best take advantage of its speed.
Which brings me to the "debate" point of this post: To defragment or not to defragment? That is the question. There has been a lot of online ink devoted to the argument that SSD defragmentation is a waste of time. The gist of this argument is that any SSD memory cell can be accessed as quickly as any other memory cell, because you don't have to physically move a head across a platter. Thus, defragmentation is unnecessary.
The second leg of this argument is that SSD cells can endure only a limited number of writes before they become fried, and therefore defragging actually shortens the life of your drive.
Most notably, Intel itself says you don't need to defragment, and does so very publicly, on its SSD FAQ. ("SSD devices, unlike traditional HDDs, see no performance benefit from traditional HDD defragmentation tools.")
On the other side of the argument (and I'm summarizing), SSD defrag proponents say that you can indeed boost performance, because defragmentation obviates a bunch of time-sucking erase-write cycles. That is, if you clean your SSD out, a program won't be hitting a sector where it has to erase many of the memory cells before it can actually write the desired data. It can do thousands of plain writes, each of which is incrementally faster (and sometimes a whole lot faster) than erase-write, erase-write, repeat.
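To make that erase-write argument concrete, here's a toy Python sketch. The timings and counts are invented purely for illustration (they're not measured from any real drive); the point is simply how fast the erase penalty adds up once every write lands on a dirty block.

```python
# Toy model of why pre-erased (clean) flash blocks speed up writes.
# All timings are invented for illustration -- real SSD numbers vary widely.

WRITE_US = 200    # assumed cost to program an already-erased block (microseconds)
ERASE_US = 1500   # assumed extra cost to erase a dirty block before rewriting it

def write_cost_us(blocks_to_write, clean_blocks_available):
    """Total time to write `blocks_to_write` blocks when only
    `clean_blocks_available` of them are already erased."""
    clean_hits = min(blocks_to_write, clean_blocks_available)
    dirty_hits = blocks_to_write - clean_hits
    return clean_hits * WRITE_US + dirty_hits * (ERASE_US + WRITE_US)

if __name__ == "__main__":
    n = 1000
    fresh = write_cost_us(n, clean_blocks_available=n)  # every write hits a clean block
    full = write_cost_us(n, clean_blocks_available=0)   # every write pays an erase first
    print(f"all-clean drive: {fresh / 1000:.0f} ms for {n} writes")
    print(f"all-dirty drive: {full / 1000:.0f} ms for {n} writes")
```

With these made-up numbers the all-dirty case takes several times longer, which is the shape of the proponents' claim, whatever the real-world magnitude turns out to be.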
Perhaps the biggest public proponent of SSD defragmentation is Diskeeper Inc., which makes an SSD defrag product called HyperFast. Howard Butler, senior technical engineer at Diskeeper, recently came by to talk about the impending introduction of Diskeeper 10.
During our chat, Butler also made the case for HyperFast. "People would assume that fragmentation is related solely to the mechanical moving pieces of a drive," he said. "That was a false assumption because Windows -- more specifically, the NTFS file system -- doesn't treat files any differently on an SSD. It just sees it as a device."
Butler says that studies performed on HyperFast showed that it could deliver a 3x performance increase on reads for an SSD as compared with a mechanical drive. "People thought you couldn't do [that]," he said, "because there are no seek or mechanical rotation issues. But it occurred because Windows did not have to issue multiple read requests." He also claims a 20x improvement on writes.
For me, perhaps the most telling factor in this debate comes from the SSD vendors themselves. It's not what they say -- as noted in the Intel FAQ above, they're not big public proponents. Yet actions speak louder than words, and in their actions one can see something akin to support for defrag technology.
The canonical case occurred back in February 2009, when Intel released a firmware update for its X-25 series SSDs. What happened was that the performance of these drives degraded over time as sectors filled up. As best as I can tell, the firmware update revised the wear-leveling algorithms that run in the background. Now, wear-leveling is not specifically defragmentation. Rather, it's monitoring to make sure the write cycles are spread broadly across all the cells, so some of them don't hit their end-of-life limit way earlier than others. (I think it's kind of cool how SSDs have their own version of what we have, where our cells will eventually stop duplicating.)
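For anyone curious what "spreading the write cycles" looks like, here's a minimal wear-leveling sketch in Python. The pick-the-least-worn-block policy, the block counts, and the "hot region" workload are all assumptions chosen for illustration; real controller firmware is considerably more sophisticated.

```python
import random

# Minimal wear-leveling sketch. Block counts and the workload are invented
# for illustration; real SSD controllers use far more elaborate policies.

NUM_BLOCKS = 64
WRITES = 10_000

def simulate(leveled):
    """Return per-block erase counts after WRITES block writes."""
    erase_counts = [0] * NUM_BLOCKS
    for _ in range(WRITES):
        if leveled:
            # Wear leveling: send each write to the least-worn block so far.
            block = min(range(NUM_BLOCKS), key=lambda b: erase_counts[b])
        else:
            # No leveling: the workload keeps hammering a small "hot" region.
            block = random.randrange(8)
        erase_counts[block] += 1
    return erase_counts

if __name__ == "__main__":
    for leveled in (False, True):
        worst = max(simulate(leveled))
        label = "with wear leveling   " if leveled else "without wear leveling"
        print(f"{label}: worst-case erases on a single block = {worst}")
```

The unleveled run concentrates thousands of erases on a handful of blocks, while the leveled run keeps every block's count low and roughly even, which is exactly the end-of-life problem wear leveling exists to avoid.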
Now, as I said, wear-leveling isn't defragmenting. On the other hand, it seems like it could clearly conflict with background defragmentation, especially defragmentation conducted under the control of a third-party program, and doubly so when you (the manufacturer's firmware) are doing garbage collection as well.
Speaking of which, the last data point I'll leave you with is from OCZ, which offers for download a utility described as providing "user-initiated garbage collection, helps restore performance on Vertex, Vertex Turbo, Vertex EX, and Agility." Background garbage collection is, in essence, another name for defragmentation-style cleanup done during idle time.
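To picture what that idle-time cleanup amounts to, here's a short Python sketch. The page states, the staleness threshold, and the copy-then-erase policy are simplifications I've assumed for illustration; this is not OCZ's (or anyone's) actual algorithm.

```python
# Toy idle-time garbage collector: when a block is mostly stale, copy out its
# still-valid pages and erase it, so future writes land on pre-erased space.
# Thresholds and layout are assumed for illustration, not any vendor's design.

PAGES_PER_BLOCK = 8
STALE_THRESHOLD = 0.5   # collect a block once half its pages are stale (assumed)

def collect_idle(blocks):
    """blocks: list of blocks, each a list of page states ('valid'/'stale'/'free').
    Returns (blocks reclaimed, valid pages relocated)."""
    relocated = []
    reclaimed = 0
    for i, pages in enumerate(blocks):
        if pages.count("stale") / PAGES_PER_BLOCK >= STALE_THRESHOLD:
            relocated.extend(p for p in pages if p == "valid")   # copy valid data out
            blocks[i] = ["free"] * PAGES_PER_BLOCK               # erase: block is clean again
            reclaimed += 1
    return reclaimed, len(relocated)

if __name__ == "__main__":
    drive = [
        ["valid", "stale", "stale", "stale", "valid", "stale", "free", "free"],
        ["valid"] * PAGES_PER_BLOCK,   # healthy block, left alone
    ]
    print("reclaimed blocks, relocated pages:", collect_idle(drive))
```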
So maybe what the SSD vendors are telling us is not that defragmentation is bad, or completely unnecessary, or that it doesn't work. (Though why they overtly say it's a waste of time, when they're performing cell wear-leveling and garbage collection themselves, is a mystery. Or disingenuous.) Maybe what they're saying is that they'd rather take care of this stuff all by themselves.
P.S. As a final aside, here's an interesting thread on Intel's forums on HyperFast and Intel's SSDs. This response was posted by a person who identifies himself as Michael Materie of Diskeeper: "HyperFast is designed to improve write performance on SSDs. On the SSDs for which it is applicable, it offers a great performance gain. That said, I've personally tested the Intel drives and, unless you have really unusual usage, you won't need to optimize them with this tool."
Follow me on Twitter. Write to me at [email protected].