No longer is the business mantra merely "Do More With Less." It's now more like "Do More With Nothing!" In the midst of a Category Five economic hurricane, IT budgets are being slashed while IT managers are still expected to shrink the data center footprint, consume less power, reduce management overhead, cut capital budgets, and shave operating expenditures. To compound the pain, with each cost-cutting initiative successfully implemented, IT planners have fewer viable options for continuing to meet business edicts over time.
Fortunately, there are still creative ways for IT planners to pick some low-hanging, cost-cutting fruit while gaining significant performance improvements at no risk to the business. One answer is solid-state disks (SSDs): once a cost-prohibitive way to store data, they can now help IT organizations drive big performance gains while actually lowering costs.
Mechanical disk drive technology has served as the primary storage medium for decades. Whether businesses purchased internal, direct-attached, or network-based storage for their applications, the basic architectural model was the same: provide some sort of cache/silicon-based memory frontend (whether internal to the server, onboard a disk controller, or a special reserve inside an array) to a larger mechanical disk storage pool at the backend. The idea is to try to keep real-time data inside cache memory long enough that application response doesn't suffer from the delays incurred waiting for I/Os from spinning disks.
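The cache-in-front-of-disk model described above can be sketched in a few lines. The snippet below is a toy LRU (least-recently-used) cache, not any vendor's actual caching logic; the `disk_read` callback and block naming are illustrative stand-ins for a slow fetch from spinning disk.

```python
from collections import OrderedDict

class LRUCache:
    """Toy LRU cache fronting a slow 'disk' fetch (illustrative only)."""

    def __init__(self, capacity, disk_read):
        self.capacity = capacity
        self.disk_read = disk_read   # slow path: stands in for spinning disk
        self.cache = OrderedDict()   # ordered from least to most recently used
        self.hits = 0
        self.misses = 0

    def get(self, block_id):
        if block_id in self.cache:
            self.hits += 1
            self.cache.move_to_end(block_id)   # mark as most recently used
            return self.cache[block_id]
        self.misses += 1
        data = self.disk_read(block_id)        # delay the cache tries to hide
        self.cache[block_id] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)     # evict least recently used
        return data

# A small cache in front of a much larger "disk": the hot block (1)
# stays resident, while colder blocks churn in and out.
cache = LRUCache(capacity=2, disk_read=lambda b: f"data-{b}")
for b in [1, 2, 1, 3, 1, 2]:
    cache.get(b)
print(cache.hits, cache.misses)  # → 2 4
```

The key point the sketch makes concrete: with a cache far smaller than the backend, hit rates depend entirely on access patterns staying concentrated on a few hot blocks.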
The challenge is that cache represents a very small percentage of the overall storage backend. As a result, storage and application administrators must continually tune and reconfigure systems to ensure that the most frequently accessed data sets stay in cache, or at the very least can be read quickly from disk by striping data sets across a large number of very fast, expensive disk drives.
Application and system administrators generally employ several workarounds to overcome cache memory constraints. In short, these include configuring large pools of high-speed (15,000 rpm) Fibre Channel drives in conjunction with multiple high-performance servers running multi-threaded applications.
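The striping technique mentioned above spreads logical blocks round-robin across a pool of drives so that sequential reads fan out over many spindles at once. The following is a minimal sketch of RAID-0-style block placement; the function name and parameters are hypothetical, chosen only to illustrate the address arithmetic.

```python
def stripe_location(block, n_drives, stripe_depth=1):
    """Map a logical block to (drive, offset) under round-robin striping.

    block        -- logical block number
    n_drives     -- number of drives in the stripe set
    stripe_depth -- consecutive blocks written to a drive before moving on
    """
    stripe = block // stripe_depth            # which stripe unit this block is in
    drive = stripe % n_drives                 # stripe units rotate across drives
    offset = (stripe // n_drives) * stripe_depth + block % stripe_depth
    return drive, offset

# With 3 drives and a stripe depth of 2, blocks 0-5 fill one full row
# across the drives; block 6 wraps back to drive 0 at the next offset.
for b in range(7):
    print(b, stripe_location(b, n_drives=3, stripe_depth=2))
```

Because adjacent stripe units land on different drives, a large sequential read keeps all spindles busy simultaneously, which is exactly why administrators spread hot data sets across many fast drives when cache alone can't hold them.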