Special Coverage Series

Network Computing


Commentary

Howard Marks, Network Computing Blogger

RAM Caching Vs. SSDs: A Startup's Gamble

Startup Infinio bucks the SSD trend by using RAM caching instead of flash to accelerate NFS storage performance. Is it a good bet?

One of the things I enjoy most about attending Tech Field Day events is when innovative companies are brave enough to make their first public appearances in front of an obstreperous group of bloggers and on live Internet video. The latest to run the TFD gauntlet is Infinio. Its Infinio Accelerator bucks the trend of using SSDs to accelerate NFS storage performance for vSphere hosts, and instead uses server RAM as a cache.

RAM caching isn't a new idea. I remember back in the '80s my friend John P. Davis promoted Multisoft's Super PC-Kwik caching software for MS-DOS. Because the disk drives of the day were pretty slow (an ST-412 spun at 3,600 RPM and had an average seek time of 85 ms) and people still used floppy disks for real work, even a small RAM cache made a big difference in system performance.


I'm not a big fan of RAM caches for applications like database servers. First, there's the cost of RAM. If you can get enough RAM into your server for the application and cache using 16-Gbyte DIMMs, your incremental cost will be about $13 per gigabyte; upgrade to 32-Gbyte DIMMs and the price roughly doubles to $25 per gigabyte. By contrast, an Intel 910 PCIe SSD will set you back just $5 per gigabyte.
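To put those per-gigabyte figures in perspective, here's a quick back-of-the-envelope calculation. The prices are the ones quoted above; the 256 GB cache capacity is just an illustrative assumption:

```python
# Rough cache-cost comparison at the per-gigabyte prices quoted
# above (2013 street prices, not current quotes). The 256 GB
# capacity is an arbitrary example.
PRICES_PER_GB = {
    "RAM (16-Gbyte DIMMs)": 13,
    "RAM (32-Gbyte DIMMs)": 25,
    "Intel 910 PCIe SSD": 5,
}

def cache_cost(capacity_gb, price_per_gb):
    """Total cost of a cache of the given capacity."""
    return capacity_gb * price_per_gb

for medium, price in PRICES_PER_GB.items():
    print(f"{medium}: ${cache_cost(256, price):,}")
```

At those prices, 256 GB of flash costs less than a fifth of the same capacity in 32-Gbyte DIMMs.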

Second, and more significantly, most database engines use RAM to cache data themselves. I assume that because the database engine has more information about the data, it can make better decisions about what data to cache than a simple disk caching solution. The database engine can easily choose to cache indexes and not cache log file writes, while a basic write-through cache wouldn't be able to tell these two types of data apart.
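For illustration, here's a minimal write-through cache sketch (a generic toy of my own, not any vendor's implementation) that shows why such a cache is type-blind: it keys everything by block address, so an index page and a log record look identical to it, while a database engine knows the difference:

```python
# Minimal write-through cache sketch (illustrative only).
# All blocks are cached purely by address -- the cache has no idea
# whether a block holds an index page or a log record, so it can't
# prioritize one over the other the way a database engine can.

class WriteThroughCache:
    def __init__(self, backing_store, capacity):
        self.backing = backing_store   # dict: block_addr -> data ("disk")
        self.capacity = capacity       # max number of cached blocks
        self.cache = {}                # block_addr -> data

    def read(self, addr):
        if addr in self.cache:
            return self.cache[addr]    # cache hit
        data = self.backing[addr]      # cache miss: go to disk
        self._insert(addr, data)
        return data

    def write(self, addr, data):
        self.backing[addr] = data      # write-through: disk first...
        self._insert(addr, data)       # ...then update the cache

    def _insert(self, addr, data):
        if len(self.cache) >= self.capacity:
            # Crude FIFO eviction; real products use smarter policies.
            self.cache.pop(next(iter(self.cache)))
        self.cache[addr] = data
```

Every write lands on disk before the cache is updated, which is what keeps a write-through cache safe on failure, and also what prevents it from skipping the log writes a database engine would never bother to cache.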

[Worried about SSD failures? It may not be as bad as you think. Howard Marks explains why in "SSDs and the Write Endurance Boogeyman."]

However, things are very different in a virtualized environment, where each VM claims, and holds, as much memory as it can. A common RAM cache that can dynamically allocate cache space to VMs as they make demands for storage access makes a lot more sense here. That's especially true if the caching engine deduplicates the cached data so common data, like common Windows DLLs, are only stored in the cache once.

Infinio Accelerator not only deduplicates data in each server's cache, but treats the cache memory across all the servers running Accelerator in a cluster as a single cache pool and a single deduplication realm.
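The deduplication idea can be sketched with a toy content-addressed cache (my illustration of the general technique, not Infinio's actual design; all names are made up). Blocks are stored once, keyed by a hash of their contents, so two VMs booting from the same Windows image share a single cached copy of each common DLL block:

```python
import hashlib

# Toy deduplicating cache (illustrative sketch, not Infinio's design).
# Block data is stored once, keyed by a content hash; each
# (vm_id, block address) entry is just a pointer to the shared copy.

class DedupCache:
    def __init__(self):
        self.blocks = {}   # content hash -> block data (stored once)
        self.index = {}    # (vm_id, addr) -> content hash

    def put(self, vm_id, addr, data):
        digest = hashlib.sha256(data).hexdigest()
        self.blocks.setdefault(digest, data)   # dedup: store only once
        self.index[(vm_id, addr)] = digest

    def get(self, vm_id, addr):
        digest = self.index.get((vm_id, addr))
        return None if digest is None else self.blocks[digest]

    def unique_bytes(self):
        """RAM actually consumed by cached block data."""
        return sum(len(b) for b in self.blocks.values())
```

In a real clustered implementation the `blocks` dictionary would be a pool distributed across the hosts, which is what makes the cluster-wide cache a single deduplication realm.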

Infinio Accelerator installs as a virtual caching appliance in each accelerated host, plus a single management VM that presents the dashboard for managing the Accelerator instances in a cluster. The caching VM can, under heavy load, consume as many as two vCPUs and, by default, claims 1/16th of the server's RAM (a minimum of 8 Gbytes) for its cache. The administrator then assigns the NFS datastores (unfortunately, only NFS datastores) to be accelerated. Once Accelerator is running, it acts as a write-through cache.
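As a quick sanity check, the default sizing rule described above works out like this (a one-line sketch based on the figures in the text):

```python
# Default cache sizing as described: 1/16th of server RAM,
# with an 8 Gbyte floor.
def default_cache_gb(server_ram_gb):
    return max(server_ram_gb / 16, 8)
```

So a 256 Gbyte host gives up 16 Gbytes to the cache, while anything with 128 Gbytes or less surrenders the 8 Gbyte minimum.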

One big advantage Infinio has over its flash competitors is that the company makes it especially easy to sample the product. Installing a typical SSD caching product first requires installing a new SSD in the server and then some fiddling with datastore configurations to introduce the cache into the data stream. This generally means administrators have to vMotion their workloads off a test server, set up the SSD and cache, and then vMotion them back.

To test Infinio Accelerator, an admin simply has to download the evaluation software and install it (the software is free for 30 days), and the Infinio VM will be created. Because Infinio uses some very clever networking tricks to dynamically intercept the NFS traffic it wants to cache at the ESX vmkernel/vSwitch layer, it can insert the caching engine into the data stream without a reboot, even while VMs are actively using the storage.

Infinio Accelerator is currently in limited beta test; a public beta (downloadable from Infinio's home page) is promised for release at VMworld. Real production software should be available later this year. Infinio is also promising SSD support, as a Level 2 cache, for a future release.

I'm impressed with Infinio Accelerator, and looking forward to getting it into the lab soon, but I'm afraid its NFS-only support might limit its potential market. The company hinted at future block storage and Hyper-V support in the TFD presentation, but I'm not holding my breath.

For the full, or almost full, Tech Field Day experience, you can watch the Infinio presentations, and my snarky comments, here.

Does Infinio's approach make sense to you? If you're planning to invest in SSDs for storage acceleration, would you consider RAM caching instead? Use the comment box below and let me know what you think.

Disclaimer: Infinio was a sponsor of Tech Field Day 9. Gestalt IT, the organizer of the Tech Field day events, pays the travel expenses of TFD delegates, including this intrepid reporter.


