Nimbus Gemini Solid-State Storage Built From the Ground Up

Nimbus rolls out Gemini, the replacement for its S-Class storage system, offering its first set of fully custom hardware, and much more. Get all the details.

Howard Marks

August 24, 2012


Nimbus Data made the big switch to all-solid-state storage about three years ago, when it introduced the S-Class unified storage system. The S-Class is an entry-level--if you can use that term for any all-flash device--system with a single controller based on a standard server motherboard. This week, Nimbus announced the replacement for the S-Class, dubbed Gemini.

For Gemini, Nimbus has designed its first set of fully custom hardware--a 2U chassis that holds 24 SSDs and one or two controller modules. There are two versions of the controller--one with four QSFP ports, each of which can take on an InfiniBand or Ethernet personality, and one with four SFP+ ports for Ethernet or Fibre Channel connections. Each Gemini controller uses a PCIe 3.0 bus both for connections to its partner in the chassis and to connect to multiple SAS interface chips, so each of the 24 SSDs in the system has a dedicated point-to-point connection to the controller CPU with no SAS expanders or shared links. The controllers also have an Intelligent Platform Management Interface (IPMI) Ethernet port for remote management.
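
To see why those dedicated links matter, here's a rough back-of-the-envelope check in Python. The 6-Gbps SAS lane rate and 8b/10b encoding overhead are my assumptions for illustration, not Nimbus specs:

```python
# Back-of-the-envelope: aggregate bandwidth of 24 dedicated SAS lanes
# versus Gemini's rated system throughput. Assumes 6-Gbps SAS (SAS-2);
# usable bandwidth is lower after 8b/10b encoding and protocol overhead.

SAS_LANE_GBPS = 6          # assumed per-lane line rate, gigabits/sec
ENCODING_EFFICIENCY = 0.8  # 8b/10b encoding leaves 80% for payload
NUM_SSDS = 24

usable_gbps = NUM_SSDS * SAS_LANE_GBPS * ENCODING_EFFICIENCY
usable_gbytes_per_sec = usable_gbps / 8

print(f"24 dedicated lanes: ~{usable_gbps:.0f} Gbps "
      f"(~{usable_gbytes_per_sec:.0f} GBps) of usable backend bandwidth")
# ~115 Gbps / ~14 GBps -- comfortably above the 12 GBps the system is
# rated for, with no expander contention to eat into it.
```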

Both controllers run Nimbus's Halo operating system, which provides block and file access along with a full set of storage virtualization features, including space-efficient, high-performance snapshots and data deduplication. All features are included in the base software license.
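
Nimbus hasn't published Halo's internals, but block-level deduplication generally works by content hashing: identical blocks are stored once and referenced many times. A minimal illustrative sketch in Python (the class and method names are mine, not Halo's):

```python
import hashlib

class DedupStore:
    """Toy block-level dedup: each unique block is stored once, keyed by
    its content hash; logical addresses just point at hashes. Real systems
    add reference counting, garbage collection and overwrite handling."""

    def __init__(self):
        self.blocks = {}   # content hash -> block data (one physical copy)
        self.lba_map = {}  # logical block address -> content hash

    def write(self, lba, data):
        digest = hashlib.sha256(data).hexdigest()
        self.blocks.setdefault(digest, data)  # duplicates cost nothing new
        self.lba_map[lba] = digest

    def read(self, lba):
        return self.blocks[self.lba_map[lba]]

store = DedupStore()
store.write(0, b"A" * 4096)
store.write(1, b"A" * 4096)  # identical block: no additional flash used
print(len(store.blocks))     # 1 -- one physical block backs two LBAs
```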

Most storage systems use battery- or ultra-capacitor-protected RAM in their controllers as their first line of cache. This requires that data be stored in both controllers before writes are acknowledged so data in cache won't be lost in the event of a controller failure. For a system like Gemini that can support 12 GBps (96 Gbps) of throughput, the bandwidth between the controllers could easily become a bottleneck to write performance.
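
Some quick arithmetic shows the problem. Mirroring every write to the partner controller means the inter-controller link has to carry the full write stream; the PCIe link widths below are illustrative assumptions, not Gemini's actual topology:

```python
# Why mirrored controller cache becomes a bottleneck at Gemini's speeds.
# PCIe 3.0 moves ~0.985 GBps per lane after 128b/130b encoding; the
# x8/x16 link widths here are assumptions for illustration.

SYSTEM_WRITE_GBPS = 12.0                   # GBps the system is rated for
PCIE3_GBPS_PER_LANE = 8 * 128 / 130 / 8    # ~0.985 GBps usable per lane

for lanes in (8, 16):
    link = lanes * PCIE3_GBPS_PER_LANE
    verdict = "bottleneck" if link < SYSTEM_WRITE_GBPS else "adequate"
    print(f"PCIe 3.0 x{lanes}: ~{link:.1f} GBps -> {verdict} "
          f"for a {SYSTEM_WRITE_GBPS:.0f} GBps write stream")
# An x8 link (~7.9 GBps) can't keep up; even x16 (~15.8 GBps) leaves
# little headroom once that link carries other traffic as well.
```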

Rather than keep the cache in the controllers, Gemini uses the NVRAM in its 24 SSDs as its cache. Since RAM, including NVRAM, is much faster than flash, especially on writes, this cache lets Gemini deliver latency of just 100 microseconds. By load balancing write traffic across the SSDs and writing cache data to two or more SSDs, Gemini avoids that bottleneck and ends up with a much larger effective cache than the 8 Gbytes or so that a PCIe NVRAM card typically holds.
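
Nimbus hasn't detailed its placement algorithm, but the idea of mirroring cache writes across SSD-resident NVRAM is easy to sketch. In this illustrative Python (the names and the two-copy, least-loaded policy are my assumptions), each write lands in the NVRAM of two SSDs before it's acknowledged:

```python
import heapq

NUM_SSDS = 24
COPIES = 2  # assumed: each cached write is held on two SSDs

# Track (bytes_queued, ssd_id) so the least-loaded SSDs pop first.
ssd_load = [(0, ssd_id) for ssd_id in range(NUM_SSDS)]
heapq.heapify(ssd_load)

def cache_write(data: bytes) -> list[int]:
    """Place one cached write on the two least-loaded SSDs' NVRAM,
    then acknowledge. Returns the SSD ids holding the copies."""
    targets = [heapq.heappop(ssd_load) for _ in range(COPIES)]
    placed = []
    for load, ssd_id in targets:
        # (real system: DMA the data into that SSD's NVRAM here)
        heapq.heappush(ssd_load, (load + len(data), ssd_id))
        placed.append(ssd_id)
    return placed  # write is acknowledged once both copies land

print(cache_write(b"x" * 8192))  # e.g. [0, 1]
print(cache_write(b"x" * 4096))  # e.g. [2, 3] -- load spreads out
```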

Nimbus has been building its own SSDs for years. While the company uses the same merchant SSD controllers and flash chips as the SSD vendors, building its own SSDs lets Nimbus tweak the controller firmware to better fit its application and fine-tune the balance between NVRAM and flash in each device. The Gemini SSDs have sizable NVRAM caches and up to 2 Tbytes of flash memory in a 2.5-inch form factor.
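
The write path inside such a hybrid SSD is worth sketching: writes are acknowledged the moment they hit NVRAM and destaged to flash in the background. This toy Python model is my illustration of the general technique, not Nimbus's firmware (the 1-GB NVRAM size is an assumption):

```python
from collections import deque

class HybridSSD:
    """Toy model of an SSD that fronts flash with NVRAM: writes are
    acknowledged as soon as they hit NVRAM and flushed to flash later."""

    def __init__(self, nvram_bytes=1 << 30):   # assume 1 GB of NVRAM
        self.nvram = deque()
        self.nvram_free = nvram_bytes
        self.flash = {}

    def write(self, lba, data):
        if self.nvram_free < len(data):
            self.destage()                     # make room at flash speed
        self.nvram.append((lba, data))
        self.nvram_free -= len(data)
        return "ack"                           # acked at NVRAM latency

    def destage(self):
        while self.nvram:                      # background flush to flash
            lba, data = self.nvram.popleft()
            self.flash[lba] = data
            self.nvram_free += len(data)

ssd = HybridSSD()
print(ssd.write(0, b"y" * 4096))   # "ack" without touching flash
```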

Building its own SSDs also helps Nimbus keep its costs down so it can sell Gemini for $8 per gigabyte MSRP. With a custom SSD, Nimbus gets exactly the device it wants while saving the not-inconsiderable markup that enterprise SSD vendors charge. Nimbus also saves some money by using 2Xnm MLC flash rather than the more expensive eMLC flash, which can survive many more program/erase cycles and which it used in the S-Class system that Gemini replaces. Controller technology with more powerful DSP and ECC, combined with the systemwide flash management built into Nimbus's Halo operating system, has made MLC reliable enough for Nimbus to offer a 10-year warranty.
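
The warranty math is worth a sanity check. Assuming a 3,000 program/erase-cycle rating, typical of 2Xnm MLC, and ignoring write amplification, the arithmetic looks like this (the cycle count is my assumption, not a Nimbus figure):

```python
# Rough endurance check for a 10-year warranty on 2Xnm MLC.
# The 3,000 P/E cycle rating is an assumed typical figure; systemwide
# wear leveling and dedup stretch it further in practice.

CAPACITY_TB = 2            # per-SSD flash capacity
PE_CYCLES = 3_000          # assumed rating for 2Xnm MLC
YEARS = 10

lifetime_writes_tb = CAPACITY_TB * PE_CYCLES       # total TB writable
per_day_tb = lifetime_writes_tb / (YEARS * 365)

print(f"Lifetime write budget: {lifetime_writes_tb:,} TB")
print(f"Sustainable writes: ~{per_day_tb:.1f} TB/day for {YEARS} years")
# ~6,000 TB lifetime, or roughly 1.6 TB/day per 2-TB SSD -- plausible
# once wear leveling spreads writes across every drive in the chassis.
```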

Disclaimer: Nimbus Data is a client of DeepStorage LLC.

About the Author

Howard Marks

Network Computing Blogger

Howard Marks is founder and chief scientist at DeepStorage LLC, a storage consultancy and independent test lab based in Santa Fe, N.M., concentrating on storage and data center networking. In more than 25 years of consulting, Marks has designed and implemented storage systems, networks, management systems and Internet strategies at organizations including American Express, J.P. Morgan, Borden Foods, U.S. Tobacco, BBDO Worldwide, Foxwoods Resort Casino and the State University of New York at Purchase. The testing at DeepStorage Labs is informed by that real-world experience.

He has been a frequent contributor to Network Computing and InformationWeek since 1999 and a speaker at industry conferences including Comnet, PC Expo, Interop and Microsoft's TechEd since 1990. He is the author of Networking Windows and co-author of Windows NT Unleashed (Sams).

He is co-host, with Ray Lucchesi, of the monthly Greybeards on Storage podcast, where the voices of experience discuss the latest issues in the storage world with industry leaders. You can find the podcast at http://www.deepstorage.net/NEW/GBoS
