
SolidFire Aims SSD System At Cloud Providers

With the number of acquisitions in the storage business lately, you’d expect that we were in a mature market with declining product differentiation and innovation, but, from where I sit, nothing could be further from the truth. As fast as Dell and EMC can hoover them up, startup vendors keep coming up with new ideas and solutions aimed at the specific needs of targeted markets. SolidFire’s new all-SSD, scale-out, 10-Gbps iSCSI array that’s targeted directly at the cloud storage provider market is a case in point.

Since the SolidFire Storage Solution was designed for the cloud provider market, it came as no surprise to me that its design and feature set fit that user community’s needs especially well. To start, the SolidFire system is a scale-out storage system built from industry-standard 1U servers and solid-state drives (SSDs), so providers can buy capacity--and, of course, performance--as they add customers. A single system can have as many as 100 SF3010 storage nodes, each with ten 300-GByte SSDs, and the cluster can support as many as 100,000 volumes.
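
A quick back-of-the-envelope sketch, using only the per-node figures above, shows how capacity scales with node count; the Python below is just illustrative arithmetic, not anything from SolidFire:

```python
# Back-of-the-envelope cluster sizing from the figures above:
# up to 100 SF3010 nodes, each with 10 x 300-GByte SSDs.
SSD_GBYTES = 300
SSDS_PER_NODE = 10
MAX_NODES = 100

def raw_capacity_tbytes(nodes: int) -> float:
    """Raw (pre-dedup, pre-compression) SSD capacity for a given node count."""
    return nodes * SSDS_PER_NODE * SSD_GBYTES / 1000

for nodes in (1, 10, MAX_NODES):
    print(f"{nodes:3d} nodes -> {raw_capacity_tbytes(nodes):6.1f} TBytes raw")
# 1 node is 3 TBytes raw; a full 100-node cluster is 300 TBytes raw,
# before any data reduction is taken into account.
```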

While enterprise users standardized long ago on highly reliable multicontroller NAS systems and Fibre Channel SANs, those solutions have been too expensive for most cloud providers. Instead, they’ve built clustered file systems and object stores that scale out. SolidFire’s iSCSI solution will be easier to insert into those data centers than the Fibre Channel-based competitors from Texas Memory Systems or Violin Memory. Being based on iSCSI also simplifies scale-out, allowing SolidFire to use IP redirects to send I/O requests to the node that holds the data chunk being accessed.
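
SolidFire hasn’t published the details of its redirect mechanism, but the general idea is straightforward: any node can accept a request, and if it doesn’t own the chunk in question, it redirects the initiator to the node that does rather than proxying the I/O itself. A minimal conceptual sketch, with the placement scheme and node addresses entirely hypothetical:

```python
# Conceptual sketch only -- SolidFire's actual redirect mechanism is not public.
# Any node can take a request; if it doesn't own the chunk, it answers with a
# redirect to the owning node rather than forwarding the data itself.
from hashlib import sha256

NODE_ADDRESSES = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]  # hypothetical node IPs

def owning_node(chunk_hash: bytes) -> str:
    """Pick the node responsible for a chunk (simple modulo placement, assumed)."""
    return NODE_ADDRESSES[int.from_bytes(chunk_hash[:4], "big") % len(NODE_ADDRESSES)]

def handle_read(local_address: str, chunk_hash: bytes):
    owner = owning_node(chunk_hash)
    if owner == local_address:
        return ("DATA", chunk_hash)      # serve the chunk from local SSDs
    return ("REDIRECT", owner)           # point the initiator at the right node

digest = sha256(b"example 4K chunk").digest()
print(handle_read("10.0.0.11", digest))
```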

I was impressed by how SolidFire takes advantage of the dual six-core processors in each node to do inline deduplication and compression to squeeze more data into the expensive SSDs. As data is written to the system, it’s broken into 4-KByte chunks and hashed; the chunks are compressed and placed in what is essentially content-addressable storage, indexed by their hashes; and a volume store tracks chunk locations, mapping chunks to user volumes and blocks.
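
That description maps neatly onto a few lines of code. The sketch below illustrates the general technique--split writes into 4-KByte chunks, hash each one, store each unique compressed chunk once, and keep a per-volume map from block numbers to chunk hashes--though the hash choice and layout here are my assumptions, not SolidFire’s design:

```python
# Illustrative sketch of inline dedup + compression into a content-addressable
# store; SHA-256 and this layout are assumptions, not SolidFire's implementation.
import zlib
from hashlib import sha256

CHUNK = 4096  # 4-KByte chunks, as described above

chunk_store = {}   # hash -> compressed chunk (each unique chunk stored once)
volume_map = {}    # (volume, block number) -> chunk hash

def write(volume: str, offset: int, data: bytes) -> None:
    """Break a write into 4K chunks, dedupe by hash, compress new chunks."""
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK].ljust(CHUNK, b"\0")
        digest = sha256(chunk).digest()
        if digest not in chunk_store:            # store each unique chunk only once
            chunk_store[digest] = zlib.compress(chunk)
        volume_map[(volume, (offset + i) // CHUNK)] = digest

def read(volume: str, block: int) -> bytes:
    return zlib.decompress(chunk_store[volume_map[(volume, block)]])

write("vm-boot-1", 0, b"identical guest OS blocks" * 400)
write("vm-boot-2", 0, b"identical guest OS blocks" * 400)
print(len(chunk_store), "unique chunks stored for two identical volumes")
```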

Of course, whether users will really get 10 TBytes of usable space out of the 3 TBytes of SSD in a node depends on how well their data deduplicates and compresses. Some cloud provider data, like virtual machine system drives, will deduplicate very well, and MySQL databases compress pretty well, so SolidFire’s projection might be on the money for some shops. SolidFire will distribute a free data scanner that prospective customers can run against their data to see how well SolidFire’s techniques will reduce it.
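
Working backward from those numbers, the implied target is roughly a 3.3-to-1 combined reduction ratio. The per-workload ratios in the sketch below are my own illustrative guesses, not SolidFire’s or anyone’s measured figures:

```python
# 10 TBytes usable from 3 TBytes of raw SSD implies roughly a 3.3:1
# combined dedup + compression ratio.
raw_tbytes = 3.0
claimed_usable_tbytes = 10.0
print(f"Implied reduction ratio: {claimed_usable_tbytes / raw_tbytes:.1f}:1")

# Illustrative guesses only -- not measured or vendor-supplied ratios.
sample_ratios = {
    "VM system drives": 5.0,
    "MySQL databases": 2.5,
    "already-compressed media": 1.0,
}
for workload, ratio in sample_ratios.items():
    print(f"{workload:>25}: {raw_tbytes * ratio:5.1f} TBytes effective per node")
```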
