Early solid state deployments amounted to little more than brute force replacements of mechanical disks with solid state ones--that is, SSDs: those now-widespread kludges that take a handful of memory chips no more than a millimeter or two thick and package them in a comparatively hulking 2.5-inch hard disk form factor. SSDs are an understandable evolution since they fit right into existing systems, requiring no mechanical or electrical changes, but they are far from an optimal use of semiconductor technology. Another popular option--server-side flash cards--still hews to an existing electro-mechanical standard, in this case PCIe, but in a somewhat more space-efficient form factor and with a much higher-performing interface.
But in the age of virtual servers, server-side flash, which is inherently tied to specific physical systems, doesn't fit the canonical VM storage architecture of consolidated, networked systems feeding entire server farms. Enter all-flash memory arrays. But here, too, the HDD legacy is hard to shake, since virtually all of them--even products from pure flash players like Nimbus, SolidFire and XtremIO--use SSDs. However, a few innovative companies such as Violin Memory and Skyera buck the trend with custom DRAM- and flash-optimized designs that achieve extreme performance and density.
Violin's new 6264 array was the prime attraction at its VMworld booth. The hardware is sterling, packing 64 Tbytes of raw flash capacity in a 3U package--density made possible by using the latest 19nm NAND chips on four rows of custom-designed memory modules stacked 16 wide. The system delivers 1 million IOPS with sub-100 microsecond latency, with no single points of failure thanks to a proprietary data striping mechanism (vRAID), multiple controller modules, dual memory gateways and up to four modular I/O interfaces. Indeed, the system uses pluggable I/O cards that can be configured as 8 x 8 Gbps Fibre Channel, 8 x 10 Gigabit Ethernet, 8 x 40 Gbps InfiniBand or 4 x PCIe (Gen2, eight lanes each).
[Read how a VMworld panel exposed a rift between startups focused on cloud and mobile technologies and old IT that clings to its data centers in "The IT Generation Gap."]
At just over 20 Tbytes per rack unit, the Violin system is impressive. However, Skyera takes solid state density and component packing to an entirely new level. The company, which emerged from stealth mode in June 2012, completely ditches the notion of SSDs, or even swappable memory modules like Violin's design. Instead, it builds the 1U equivalent of a networked SSD. Prior to VMworld, Skyera released its second-generation skyEagle product that sports 250 Tbytes--on its way to 500 Tbytes--of all-flash capacity and either 16 10 Gigabit Ethernet ports or 16 Gbps Fibre Channel interfaces supporting a SAN/NAS storage protocol stew of FC, iSCSI, NFS and CIFS. All this is priced at under $2 per gigabyte, which still adds up when you're talking terabytes.
But usable capacity is actually much higher. As my colleague Howard Marks points out, "All Skyera systems use data reduction technology to extend the life of the flash by reducing writes and increasing capacity. For a typical application mix, users will get well over a petabyte of useable capacity from the 500-Tbyte model--even after the overhead for data protection."
A 2:1 compression ratio turns it into a dollar-per-gigabyte system, meaning an entry-level skyEagle provides about 120 Tbytes of effective solid state capacity for about $60,000. But as a pure solid state array, it also delivers millions of IOPS; Skyera claims 5 million, but even accounting for vendor hyperbole, that's several times what Violin delivers and up to an order of magnitude faster than something from Pure Storage.
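The pricing claims come down to simple arithmetic: data reduction multiplies usable capacity and divides the effective cost per gigabyte. A minimal sketch--the function names are mine, and the 2:1 ratio is simply the figure quoted above (real reduction ratios vary by workload):

```python
def effective_capacity_tb(raw_tb, reduction_ratio):
    """Usable capacity after compression/deduplication."""
    return raw_tb * reduction_ratio

def effective_price_per_gb(raw_price_per_gb, reduction_ratio):
    """Cost per gigabyte actually stored, after data reduction."""
    return raw_price_per_gb / reduction_ratio

# $2/GB raw hardware with 2:1 reduction lands at $1/GB effective ...
print(effective_price_per_gb(2.0, 2.0))   # 1.0
# ... and 250 Tbytes raw doubles to 500 Tbytes of usable capacity.
print(effective_capacity_tb(250, 2.0))    # 500.0
```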
Pushing the storage performance envelope using high-end boxes designed to slot into a typical centralized storage pool is still expensive, even at Skyera's $1 to $2 per gigabyte price point. A more efficient option for boosting storage performance, at least for network file shares, takes a different approach: Decouple capacity from I/O throughput. Avere has developed just such an architecture for NAS performance optimization using a time-honored technique--caching--with a few innovative additions. The company's so-called "edge filers" are essentially scale-out hybrid appliances combining DRAM with either flash or disk (low-end devices use HDDs; high-end systems use SSDs) that sit between storage consumers--applications and end users--and NAS filers.
The boxes include up to 144 Gbytes of DRAM (with 2 Gbytes of NVRAM to protect data still unwritten to disk in the case of a power failure) and a maximum of 3 Tbytes of flash, along with two 10-Gbit and six 1-Gbit Ethernet ports. The system acts as a read/write cache, with hot data going to RAM and the flash (or HDD) holding warm, less frequently accessed information. All changes to data within the Avere storage tier are written back to the NAS shares. The result is a two-tiered storage architecture in which storage performance and capacity scale independently.
Need more space? Scale out your NAS box or add an entire new system. Need faster performance or a larger hot working set? Add another Avere appliance.
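The read/write caching behavior described above can be sketched as a toy write-back cache: a fast LRU tier in front of a slow backing store. This illustrates the general technique, not Avere's implementation--all class and method names are invented, and the explicit flush() stands in for the asynchronous write-back a real appliance performs:

```python
from collections import OrderedDict

class WriteBackCache:
    """Toy two-tier store: a fast LRU tier (standing in for DRAM/flash)
    in front of a slow backing dict (standing in for the NAS filer)."""

    def __init__(self, capacity, backing):
        self.capacity = capacity
        self.backing = backing          # the "core filer"
        self.cache = OrderedDict()      # the "edge" tier, in LRU order
        self.dirty = set()              # keys not yet written back

    def read(self, key):
        if key in self.cache:           # hit: serve from the fast tier
            self.cache.move_to_end(key)
            return self.cache[key]
        value = self.backing[key]       # miss: fetch from filer, then cache
        self._install(key, value)
        return value

    def write(self, key, value):
        self._install(key, value)       # writes land in the fast tier
        self.dirty.add(key)             # ... and are deferred to the filer

    def flush(self):
        for key in self.dirty:          # write-back: push dirty data out
            self.backing[key] = self.cache[key]
        self.dirty.clear()

    def _install(self, key, value):
        self.cache[key] = value
        self.cache.move_to_end(key)
        while len(self.cache) > self.capacity:
            old, val = self.cache.popitem(last=False)  # evict LRU entry
            if old in self.dirty:       # never drop an unflushed write
                self.backing[old] = val
                self.dirty.discard(old)
```

Because writes are absorbed by the fast tier and flushed to the filer later, the filer sees smoothed, deferred traffic while clients see cache-speed acknowledgments.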
Avere co-founder and CEO Ron Bianchini says that the company often sees 98% hit rates (49 out of 50 read requests are in the Avere cache), meaning that, properly sized, the entire storage system starts to look like one big solid state disk. Sizing is another area where Avere offers great flexibility. Using a distributed file system (many of Avere's core technology team worked on the original Andrew File System at CMU) allows the cache cluster to scale up to 50 nodes, providing an aggregate of millions of IOPS and tens of GB/sec of throughput.
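A high hit rate is what makes the tier "look like one big solid state disk": the average access time is the hit-rate-weighted blend of cache and filer latencies. A quick sketch, where the 100-microsecond cache and 10-millisecond filer figures are illustrative assumptions rather than Avere's published numbers:

```python
def mean_access_time_us(hit_rate, cache_us, filer_us):
    """Average access time with a cache in front of a slower filer."""
    return hit_rate * cache_us + (1 - hit_rate) * filer_us

# At a 98% hit rate, a 100-microsecond cache in front of a 10 ms NAS
# filer averages roughly 0.3 ms per request--nearly solid state speed.
print(round(mean_access_time_us(0.98, 100, 10_000)))  # 298
```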
Its file system also supports a global namespace across heterogeneous NAS systems--meaning multiple, or even dozens, of NAS shares can be exposed as a single mount point. Thus, for file access, Avere provides a means of storage virtualization. The combination of caching acceleration and a single namespace makes Avere appliances ideal for remote offices, giving users blazingly fast performance while virtually eliminating IT administrative overhead since the core NAS filers and access controls are all on central systems.
While software-defined storage--namely, the ability to abstract physical storage resources, whether NAS shares, block devices or physical LUNs, into logical storage pools--is key to building cloud services, equally important is optimizing storage performance for applications and users that increasingly demand both speed and capacity. Whether through hybrid storage arrays like those from Dell EqualLogic, Nexsan or Nimble; tiered architectures like Avere's solid state caching appliances; or, for raw performance, storage systems designed and optimized for flash like those from Skyera and Violin, the tools for vastly improving storage performance while still keeping up with rapidly growing capacity needs have never been better or more diverse.
Interested in how flash-based SSDs work and the various ways they can be deployed? Check out Howard Marks' session "SSDs In The Data Center" at Interop New York this October.