Kurt Marko

Contributing Editor



Finding New Ways To Boost Storage Performance

Despite the perpetual need for extra capacity--a consequence of incessant growth of the data enterprises produce and manage--the real innovation in storage is happening along other technology vectors: virtualization, automation, data management and performance. And when talking performance, that means flash. Although solid state technology has seeped into storage systems large and small, it's used more often as a hard drive replacement than as a building block for innovative architectures. Yet several storage products at last week's VMworld displayed refreshing, out-of-the-box thinking.

Early solid state deployments amounted to little more than brute-force replacements of mechanical disks with solid state ones--that is, SSDs: those now-widespread kludges that take a bunch of memory chips no more than a millimeter or two thick and package them in a comparatively hulking 2.5-inch hard disk form factor. SSDs are an understandable evolution since they fit right into existing systems, requiring no mechanical or electrical changes, but they are far from an optimal use of semiconductor technology. Another popular option--server-side flash cards--still hews to an existing electro-mechanical standard, in this case PCIe, but in a somewhat more space-efficient form factor and with a much higher-performing interface.


But in the age of virtual servers, server-side flash, which is inherently tied to specific physical systems, doesn't fit with the canonical VM storage architecture, which uses consolidated, networked systems to feed entire server farms. Enter all-flash memory arrays. But here, too, the HDD legacy is hard to shake, since virtually all of them--even products from pure flash players like Nimbus, SolidFire and XtremIO--use SSDs. However, a few innovative companies such as Violin Memory and Skyera buck the trend with custom DRAM- and flash-optimized designs that achieve extreme performance and density.

Violin's new 6264 array was the prime attraction at its VMworld booth. The hardware is sterling, packing 64 Tbytes of raw flash capacity into a 3U package--density made possible by using the latest 19nm NAND chips on four rows of custom-designed memory modules stacked 16 wide. The system delivers 1 million IOPS with sub-100-microsecond latency and no single point of failure, using a proprietary data striping mechanism (vRAID), multiple controller modules, dual memory gateways and up to four modular I/O interfaces. Indeed, the system uses pluggable I/O cards that can be configured as eight 8-Gbps Fibre Channel, eight 10 Gigabit Ethernet, eight 40-Gbps InfiniBand or four PCIe (Gen2, eight lanes each) ports.

[Read how a VMworld panel exposed a rift between startups focused on cloud and mobile technologies and old IT that clings to its data centers in "The IT Generation Gap."]

At just over 20 Tbytes per rack unit, the Violin system is impressive. However, Skyera takes solid state density and component packing to an entirely new level. The company, which emerged from stealth mode in June 2012, completely ditches the notion of SSDs, or even swappable memory modules like Violin's design. Instead, it builds the 1U equivalent of a networked SSD. Prior to VMworld, Skyera released its second-generation skyEagle product, which sports 250 Tbytes--on its way to 500 Tbytes--of all-flash capacity and either sixteen 10 Gigabit Ethernet or 16-Gbps Fibre Channel interfaces supporting a SAN/NAS storage protocol stew of FC, iSCSI, NFS and CIFS. All this is priced at under $2 per gigabyte, which still adds up when you're talking terabytes.
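The density gap between the two designs is simple arithmetic on the raw capacities and rack units quoted above:

```python
# Raw flash density per rack unit, from the vendors' quoted figures.
violin_tb_per_u = 64 / 3    # Violin 6264: 64 Tbytes raw in a 3U package
skyera_tb_per_u = 250 / 1   # skyEagle: 250 Tbytes raw in a 1U package

print(round(violin_tb_per_u, 1))                    # 21.3 Tbytes per U
print(round(skyera_tb_per_u / violin_tb_per_u, 1))  # ~11.7x denser
```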

But usable capacity is actually much higher. As my colleague Howard Marks points out, "All Skyera systems use data reduction technology to extend the life of the flash by reducing writes and increasing capacity. For a typical application mix, users will get well over a petabyte of useable capacity from the 500-Tbyte model--even after the overhead for data protection."
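Marks' petabyte estimate is easy to sanity-check. The reduction ratio and protection overhead below are illustrative assumptions for a "typical application mix," not published Skyera figures:

```python
raw_tb = 500                 # skyEagle raw flash capacity (Tbytes)
reduction_ratio = 2.5        # assumed dedupe/compression ratio (illustrative)
protection_overhead = 0.10   # assumed fraction consumed by data protection

# Effective capacity after data reduction, minus protection overhead.
usable_tb = raw_tb * reduction_ratio * (1 - protection_overhead)
print(usable_tb)             # 1125.0 Tbytes -- "well over a petabyte"
```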

Factor in 2:1 compression and it becomes roughly a dollar-per-gigabyte system, meaning an entry-level skyEagle provides about 120 Tbytes of effective solid state capacity for about $60,000. And as a pure solid state array, it also delivers millions of IOPS; Skyera claims 5 million, but even accounting for vendor hyperbole, that's several times what Violin delivers and up to an order of magnitude faster than something from Pure Storage.

Pushing the storage performance envelope with high-end boxes designed to slot into a typical centralized storage pool is still expensive, even at Skyera's $1 to $2 per gigabyte price point. A more efficient option for boosting storage performance, at least for network file shares, takes a different approach: Decouple capacity from I/O throughput. Avere has developed just such an architecture for NAS performance optimization using a time-honored technique--caching--with a few innovative additions. The company's so-called "edge filers" are essentially scale-out hybrid appliances that pair DRAM with either hard disks (low-end models) or flash SSDs (high-end models) and sit between storage consumers--applications and end users--and NAS filers.

The boxes include up to 144 Gbytes of DRAM (with 2 Gbytes of NVRAM to protect data not yet written to disk in case of a power failure) and a maximum of 3 Tbytes of flash, along with two 10-Gbit and six 1-Gbit Ethernet ports. The system acts as a read/write cache, with hot data going to RAM and the flash (or HDD) holding warm, less frequently accessed information. All changes to data within the Avere storage tier are written back to the NAS shares. The result is a two-tiered storage architecture that allows storage performance and capacity to scale independently.
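The tiering behavior can be sketched in a few lines. This is a toy model, not Avere's implementation: the capacities and the LRU promotion/demotion policy are illustrative assumptions, and the backing dict stands in for the NAS filer.

```python
from collections import OrderedDict

class TieredCache:
    """Toy two-tier (RAM + flash) read/write cache that writes
    changes back to a slower backing store (the NAS filer)."""

    def __init__(self, backing, ram_slots=2, flash_slots=4):
        self.backing = backing       # stands in for the NAS share
        self.ram = OrderedDict()     # hot tier
        self.flash = OrderedDict()   # warm tier
        self.ram_slots = ram_slots
        self.flash_slots = flash_slots

    def _promote(self, key, value):
        """Place a block in RAM, demoting the coldest RAM block to flash."""
        self.ram[key] = value
        self.ram.move_to_end(key)
        if len(self.ram) > self.ram_slots:
            cold_key, cold_val = self.ram.popitem(last=False)
            self.flash[cold_key] = cold_val
            if len(self.flash) > self.flash_slots:
                self.flash.popitem(last=False)   # evict coldest flash block

    def read(self, key):
        if key in self.ram:                      # RAM hit
            self.ram.move_to_end(key)
            return self.ram[key]
        if key in self.flash:                    # flash hit: promote to RAM
            value = self.flash.pop(key)
            self._promote(key, value)
            return value
        value = self.backing[key]                # miss: fetch from the filer
        self._promote(key, value)
        return value

    def write(self, key, value):
        self._promote(key, value)
        self.backing[key] = value                # change written back to NAS
```

Reads are served from RAM when possible, fall back to flash and then to the filer, and writes land in the cache while still propagating to the backing NAS share, matching the write-back behavior described above.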

Need more space? Scale out your NAS box or add an entire new system. Need faster performance or a larger hot working set? Add another Avere appliance.

Avere co-founder and CEO Ron Bianchini says that the company often sees 98% hit rates (49 out of 50 read requests are served from the Avere cache), meaning that, properly sized, the entire storage system starts to look like one big solid state disk. Sizing is another area where Avere offers great flexibility. Using a distributed file system (many of Avere's core technology team worked on the original Andrew File System at CMU) allows the cache cluster to scale up to 50 nodes, providing millions of aggregate IOPS and tens of Gbytes per second of throughput.
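The payoff of a 98% hit rate falls out of the standard effective-access-time formula. The two latency figures below are illustrative assumptions, not Avere or filer specs:

```python
hit_rate = 0.98          # cache hit rate Avere says it commonly observes
cache_latency_ms = 0.2   # assumed latency of a cache hit (illustrative)
filer_latency_ms = 5.0   # assumed latency of a backing NAS access (illustrative)

# Weighted average of hit and miss latencies.
effective_ms = hit_rate * cache_latency_ms + (1 - hit_rate) * filer_latency_ms
print(round(effective_ms, 3))   # 0.296 ms -- close to the cache itself
```

Even with a backing filer 25 times slower than the cache, the blended latency lands within 50% of the cache's own, which is why the whole pool "starts to look like one big solid state disk."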

Its file system also supports a global namespace across heterogeneous NAS systems--meaning multiple, or even dozens, of NAS shares can be exposed as a single mount point. Thus, for file access, Avere provides a means of storage virtualization. The combination of caching acceleration and a single namespace makes Avere appliances ideal for remote offices, giving users blazingly fast performance while virtually eliminating IT administrative overhead since the core NAS filers and access controls are all on central systems.
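A global namespace of this kind boils down to routing paths under one mount point to the right backing export. The sketch below illustrates the idea with made-up filer names and paths; it is not Avere's mechanism:

```python
# Hypothetical mapping from the single client-visible tree to backing
# NAS exports; names and paths are invented for illustration.
EXPORT_MAP = {
    "/global/home":     "filer1:/vol/home",
    "/global/projects": "filer2:/vol/projects",
    "/global/scratch":  "filer3:/vol/scratch",
}

def resolve(path):
    """Map a path under the single mount point to its backing export."""
    # Try the longest prefixes first so nested mappings win.
    for prefix in sorted(EXPORT_MAP, key=len, reverse=True):
        if path == prefix or path.startswith(prefix + "/"):
            return EXPORT_MAP[prefix] + path[len(prefix):]
    raise FileNotFoundError(path)

print(resolve("/global/home/alice"))   # filer1:/vol/home/alice
```

Clients see one tree; which filer actually holds the data becomes an administrative detail that can change without touching the clients, which is the storage-virtualization point made above.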

While software-defined storage--namely, the ability to abstract physical storage resources, whether NAS shares, block devices or physical LUNs, into logical storage pools--is key to building cloud services, equally important is optimizing storage performance for applications and users that increasingly demand both speed and capacity. Whether through hybrid storage arrays like those from Dell EqualLogic, Nexsan or Nimble; tiered architectures like Avere's solid state caching appliances; or, for raw performance, storage systems designed and optimized for flash like those from Skyera and Violin, the tools for vastly improving storage performance while still keeping up with rapidly growing capacity needs have never been better or more diverse.

Interested in how flash-based SSDs work and the various ways they can be deployed? Check out Howard Marks' session "SSDs In The Data Center" at Interop New York this October.

Kurt Marko is an IT pro with broad experience, from chip design to IT systems.

