David Hill

Network Computing Blogger


EMC VFCache: Project Lightning Strikes

EMC's recent announcement marked the culmination of the project code-named Lightning: the new VFCache solution, a server-based flash cache that can be used as a complement or an alternative to flash storage that presents itself as a disk drive. This lightning strikes twice, though not in the same spot. The first strike is dramatically improved I/O performance for customers; the second is the challenge VFCache poses to competitors trying to distinguish their own flash storage solutions.
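To make the server-based flash cache idea concrete, here is a minimal illustrative sketch, not EMC's implementation: reads are served from a local cache when possible and fall back to the storage array on a miss, while writes go through to the array so it remains the authoritative copy. The `backend` dict, block names, and LRU eviction policy are all my assumptions for illustration.

```python
# Illustrative server-side read cache with write-through semantics.
# The dict `backend` stands in for the array; real caches work on blocks
# of flash, not Python objects.
from collections import OrderedDict

class ServerFlashCache:
    def __init__(self, backend, capacity):
        self.backend = backend      # stand-in for the storage array
        self.capacity = capacity    # cache size in blocks
        self.cache = OrderedDict()  # LRU order: oldest entry first

    def read(self, block):
        if block in self.cache:             # hit: fast local "flash" read
            self.cache.move_to_end(block)
            return self.cache[block]
        data = self.backend[block]          # miss: fetch from the array
        self._insert(block, data)
        return data

    def write(self, block, data):
        self.backend[block] = data          # write-through: array stays authoritative
        self._insert(block, data)

    def _insert(self, block, data):
        self.cache[block] = data
        self.cache.move_to_end(block)
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least-recently-used block

array = {"b1": "data1"}  # toy stand-in for array-backed storage
cache = ServerFlashCache(array, capacity=2)
```

The write-through choice here is deliberate: because the array always holds the current data, losing the server-side cache costs performance, not correctness.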

In old radio serials, episodes would start with a short recapitulation of "what has taken place so far" so that the listener would have the context to understand the latest episode. Let's apply that to flash storage.

The solid state disk (SSD) market, notably flash storage, has been a gold rush for startups as well as large vendors for some time now. The driver behind the increased use of SSD is what is called the I/O performance gap or bottleneck. As EMC pointed out in a recent analyst briefing, CPU performance improves 100 times each decade, while HDD performance has remained essentially flat (the rotational speed of the fastest drives hasn't changed in years and is unlikely to).

But consider that while CPU performance from 2000 to 2010 increased by 100 times, by 2020 chips will deliver 10,000 times the performance of their 2000 counterparts. A storage device's inability to process CPU-generated I/Os fast enough (i.e. the I/O bottleneck) can be a significant problem in many cases today, but obviously is on its way to becoming more or less universally intolerable.
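The arithmetic behind that claim is simple compounding. A quick sketch, using the article's premise of 100x CPU improvement per decade against flat HDD performance (the 2000 baseline of 1.0 is an arbitrary normalization, not a real benchmark figure):

```python
# Back-of-the-envelope model of the widening CPU-vs-HDD I/O gap.
# Assumes CPU performance improves 100x per decade while HDD random-I/O
# performance stays flat, per the article's premise.

def cpu_relative_performance(year, base_year=2000, per_decade=100):
    """Relative CPU performance versus the base year."""
    decades = (year - base_year) / 10
    return per_decade ** decades

for year in (2000, 2010, 2020):
    gap = cpu_relative_performance(year)  # HDD stays at 1x, so the gap equals the CPU factor
    print(f"{year}: CPU at {gap:,.0f}x baseline, HDD at ~1x")
```

Two decades at 100x per decade gives 100 x 100 = 10,000x, which is where the 2020 figure comes from.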

Ta da! (Sound the trumpets). Enter flash memory stage right with the potential of improving storage I/O performance by at least two orders of magnitude. How so? In large part because flash has none of the mechanical parts that inherently limit HDD performance. Is it any wonder that there is an SSD vendor gold rush on?

EMC was the first enterprise vendor to introduce flash in enterprise storage arrays, in 2008, with SSDs that appeared to the OS and application on the server, and to the controller on the storage array, as if they were simply disk drives. What was still missing was the ability for enterprises to use flash as a tier of storage (tier 0), where only the most active (i.e., hottest) data would be kept in flash and less active data would be kept on another tier (such as tier 1 FC/SAS hard disks).

In 2009, EMC introduced software to do just this: FAST (Fully Automated Storage Tiering). This enables more effective use of the SSD tier and other tiers of storage not only from a performance perspective, but also from an economic perspective, as the relatively more expensive SSD storage holds only performance-sensitive data, which is typically a small subset of all data stored.
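The core tiering idea can be sketched in a few lines: rank data by activity and promote only the hottest items to the small, expensive flash tier. This is a toy illustration of the concept, not EMC's actual FAST algorithm; the block IDs, access counts, and capacity are invented for the example.

```python
# Toy automated-tiering sketch: the hottest blocks go to tier 0 (flash),
# everything else stays on tier 1 (spinning disk).

def assign_tiers(access_counts, flash_capacity):
    """Given {block_id: access_count}, return (flash_blocks, disk_blocks)."""
    ranked = sorted(access_counts, key=access_counts.get, reverse=True)
    flash = set(ranked[:flash_capacity])   # hottest blocks fit in flash
    disk = set(ranked[flash_capacity:])    # the long, cold tail stays on HDD
    return flash, disk

counts = {"a": 900, "b": 15, "c": 450, "d": 3, "e": 120}
flash, disk = assign_tiers(counts, flash_capacity=2)
```

With a flash tier that holds only two of the five blocks, the two most-accessed blocks ("a" and "c") are promoted, which is exactly the economic argument above: a small amount of SSD absorbs most of the hot I/O.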

