Storage Pipeline: Review: Out of the Abyss

Enter storage-resource management (SRM), an approach Legato (now a division of
EMC), Tek-Tools, Veritas and several other vendors tout with their various
suites and frameworks. SRM comprises a collection of software tools for
discovering where storage is located and how its capacity is being used. The
more sophisticated products monitor storage hardware capacity utilization and
other storage-related software processes, such as data replication, volume
management and backup, to provide alerts about potential problems in near real
time.
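
To make the idea concrete, here's a rough sketch in Python of the kind of capacity check at the heart of an SRM tool. The volume names, figures and alert threshold are our own invention for illustration, not any vendor's actual interface.

    # Sketch of an SRM-style capacity scan. All names and numbers are
    # hypothetical illustrations, not a real vendor API.
    from dataclasses import dataclass

    @dataclass
    class Volume:
        name: str
        capacity_gb: float
        used_gb: float

        @property
        def utilization(self) -> float:
            return self.used_gb / self.capacity_gb

    ALERT_THRESHOLD = 0.85  # warn well before capacity runs out

    def scan(volumes):
        # Report utilization and flag volumes nearing capacity.
        for vol in volumes:
            status = "ALERT" if vol.utilization >= ALERT_THRESHOLD else "OK"
            print(f"{status}: {vol.name} at {vol.utilization:.0%} of capacity")

    scan([Volume("db-data", 500, 460), Volume("home-dirs", 1000, 420)])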

SRM Timeline

At the turn of the century, SRM was touted as a silver bullet: software that
would let fewer administrators oversee more storage and give enough warning
that problems could be fixed before they caused downtime. By 2001, however,
SRM came under attack for being too platform-focused
to address the real goals of storage management. Critics said you couldn't
manage storage by taking the temperature of various infrastructure components;
you needed to look at what effect storage was having on application performance.
BMC Software spearheaded efforts to deliver an application-centric, policy-based
approach to storage management that could drive labor costs out of storage
investments.

Many SRM vendors seized upon this idea to create complex products typified by
an automated, application-aware storage-management software stack. These stacks
comprised a policy engine riding atop a storage-virtualization layer (virtual
volume managers for block storage and global namespace products for file
systems) that automated certain tasks, such as growing virtual volumes to meet
burgeoning application data storage requirements, and improved the productivity
of individual storage managers. Work continues in this area as many brand-name management-software
vendors begin porting their technology into storage-switching platforms or
multifunction storage-management appliances.
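
The policy-engine pattern itself is simple to illustrate. Here is a minimal sketch, again with invented names and thresholds, of an auto-grow rule of the kind these stacks apply to virtual volumes:

    # Hypothetical auto-grow rule of the kind a policy engine might apply.
    from dataclasses import dataclass

    @dataclass
    class VirtualVolume:
        name: str
        size_gb: float
        used_gb: float

    def apply_growth_policy(vol, trigger=0.90, grow_factor=1.5):
        # Expand the volume before the application runs out of space.
        if vol.used_gb / vol.size_gb >= trigger:
            new_size = vol.size_gb * grow_factor
            print(f"Growing {vol.name}: {vol.size_gb:.0f} GB -> {new_size:.0f} GB")
            # In a real stack this would be a call into the virtualization layer.
            vol.size_gb = new_size

    apply_growth_policy(VirtualVolume("erp-data", 200, 185))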

But the story doesn't end there. Over the past year, vendors have started
chanting yet another mantra: data life-cycle management. In this approach,
information about applications and storage platform capabilities and costs is
used to create a knowledge base that lets policy engines create rules for
automating the smooth migration of data across different storage devices.
Instead of focusing solely on the application or the storage infrastructure,
this model makes data itself the centerpiece.
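
A toy version of such a migration rule, with tier names, costs and thresholds invented purely for illustration, might look like this:

    # Hypothetical life-cycle rule: pick a storage tier from usage and
    # retention characteristics plus a small knowledge base of tier costs.
    from dataclasses import dataclass

    TIER_COST = {"fast-disk": 1.00, "sata": 0.30, "tape": 0.05}  # $/GB/month, invented

    @dataclass
    class DataSet:
        path: str
        days_since_access: int
        retention_days: int

    def choose_tier(ds):
        # Hot data stays on premium storage; cold data drifts to cheap media.
        if ds.days_since_access < 30:
            return "fast-disk"
        if ds.days_since_access < 180:
            return "sata"
        return "tape"

    ds = DataSet("/db/orders-2002", days_since_access=400, retention_days=2555)
    print(f"Migrate {ds.path} to {choose_tier(ds)} (${TIER_COST[choose_tier(ds)]}/GB/month)")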

According to EMC's Legato and others embracing this vision, the data
life-cycle management revolution will be more effective at automating management
and reducing costs than any hardware or software innovation to date. It will
create the optimal storage-utility infrastructure, in which capacity is
allocated dynamically to applications that need it and data moves automatically
from platform to platform based on its usage characteristics, retention
requirements, platform costs and other factors.

But there are limitations in the life-cycle management products now emerging.
One of the most nagging is the absence of an open standard for equipping data
with a self-descriptive header that would identify its requirements or
originating application. This header would help automated management tools move
the data around. Currently, all the approaches for data self-description are
proprietary and limited. However, work is under way at both NASA's Jet
Propulsion Laboratory and the International Organization for Standardization
(ISO) to create a standardized naming convention (see
www.iso.org/iso/en/commcentre/pdf/Data0001.pdf).
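
To see what such a header might carry, consider this sketch. Every field here is hypothetical, since no open standard for self-describing data exists:

    # Hypothetical self-descriptive data header; no such open standard exists.
    from dataclasses import dataclass, asdict
    import json

    @dataclass
    class DataHeader:
        originating_app: str    # which application created the data
        retention_days: int     # how long the data must be kept
        performance_class: str  # e.g. "low-latency" or "archival"

    header = DataHeader("order-entry", 2555, "low-latency")
    # Serialized alongside the payload so management tools can act on it:
    print(json.dumps(asdict(header)))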
