David Hill

Network Computing Blogger



IBM Pulse 2012: A New Storage Hypervisor

The problem is that trying to predict future storage use on an application-by-application basis is likely to fail and to leave a lot of unused space. For example, say Application A “rented” a two-bedroom storage “apartment” but only used one bedroom; that leaves a lot of wasted space. Application B, on the other hand, is running out of space in the one-bedroom storage apartment it once thought was more than big enough. Reprovisioning in a typical storage environment is hard. Using an approach called “thin provisioning,” the storage hypervisor offers applications virtual storage “apartments” that provide as much storage as they want, within reason and as long as the overall physical storage is not exceeded. Note that an analogous process occurs with virtual machines on a physical server.
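
To make the thin-provisioning idea concrete, here is a minimal Python sketch (all names are hypothetical, invented for this post, not IBM code): a shared pool hands out physical blocks only when a virtual volume actually writes, so Application A's oversized "apartment" consumes only what it uses.

    # Hypothetical thin-provisioning sketch: virtual volumes can advertise
    # a large size, but physical blocks are allocated from a shared pool
    # only on first write.
    class ThinPool:
        def __init__(self, physical_blocks):
            self.free = physical_blocks      # blocks actually on disk
            self.mappings = {}               # (volume, virtual block) -> physical block

        def write(self, volume, virtual_block):
            key = (volume, virtual_block)
            if key not in self.mappings:     # first touch: allocate on demand
                if self.free == 0:
                    raise RuntimeError("physical pool exhausted")
                self.free -= 1
                self.mappings[key] = len(self.mappings)  # stand-in block address
            # a real implementation would now write data to the mapped block

    pool = ThinPool(physical_blocks=1000)
    # Application A's "two-bedroom apartment" has written only 3 blocks so far,
    # so it consumes just 3 physical blocks of the shared pool.
    for blk in range(3):
        pool.write("app_a_vol", blk)
    print(pool.free)  # 997 -- capacity the other tenants can still use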

The storage hypervisor also decouples storage services from the underlying physical media, i.e., disks. What does that mean? A SAN storage system derives its value from both its hardware and its software. The hardware provides higher availability (for example, no single point of failure) and performance (for example, a sophisticated controller cache), differentiating it from, say, JBODs. The software adds value in the form of what are called storage services, such as remote mirroring or replication capabilities and various forms of snapshots.

These capabilities are essential for many tasks, including such data-protection activities as local backup and remote disaster recovery. Traditionally, however, they are tied to physical media, such as particular disk LUNs (logical unit numbers), and when an application needs to add LUNs or change existing ones, the process may not be easy. A storage hypervisor essentially changes the paradigm from media-centric storage services, limited to the characteristics of physical disks, tapes and arrays, to data-centric storage services, designed to meet the often rapidly changing requirements of applications and information.

That means that in hypervisor-enabled environments, storage services accompany the data, and data can easily be moved from one physical instantiation to another. Moreover, it is easier to apply different storage services to different sets of application data. For example, mission-critical data carries stricter high-availability (HA) requirements, met through processes such as remote mirroring, whereas year-old e-mails must be safely preserved for e-Discovery purposes but do not have to be instantly retrievable.
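
As an illustration of that data-centric idea, here is a hypothetical Python sketch (the policy names and fields are invented for this post) in which storage services are expressed as policies keyed to a class of data rather than to a physical LUN:

    # Hypothetical sketch: services travel with a class of data,
    # not with a particular physical LUN.
    STORAGE_POLICIES = {
        # mission-critical data: synchronous mirroring, frequent snapshots
        "mission_critical": {"remote_mirroring": "synchronous",
                             "snapshot_interval_min": 15},
        # archived e-mail kept for e-Discovery: safe, not instantly retrievable
        "archive_ediscovery": {"remote_mirroring": "asynchronous",
                               "snapshot_interval_min": 1440},
    }

    def services_for(data_class):
        """Return the storage services to apply to a given class of data."""
        return STORAGE_POLICIES[data_class]

    print(services_for("mission_critical")["remote_mirroring"])  # synchronous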

Life for an administrator using the storage hypervisor can thus be devoted to more value-added tasks: IBM SVC makes the storage virtualization happen, and IBM TPC makes both managing changes and managing at a more granular level significantly easier. The icing on the cake is that IBM's management process can address data on virtual volumes across multiple tiers of storage (tier 0 SSD flash, tier 1 FC/SAS and tier 2 SATA), across disparate storage systems (IBM XIV and DS8300 systems, but also storage arrays from other vendors) and from site to site (probably with some reasonable limitation on distance). Although no cloud is required, a storage hypervisor sounds like an essential ingredient of a private, public or hybrid cloud.

At IBM Pulse 2012, IBM customer Ricoh testified to the benefits, such as cost savings, it had garnered from IBM's storage hypervisor products. Most customers would strive for that kind of benefit, but IBM likes an example that makes non-disruptive data mobility (no downtime for applications and their data) even more dramatic than migrating data from one array to another during a product refresh. Say you have one site in the impending path of a major hurricane and another site safely outside the potential path of destruction, along with a server hypervisor (such as VMware vSphere for Intel servers or IBM PowerVM for Power servers) and the IBM storage hypervisor platform. With an SVC stretched cluster (part of the IBM storage hypervisor, in which SVC supports servers and storage at two geographically separate sites), the same data can be accessed at either site, and a VMware vMotion or IBM PowerVM Live Partition Mobility (LPM) move can be performed non-disruptively to end users. Can your sites do that?

But wait, there’s more. In moving to a cloud, a services catalog is essential so that users can easily select the services they need; that is what IT-as-a-service is all about. Implementing a storage hypervisor enables the development of a storage services catalog. IBM believes that each company has roughly 15 different data types (such as e-mail, database, word processing documents and video), each of which requires distinctive service levels across four dimensions: capacity efficiency, I/O performance, data access resilience and disaster protection.
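
A storage services catalog entry might be modeled along those four dimensions as in this hypothetical Python sketch (the data types come from the post; the field values are invented examples, not IBM's actual catalog):

    # Hypothetical storage services catalog: one entry per data type,
    # scored along the four dimensions named above.
    from dataclasses import dataclass

    @dataclass
    class CatalogEntry:
        data_type: str              # e.g. "e-mail", "database", "video"
        capacity_efficiency: str    # e.g. "thin provisioned + compressed"
        io_performance: str         # e.g. "tier 0 SSD" vs. "tier 2 SATA"
        access_resilience: str      # e.g. "stretched cluster, two sites"
        disaster_protection: str    # e.g. "async replication, 24h RPO"

    catalog = [
        CatalogEntry("database", "thin provisioned", "tier 0 SSD",
                     "stretched cluster", "synchronous mirroring"),
        CatalogEntry("e-mail archive", "thin + deduplicated", "tier 2 SATA",
                     "single site", "asynchronous replication"),
    ]

    # A user picking from the catalog just names the data type; the
    # hypervisor layer maps the entry onto concrete arrays and tiers.
    for entry in catalog:
        print(entry.data_type, "->", entry.io_performance)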

