David Hill

Network Computing Blogger



IBM Pulse 2012: A New Storage Hypervisor

IBM is promulgating a storage management concept that it calls a “storage hypervisor,” though the final product name has not been determined. The company claims the technology will deliver better storage utilization, and with it better storage cost economics, along with data mobility that enables flexibility such as non-disruptive storage refreshes, a trajectory that parallels the acceptance of server hypervisors and virtualization. But there are also broad implications for how storage will be deployed and managed under IBM’s hypervisor solutions and strategy that are worth your attention. Let’s see why.

Last week’s IBM Pulse 2012 conference in Las Vegas carried the imposing subtitle “Optimizing the World’s Infrastructure,” and the company attacked a broad range of both physical and digital infrastructure issues under the integrating concepts of Smarter Planet and Smarter Computing, now familiar at least to the IT world. But Pulse attendees wanted not only an overview of the big transformational trends in IT, but also the ability to drill down into their particular areas of expertise so that they could return home with a game plan or set of action items based on what they had learned. Rather than trying to cover the full breadth of Pulse 2012, I will concentrate on one of my areas of focus, storage management, to illustrate that type of specialization.

Storage management was one such area of particular attention for specialists, with emphasis on the storage hypervisor concept that IBM is working diligently to promote. All IT roads lead to storage: without data, processors and networks can do no useful work, and all data not in transit has to reside on some form of storage medium. Deploying and managing storage more efficiently and effectively is therefore critical, not only to today’s storage infrastructure operations, but also as a cornerstone of the move to a cloud that offers true IT-as-a-service.

Enter the storage hypervisor in general, and IBM’s storage hypervisor in particular. The term “storage hypervisor” is not yet generally accepted in the IT industry; only two smaller companies, DataCore and Virsto, in addition to IBM, seem to advocate it. Moreover, other terms, such as “virtual storage,” may be used instead for different approaches that yield essentially the same capabilities. Still, once you understand what it does, the term provides a good mental shorthand for what is happening, and should happen, to the underpinnings of a hypervisor-enabled storage infrastructure.

For simplicity’s sake, I’ll focus on what IBM is offering. Note that while the “storage hypervisor” may be a concept, IBM implements it through real products. The company sees the storage hypervisor as a combination of application software that performs the necessary storage virtualization functions and management software that provides a centralized, automated framework for all virtualized storage resources. The “actor” software underlying the whole thing is IBM’s System Storage SAN Volume Controller (SVC), and the “director” software is IBM Tivoli Storage Productivity Center (TPC). To this, IBM also adds IBM Tivoli Storage FlashCopy Manager, as it considers the special snapshot capability incorporated into the storage hypervisor an essential ingredient.
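To make that division of labor concrete, here is a minimal Python sketch of the idea. The class and method names are purely illustrative assumptions of mine, not IBM product APIs: one object stands in for the “actor” that maps and snapshots virtual volumes (the SVC/FlashCopy role), the other for the “director” that provides the centralized view (the TPC role).

```python
# Illustrative sketch only: these names are hypothetical, NOT IBM APIs.
# They model the "actor"/"director" split described above.

class VirtualizationEngine:
    """The 'actor' (analogous to SVC): sits in the data path, maps virtual
    volumes onto physical arrays, and can take point-in-time copies
    (the capability FlashCopy Manager coordinates in IBM's bundle)."""

    def __init__(self):
        self.volume_map = {}                # virtual volume -> backing arrays

    def map_volume(self, volume: str, backing_arrays: list[str]) -> None:
        self.volume_map[volume] = list(backing_arrays)

    def snapshot(self, volume: str) -> str:
        return f"snap:{volume}"             # stand-in for a real snapshot


class ManagementFramework:
    """The 'director' (analogous to TPC): stays out of the data path and
    offers one centralized, automated view over what the actor manages."""

    def __init__(self, engine: VirtualizationEngine):
        self.engine = engine

    def inventory(self) -> dict:
        return dict(self.engine.volume_map)


if __name__ == "__main__":
    actor = VirtualizationEngine()
    director = ManagementFramework(actor)
    actor.map_volume("erp_data", ["array_a", "array_b"])
    actor.snapshot("erp_data")
    print(director.inventory())            # the single point of reporting
```

The design point the sketch tries to capture is simply that the virtualization work and the management framework are separate pieces that only deliver the claimed benefits when operated together.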

Now, the first question many might ask is: Isn’t the storage hypervisor simply a re-bundling of existing IBM products? While the answer at first blush is yes, closer examination shows that the synergies this combination brings might not have emerged if the products were used individually. Moreover, putting the combination under the rubric of a storage hypervisor aids in understanding what it does, its benefits and the larger implications.

Obviously, the use of a storage hypervisor evokes the concept of the server hypervisor in the minds of CIOs and other IT professionals. The server hypervisor, and server virtualization with it, was once relegated to enterprise-class mainframe computing environments, but is now considered a “good thing” (albeit with some caveats, perhaps) in server systems of nearly every type. Although storage virtualization has been around for a long time, it has not received the same level of attention or success as its server-side counterpart. That has to change, because the falling cost of storage alone can no longer keep pace with explosive data growth under tight budgets. Thus, IBM’s storage hypervisor may provide a rallying point around which the next stage in storage infrastructure evolution can take place.

A storage hypervisor creates a single pool of managed storage that can span multiple storage arrays or even JBOD (just a bunch of disks) boxes. Virtualized storage (even within a single array) divides up SAN storage differently than the traditional method does. Traditionally, shared storage in a SAN means that each application is allocated a fixed portion of the available physical storage, based on a guess of what it will need over time.
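As a back-of-the-envelope illustration (the capacity figures below are hypothetical, not from IBM or this article), the contrast between guess-based, per-array allocation and a single spanning pool looks something like this:

```python
# Hypothetical numbers for illustration only.

arrays_gb = {"array_a": 10_000, "array_b": 8_000, "jbod_1": 4_000}

# Traditional SAN allocation: each application gets a fixed slice of one
# array up front, sized by a guess of future growth. Headroom left unused
# on one array cannot help an application that outgrows its slice elsewhere.
traditional = {
    "erp":     {"array": "array_a", "allocated_gb": 6_000, "used_gb": 2_500},
    "mail":    {"array": "array_b", "allocated_gb": 5_000, "used_gb": 1_000},
    "archive": {"array": "jbod_1",  "allocated_gb": 3_500, "used_gb": 3_400},
}
stranded = sum(v["allocated_gb"] - v["used_gb"] for v in traditional.values())

# Storage-hypervisor view: one pool spanning every array and JBOD, with
# capacity consumed only as data is written, so nothing is stranded per array.
pool_gb = sum(arrays_gb.values())
used_gb = sum(v["used_gb"] for v in traditional.values())

print(f"Traditional: {stranded} GB allocated but unused, stranded per array")
print(f"Pooled:      {pool_gb - used_gb} GB free and usable by any application")
```

The point of the comparison is that capacity stranded behind one application’s oversized allocation on one array becomes usable by any application once it sits in a common pool.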

