George Crump



A Pretty Good Disaster Recovery Plan

Symantec just released its 2010 State of the Data Center Survey. Respondents were asked to rate their disaster recovery plans, and only 12 percent rated theirs as excellent. Even adding in the 27 percent who called their plan "pretty good," more than half of respondents rated their plans below that bar. Still, the choice of "pretty good" struck me. Who wants to execute a recovery from a "pretty good" DR plan?

What makes a disaster recovery plan merely pretty good, or worse? My first guess is that there is simply too much data to deal with, so let's get rid of some of it. As we discussed in our article Archiving Basics, a solid archive plan should eliminate a large portion of the problem. It also removes much of the complexity that creeps into the backup process when backups are pressed into service for long-term data retention.

My second guess is that too many extra copies of data are being made. I have seen data centers keeping no fewer than six copies of their most critical data: they snapshot it, run internal application backups of it, back it up with a third-party but application-specific backup tool, and back it up again with an enterprise backup application, more than likely to a disk-based backup target that then makes its own copy of itself. And that is before counting all the replication going on: applications are replicating, storage is replicating and backup devices are replicating. Isn't this too much?

With all of these extra copies being made, it's no wonder that we are all running out of storage space, or at least struggling with how to manage it. I hope the hard drive suppliers come out with 8TB drives in a hurry and that the dedupe vendors uncover the secrets of quad-phased deduplication.
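Back-of-the-envelope arithmetic shows why this copy sprawl hurts. A minimal sketch, using assumed numbers (the 10TB dataset size is hypothetical, not from the survey; the copy counts follow the scenario described above):

```python
# Hypothetical illustration: estimate the raw capacity consumed when one
# dataset is protected the way the scenario above describes. All figures
# are assumptions for the sake of the arithmetic.

primary_tb = 10  # size of the critical data set, in TB (assumed)

# Local copies: snapshot, internal application backup, application-specific
# backup, enterprise backup, and the disk target's self-copy -- five extras.
local_copies = 5

# Replication roughly doubles each replicated tier: application-level,
# storage-level and backup-device-level replication each add a remote copy.
replicated_tiers = 3

total_tb = primary_tb * (1 + local_copies + replicated_tiers)
print(f"{total_tb} TB consumed to protect {primary_tb} TB of data")
# With these assumptions: 90 TB consumed to protect 10 TB of data
```

Nine times the primary footprint, before a single byte of growth, which is why the "just buy bigger drives" hope rings hollow.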

The answer is to stop. You really don't need another copy of your data. You need one copy that is real-time enough to meet your emergency recovery needs, and one that provides some minimal point-in-time granularity. Remember, long-term retention should be the sole domain of the archive. These copies should then be replicated off-site in case something goes wrong at the primary site. Ideally, one process can provide both; if not, they should be managed as part of a single overall backup workflow.
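The simplified scheme described above can be sketched as a small policy model. This is an illustrative sketch only; the field names and checks are assumptions, not any product's configuration:

```python
# Hypothetical model of the two-copy protection scheme described above:
# one real-time copy for emergency recovery, one point-in-time copy for
# rollback, both replicated off-site. Long-term retention is deliberately
# absent, because that belongs to the archive, not the backup process.

protection_plan = {
    "real_time": {
        "purpose": "emergency recovery with minimal data loss",
        "method": "synchronous or near-synchronous replica",
        "offsite_replica": True,
    },
    "point_in_time": {
        "purpose": "roll back corruption or accidental deletion",
        "method": "periodic snapshots with short retention",
        "offsite_replica": True,
    },
}

# Sanity-check the plan: exactly two copies, each replicated off-site.
assert len(protection_plan) == 2
for name, copy in protection_plan.items():
    assert copy["offsite_replica"], f"{name} copy must be replicated off-site"
print(f"{len(protection_plan)} protected copies; long-term retention stays in the archive")
```

The point of the model is what it leaves out: every copy that exists only because backups were doubling as an archive simply disappears from the plan.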

