As a backup administrator, do you completely understand all of the interrelationships between your critical multi-tiered applications? If you were asked to restore an entire server running a critical multi-tiered application, could you do it? How long would it take? Have you actually tested the process of restoring a critical application or database in your environment to validate that your backups will work when you need them most?
These are just a few of the questions that are sure to make most administrators cringe, because most can't honestly answer yes. The fact is, understanding all of your exposure points from a backup perspective in a complex environment is a major challenge for any one person. Perhaps more difficult still is finding the time and budget to build the lab infrastructure needed to properly validate your backups.
Backups aren't fun, but they are critical to your organization, and new mandates are being handed down from the CIO's office to backup administrators every day. Here is the blunt truth, the new rules of the game in no particular order:
- Of the top five ways to get fired immediately in IT, botching backups may well be number one.
- If you don't properly back up a critical application or update, don't expect your developers or the vendor to take the blame.
- If it takes you too long to restore an application, server, or other critical resource, you will be held responsible for a failed strategy.
It's never too late to deploy today's tools and change the way you do business in order to improve reliability and agility when it comes to your backup and restore strategy. To assist you, Network Computing has underwritten a Best Practices guide that illustrates ten operational and technology enhancements that you can make today.