In many ways, server virtualization has helped data protection. Servers are now encapsulated into a single file instead of thousands of files; capabilities like changed-block tracking have reduced the amount of data that needs to be moved to the backup target; and new features like recovery-in-place, or changed-block recovery, promise to dramatically reduce the time it takes to recover a server instance. But a gap in protection appears when you deal with the reality that most data centers are nowhere close to 100% virtualized.
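The bandwidth savings from changed-block tracking come from copying only the blocks that have changed since the last backup. A minimal sketch of the idea in Python -- in a real hypervisor the change map is maintained as writes happen, so hashing every block as below is purely an illustration, and the tiny block size is invented for the example:

```python
import hashlib

BLOCK_SIZE = 4  # bytes per block for this toy example; real systems use e.g. 64 KB


def block_hashes(image: bytes) -> list:
    """Hash every fixed-size block of a disk image."""
    return [hashlib.sha256(image[i:i + BLOCK_SIZE]).hexdigest()
            for i in range(0, len(image), BLOCK_SIZE)]


def changed_blocks(old_hashes: list, new_hashes: list) -> list:
    """Return indices of blocks that differ between two backup points.

    Only these blocks need to travel to the backup target.
    """
    return [i for i, (a, b) in enumerate(zip(old_hashes, new_hashes)) if a != b]


full = b"AAAABBBBCCCCDDDD"    # initial disk image: 4 blocks
after = b"AAAAXXXXCCCCDDDD"   # only block 1 has changed

print(changed_blocks(block_hashes(full), block_hashes(after)))  # [1]
```

With a 16-byte image only one of four blocks moves; at data-center scale the same ratio is what makes incremental-forever backup practical.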
Depending on which study you read, most data centers are less than 50% virtualized. That means more than half of the server population is still essentially a single application running on a single physical server. In my experience, many of these servers are not virtualized because they run a mission-critical, resource-intensive application, and there is concern about placing that application in a highly shared virtual environment.
While legacy backup systems are gaining some of the features that virtualization-specific backup products have, they still lag in this area. At the same time, most of the virtualization-specific applications do not provide important legacy capabilities like complete application protection, robust data retention and tape support.
The result is that customers end up running at least two backup applications in their environment. In fact, even in 100% virtualized environments, we have seen many data centers running two applications -- a virtualization-specific backup for day-to-day protection and a legacy application to create a robust archive of the virtual environment as well as to get a copy of it onto tape.
The Solution To The Backup Gap
Running two or more separate backup applications to meet the needs of the enterprise is far from ideal. As we discussed in a recent webinar, "What's Breaking Enterprise Backup and How to Fix it", there are three viable ways to address the gap in capabilities between virtualization-specific and legacy products.
First, you can choose to wait for virtualization-specific backup software to add legacy features like protecting non-virtual machines and supporting tape libraries. Second, you can choose to wait for legacy backup applications to add the features that have made virtualization-specific backup applications so popular. Finally, you can choose to wait for legacy backup applications to develop an API-like capability that would allow VM-specific backup applications to plug into their enterprise features.
As you can see, each of these options requires some form of waiting -- but several vendors are closing the gap. We have seen several virtualization-specific backup applications add support for non-virtualized systems, and a few have announced plans for tape support. We have also seen many of the enterprise backup applications add features that were once only available from virtualization-specific backup applications. Talk to these vendors to find out what is available now and what their roadmaps look like, and you will get a sense of who can get you to a single backup strategy the soonest.
The third option, the API or modularization approach, makes the most sense for the long term. As we discussed in our recent article "Enterprise Backup is Broken", backup software vendors have to come to the realization that no single vendor can do it all. The data center would be much better off if legacy enterprise backup systems allowed smaller point products to plug into their enterprise capabilities. We would get unified and best-of-breed data protection at the same time.
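One way to picture the API approach: the legacy platform keeps what it does well -- the catalog, retention and tape-out -- and delegates data movement to whichever point product handles a given workload best. A hypothetical sketch in Python; the class and method names are invented for illustration and imply no actual vendor API:

```python
from abc import ABC, abstractmethod


class BackupPlugin(ABC):
    """Interface a point product would implement to plug into the
    enterprise platform's scheduling, retention and tape features."""

    @abstractmethod
    def handles(self, workload: str) -> bool:
        """Can this plugin protect the given workload?"""

    @abstractmethod
    def backup(self, workload: str) -> str:
        """Run a backup and return an identifier for the copy produced."""


class VMBackupPlugin(BackupPlugin):
    """Stand-in for a virtualization-specific backup product."""

    def handles(self, workload: str) -> bool:
        return workload.startswith("vm:")

    def backup(self, workload: str) -> str:
        # A real product would use snapshots, changed-block tracking, etc.
        return f"snapshot-of-{workload}"


class EnterprisePlatform:
    """Legacy platform: owns the catalog and retention policy, but
    delegates the actual data movement to registered plugins."""

    def __init__(self):
        self.plugins = []
        self.catalog = []

    def register(self, plugin: BackupPlugin) -> None:
        self.plugins.append(plugin)

    def protect(self, workload: str) -> str:
        for plugin in self.plugins:
            if plugin.handles(workload):
                copy_id = plugin.backup(workload)
                self.catalog.append(copy_id)  # retention/tape-out applied here
                return copy_id
        raise ValueError(f"no plugin registered for {workload}")


platform = EnterprisePlatform()
platform.register(VMBackupPlugin())
print(platform.protect("vm:web01"))  # snapshot-of-vm:web01
```

The design point is the separation of concerns: the point product never reimplements retention or tape handling, and the platform never needs to understand hypervisor internals.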
In my next column I'll discuss how -- if backup software vendors are not careful -- the disk backup appliance may end up being the central point of consolidation. Already today, multiple backup applications can send data to a single disk backup appliance, and disk backup appliance vendors are focused on providing tighter integration with a broad range of backup software solutions.