
Disaster Recovery In The Cloud: Key Considerations

When the notion of cloud computing first arose, there was a lot of talk about how wonderful it was to have an affordable, manageable “secondary” data center available in case of an emergency. Enterprises are no strangers to the need for a secondary, or backup, data center as a means to address potential disasters. Many have them, but in the wake of cloud, some have migrated from owning their own to renting one in the cloud. Organizations that were just growing to the point of needing such a disaster recovery option often turned straight to the cloud -- why invest in a physical data center when there is a veritable smorgasbord of options available in the form of cloud computing?

Like traditional DR architectures, employing a cloud-based secondary site requires more than just replicating your data center. Data changes constantly and vigilance is required to ensure that those changes are replicated in a timely fashion to wherever your secondary site might be. The form that takes depends on what your plan of record may be in the event of a disaster, and indeed, what you consider a disaster.

Because it’s always live and available, the cloud offers the ability to support multiple levels of disaster recovery. From a single application to the entire data center, there are myriad ways in which you can ensure the availability of those applications and data critical to the business. Moreover, the variety of resource cost models available from cloud providers lets organizations choose anything from a recovery time measured in hours to the traditional high-availability model that enables immediate failover at scale.
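To make that cost trade-off concrete, here is a rough Python sketch comparing an always-on secondary with a restore-on-demand model. The instance counts, storage sizes, and prices are made-up assumptions purely to illustrate the arithmetic; plug in your own provider’s numbers.

```python
# A rough, illustrative cost comparison between an always-on (hot) secondary
# site and an on-demand (restore-on-disaster) model. All prices and sizes
# below are made-up assumptions for the sake of the arithmetic, not quotes
# from any provider.

HOURS_PER_MONTH = 730

def hot_standby_monthly(instances: int, price_per_instance_hour: float) -> float:
    """Secondary site runs continuously, so compute is paid around the clock."""
    return instances * price_per_instance_hour * HOURS_PER_MONTH

def on_demand_monthly(storage_gb: int, price_per_gb_month: float) -> float:
    """Only replicated data/snapshots are stored; compute is paid only during a recovery."""
    return storage_gb * price_per_gb_month

if __name__ == "__main__":
    # Hypothetical numbers: 20 instances at $0.20/hr vs. 5 TB of snapshots at $0.023/GB-month.
    print(f"Hot standby : ${hot_standby_monthly(20, 0.20):,.2f}/month")
    print(f"On demand   : ${on_demand_monthly(5_000, 0.023):,.2f}/month")
```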

Of course, organizations can fall back (see what I did there?) on a traditional “all or nothing” model in which the entire data center is replicated live in the cloud and global server load balancing immediately redirects users if the primary site is knocked out by a disaster or an unanticipated loss of connectivity. The advantage of such an approach is that organizations can run both sites in an “active-active” architectural model, where factors like geographic location, performance, or even utilization determine in real time which “data center” a user is directed to. Because both data centers are active, even a short period of downtime is avoided, and the model provides day-to-day benefits beyond its obvious existential value.
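For the curious, the decision a global server load balancer makes in that active-active model boils down to probing each site and steering users toward a healthy, responsive one. The following is a minimal Python sketch of that logic; the health-check URLs and the two-second timeout are hypothetical, and in practice you would rely on your provider’s GSLB or DNS service rather than a hand-rolled loop.

```python
# A minimal sketch of the decision a global server load balancer (GSLB) makes
# in an active-active setup: probe both "data centers" and steer users to a
# healthy one, preferring whichever answers fastest. The URLs are hypothetical.
import time
import urllib.request

SITES = {
    "primary-dc":  "https://dc1.example.com/healthz",
    "cloud-dr-dc": "https://dr.example.com/healthz",
}

def probe(url: str, timeout: float = 2.0):
    """Return response latency in seconds, or None if the site is unreachable."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            if resp.status == 200:
                return time.monotonic() - start
    except OSError:
        pass
    return None

def choose_site() -> str:
    """Pick the healthy site with the lowest latency; fail over if one is down."""
    results = {name: probe(url) for name, url in SITES.items()}
    healthy = {name: lat for name, lat in results.items() if lat is not None}
    if not healthy:
        raise RuntimeError("No healthy site available -- invoke the DR plan")
    return min(healthy, key=healthy.get)

if __name__ == "__main__":
    print("Directing users to:", choose_site())
```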

To be fair, such an approach is costly over time. Maintaining two data centers with as close to real-time replication of data as possible is not easy, nor is it cheaply realized, even in the cloud. But the return on that investment can be valuable, particularly if your primary data center is in a geographic location known to be vulnerable to natural disasters. If you’ll allow me to completely slaughter my metaphors, a penny of prevention is worth a dollar of cure.


Less costly approaches might involve a cloud-based DR architecture in which only critical applications are “hot” and others are put in stasis unless needed. Depending on the expectations of users, partners, and employees, organizations can balance operational costs against an agreed-upon SLA that includes time to recovery in the event of a disaster. Applications and systems considered non-essential may not be available at all, while those considered critical will be immediately accessible and those in the secondary tier might be granted hours or even a full day before being accessible again. The timeliness with which you agree to restore access in the cloud determines how frequently you need to replicate each application’s data, and thus the overall cost of maintaining the secondary site.
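One way to make such an SLA concrete is to write down the tiers, their recovery objectives, and the replication cadence they imply. The sketch below does that in Python; the tier names, RTO/RPO figures, and schedules are illustrative assumptions to be negotiated with the business, not prescriptions.

```python
# A minimal sketch of how application tiers might map to recovery SLAs and,
# in turn, to replication frequency. All values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class RecoveryTier:
    rto_hours: float   # how quickly the app must be usable again
    rpo_minutes: int   # how much data loss is tolerable
    replication: str   # implied replication cadence in the cloud

TIERS = {
    "critical":      RecoveryTier(rto_hours=0,  rpo_minutes=5,    replication="continuous, near-real-time"),
    "secondary":     RecoveryTier(rto_hours=8,  rpo_minutes=60,   replication="hourly snapshots"),
    "non-essential": RecoveryTier(rto_hours=72, rpo_minutes=1440, replication="daily backups, restore on demand"),
}

# Hypothetical application-to-tier assignments.
APPS = {"order-entry": "critical", "reporting": "secondary", "wiki": "non-essential"}

if __name__ == "__main__":
    for app, tier_name in APPS.items():
        t = TIERS[tier_name]
        print(f"{app:12s} ({tier_name:13s}) RTO {t.rto_hours:>4}h  RPO {t.rpo_minutes:>5}min  -> {t.replication}")
```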

Regardless of your approach, this is where understanding your cloud provider becomes invaluable. One of the benefits of cloud providers is that they tend to have physical data centers in many locations, so you need to be aware of where you’re putting your backup data center. It makes little sense to use the cloud if it is physically in the same city as your primary data center and therefore vulnerable to the same disasters as your own. Try to ensure that -- just as you would with a physical secondary data center -- your cloud-based secondary data center is geographically distant from your primary.
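If you want to sanity-check that separation, a quick back-of-the-envelope distance calculation between your primary site and a candidate cloud region will do. The coordinates and the 500 km threshold in this Python sketch are purely illustrative assumptions; pick a threshold that matches your own risk model.

```python
# A small sketch that checks whether a chosen cloud region is far enough from
# the primary data center to ride out a regional disaster. Coordinates and the
# 500 km threshold are illustrative assumptions.
from math import radians, sin, cos, asin, sqrt

def distance_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance between two points (haversine formula)."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

PRIMARY = (37.77, -122.42)                              # hypothetical: San Francisco
CANDIDATES = [("cloud-region-west", 37.35, -121.95),    # hypothetical: nearby region
              ("cloud-region-east", 38.90, -77.04)]     # hypothetical: distant region

if __name__ == "__main__":
    plat, plon = PRIMARY
    for name, lat, lon in CANDIDATES:
        d = distance_km(plat, plon, lat, lon)
        verdict = "OK" if d > 500 else "too close -- shares regional disaster risk"
        print(f"{name:18s} {d:7.0f} km  {verdict}")
```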

If you’re one of the few organizations that runs entirely in the cloud, get thee a second cloud provider now. There have been very few instances of cloud providers having significant outages (knock on wood), but then again, there are few instances of organizations truly needing the disaster recovery services of their secondary data center, either. Not having a plan for how to deal with a failure of the cloud is akin to not having a disaster recovery plan at all.

And given that the entire notion of disaster recovery rests on the premise that a random, unpredictable event will knock your business out of action by cutting it off from the internet, if you would execute a DR plan for a physical data center, you should plan to execute one for a cloud-based data center, too.

Better to have an executable plan no matter where your primary data center is housed than to end up a showcase slide in someone’s PowerPoint deck on data center failures.

Hear more from Lori MacVittie live and in person at her session "Operationalizing IT With Automation and APIs" at Interop ITX, May 15-19 in Las Vegas. Register now!