Network Computing is part of the Informa Tech Division of Informa PLC


Building High-Availability Networks With Windows Storage Server R2

We almost had a hurricane in Florida this week, and it's only mid-June. The entire Eastern seaboard, the Gulf region, the heartland and possibly even the Canadian end of I-95 are up for grabs to African-born hurricanes and tropical storms. Alberto tried to disrupt our lives in Florida, even though he never made it to hurricane strength. What he did create was pandemonium for IT. And if that was not a big enough scare, Florida Power and Light (FPL) did something nasty in downtown Fort Lauderdale on June 8, and plunged a bunch of big corporate offices on Las Olas Boulevard into darkness for an entire day.

Businesses are now scrambling again to get disaster recovery systems upgraded or in place. Why now when the whole winter was available? People procrastinate. IT departments fight with management to get a budget for DR and failover equipment when there is no hurricane on the horizon; CFOs don't feel pressured to give up the funds and look for excuses to rob IT coffers. When a hurricane threatens or the lights go out, everyone suddenly wants the ultimate failover on the lowest of "bank-width."

What are our options? Over the next few articles, I am going to explore a few. First, let's be clear on the objective. We want to set up a system or systems that allow Windows Server to be replicated, services, data and all, to other locations. Why? So that when the next hurricane hits, or when a tornado strikes, or a dike breaks, or an FPL engineer drops a prune pit into a transformer and the city goes dark, users will be able to continue as if nothing happened.

First, let's define two important words that will be prevalent throughout the architecture development: Replication and Failover. Replication is the most important aspect of any DR plan involving redundancy at another location. We must first replicate our data to remote servers and be sure the data is current before it can be used. Failover is the process of redirecting users to the replicated data in such a way that they are not aware the servers they are connecting to are now in Seattle and no longer in Miami (assuming that Miami has been obliterated). Both failover and replication can be achieved in different ways, but no solution can be implemented without a budget, and choosing one requires more thought than sorting through a pile of credit cards for the best line of credit to use.

Let's first look at the replication scenario. The first data you typically want to replicate to a remote location is your Active Directory data. Fortunately, this is easy to do because Active Directory and the domain controller services on which it exists were designed to be widely dispersed. If you place domain controllers in your remote offices (they are easily installed into AD), replication is handled for you. If you plan to offer failover for Exchange, SQL Server, your file server data, or any other data that resides on a Windows Server 2003 system, AD domain controllers must be within easy reach of the servers you need to "clone" in the new location. This is key.

AD "affinity" for services like Exchange or SQL Server (especially on clusters) needs to be reliable. As part of any HA solution, you thus need to place a DC in the same data center or rack as your servers, or the DCs need to be accessible over a reliable network with reasonable bandwidth (at least 128Kbps reserved, but 256Kbps or higher if possible).
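Before trusting a remote site for failover, it pays to confirm that AD replication between your domain controllers is actually healthy. As a rough sketch, the built-in repadmin and dcdiag tools (included with the Windows Server 2003 Support Tools) can do this; the server name DC01 below is a placeholder for one of your own domain controllers.

```shell
:: Summarize replication status across all DCs in the forest,
:: showing largest replication deltas and any failure counts
repadmin /replsummary

:: Show inbound replication partners and last-success times
:: for a specific domain controller (placeholder name)
repadmin /showrepl DC01

:: Run the replication health test against that DC
dcdiag /test:replications /s:DC01
```

If /replsummary shows growing deltas or failures for a site, the remote DCs there may be serving stale directory data, and any failover that depends on them will inherit the problem.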
