Evaluating Backup Bandwidth with a Portable WAN Emulator
I’ve been using WAN emulators for well over 20 years and have tried all sorts of Windows and Linux tools as well as dedicated appliances.
Personally, I prefer an appliance instead of trying to build my own since I don’t have time to deal with maintaining another computer, figuring out which update messed things up, and wondering how accurate it really is.
Got a call from a client who wanted to locate a backup server offsite and was trying to figure out how much bandwidth the application requires. The vendor told him a 100 Mb link would be fine. After calling several carriers, he found that some can’t provide a 100 Mb link at the recovery location, and the ones that can want an absurd amount of money.
I spoke with the client about how the backup server interacts with the master server and what traffic to expect. I explained that the best approach would be for me to spend some time emulating various bandwidth scenarios between the servers, with the goal of determining the application’s bandwidth requirements.
Fortunately, I knew other clients who use various carriers in the proposed area and gathered latency, packet loss, and bandwidth values from them. I chose 800-byte ping packets since that is the servers’ average packet size.
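Those per-carrier numbers can be collected with ordinary large-payload pings and a small script to parse the results. A minimal sketch, assuming the standard Linux `ping` summary format (the host name and sample count are illustrative):

```python
import re
import subprocess

def parse_ping_summary(output: str) -> dict:
    """Pull packet loss and RTT stats out of Linux ping's summary lines."""
    loss = re.search(r"(\d+(?:\.\d+)?)% packet loss", output)
    stats = {"loss_pct": float(loss.group(1))}
    rtt = re.search(
        r"rtt min/avg/max/mdev = ([\d.]+)/([\d.]+)/([\d.]+)/([\d.]+) ms", output
    )
    if rtt:
        stats.update(zip(("rtt_min_ms", "rtt_avg_ms", "rtt_max_ms", "rtt_mdev_ms"),
                         map(float, rtt.groups())))
    return stats

def probe(host: str, payload_bytes: int = 800, count: int = 20) -> dict:
    """Send `count` pings with an 800-byte payload and parse the summary."""
    out = subprocess.run(
        ["ping", "-c", str(count), "-s", str(payload_bytes), host],
        capture_output=True, text=True,
    ).stdout
    return parse_ping_summary(out)
```

Running `probe("backup.example.net")` from each candidate site gives comparable loss and latency figures to feed into the emulator.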
It is important to understand how your WAN emulator behaves. Some products ask you to input the host and network characteristics and then predict performance and application response time; unfortunately, this requires knowledge of all those variables. Others are referred to as ‘modeling’ tools, with a slightly different twist: you provide real device configurations, logs, or trace files.
In this case, I used my portable Apposite Linktropy Mini-G WAN emulator because I wanted to test with the real hosts and the real application; I had no time to perform an application baseline.
Methodology and documentation are critical when conducting these exercises. Mine started with sitting down with the client and having him show me a few important tasks or monitoring exercises. I took notes and screenshots to document what was done and the expected results. We also noted how responsive the application was.
Then I asked the client what bandwidth he would like to emulate and he replied 100 Mb and 10 Mb. I then added that we should use the 1 Gb connection as a reference point and baseline.
I suggested that we start with 10 Mb. We had a discussion to determine the best approach and the impact to the application. Since this was a lab, I had a lot of flexibility that I normally don’t have.
The plan was to disconnect the standby server, forcing all traffic to the master server, and to disconnect the second backup Ethernet connection, ensuring all traffic went through a single Ethernet port.
After installing the device inline, we confirmed that the server was online and then the client repeated all the tasks we documented previously. This part was interesting because even though it had only been about 30 minutes, he had already changed the process and skipped a few steps. When I showed him what we did earlier, he chuckled and we went through it again. This documentation can be used if I wanted to repeat the test or automate it with the various programming or macro packages.
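The documented task list is also the starting point for automation. A minimal sketch of a timing harness, where the task names and the stand-in actions are entirely hypothetical (the real callables would wrap whatever macro or CLI step reproduces each documented action):

```python
import time

def run_checklist(tasks):
    """Run documented tasks in order and record how long each takes.

    `tasks` is a list of (name, callable) pairs.
    """
    results = []
    for name, action in tasks:
        start = time.perf_counter()
        action()
        results.append((name, time.perf_counter() - start))
    return results

# Hypothetical stand-ins for the real documented steps.
demo = run_checklist([
    ("open dashboard", lambda: time.sleep(0.01)),
    ("check replication status", lambda: time.sleep(0.01)),
])
for name, seconds in demo:
    print(f"{name}: {seconds:.3f}s")
```

Repeating the same checklist at each emulated bandwidth gives directly comparable timings instead of impressions.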
At the end of the tests, we both determined that all ran well at 10 Mb, but when we went back to the server we saw a “synchronizing files” message along with a list of files. I asked what that was about, and he explained that when the connection between the master and slave is broken and reconnected, the application performs a database synchronization. I suggested we use that as another test point.
Below is a screenshot of the Linktropy Mini-G and our settings for the 10 Mb test. For most of my labs, classes, and engagements the Linktropy Mini-G fits the bill, but there are various other models to meet your needs.
The results require a bit of interpretation, and the client interview was key to understanding them and reporting a conclusion. At first glance, 10 Mb looks like the clear loser.
The point that puts this time into context is that the synchronization only occurs when the servers lose connectivity. After performing some quick captures, I determined that heartbeat packets are sent every 90 seconds and the regular database updates are only approximately 2.5 MB when required. The real determining factor is that there was no performance degradation while we were set to 10 Mb.
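Those capture numbers can be sanity-checked with quick arithmetic. The 2.5 MB update size and 90-second heartbeat interval come from the captures above; the rest is unit conversion, ignoring protocol overhead:

```python
def transfer_seconds(size_mb: float, link_mbps: float) -> float:
    """Time to move `size_mb` megabytes over a link of `link_mbps` megabits/s,
    ignoring protocol overhead."""
    return size_mb * 8 / link_mbps

# A 2.5 MB database update over each emulated link:
for mbps in (10, 100, 1000):
    print(f"{mbps:>4} Mb/s: {transfer_seconds(2.5, mbps):.2f} s")

# Worst case steady-state load, assuming one 2.5 MB update per 90-second
# heartbeat interval -- a small fraction of even the 10 Mb link:
steady_mbps = 2.5 * 8 / 90
print(f"worst-case steady load: {steady_mbps:.2f} Mb/s")
```

A 2-second update transfer at 10 Mb/s, against a background load well under 1 Mb/s, is consistent with the lack of visible degradation during the test.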
Now 10 Mb seems to work with a note that a full database synchronization takes much longer. The client said that this is not a deal breaker since he can still use the application during the synchronization process with no real performance hit.