1. Understand current protocol-level performance

Document two things: every application that will follow the same network path as your cloud applications, and every application that will ultimately be ported to the cloud, regardless of path. Most cloud applications are TCP-based, so focus on packet loss and delay flow metrics. For real-time UDP applications, such as voice or video, add jitter. You can usually get bandwidth requirements from your application vendor, but vendors rarely publish benchmarks for packet loss or delay. In our experience, a good connection to the cloud has less than 200 ms of one-way delay and less than 0.05% packet loss. You can tolerate more loss if your delay is lower, and more delay if you drop no traffic; TCP performs poorly when both delay and loss are high. Make sure you have enough bandwidth, obviously, but don't assume that sufficient bandwidth guarantees good cloud performance. I have seen entire e-commerce sites crippled by firewalls with duplex mismatches that introduced high packet loss.
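As a rough illustration, the thresholds above can be turned into a simple go/no-go check. The 200 ms and 0.05% figures come from the text; the exact trade-off curve between delay and loss is an assumption of mine, not a published benchmark:

```python
# Rule-of-thumb check: under ~200 ms one-way delay and under ~0.05% loss
# is a good cloud connection, with some trade-off between the two.
# The trade-off branch below is an illustrative assumption.

def connection_ok(one_way_delay_ms: float, loss_pct: float) -> bool:
    """Return True if the link roughly meets the article's thresholds."""
    if loss_pct == 0.0:
        return True  # no loss: TCP tolerates the extra delay reasonably well
    if one_way_delay_ms <= 200 and loss_pct <= 0.05:
        return True  # inside both thresholds
    # Allow somewhat more loss only when delay is well under the threshold
    # (assumed cutoffs for illustration).
    return one_way_delay_ms < 100 and loss_pct <= 0.1

print(connection_ok(150, 0.02))   # inside both thresholds
print(connection_ok(250, 0.10))   # high delay AND high loss: the bad case
```

The point is simply that delay and loss must be judged together, not in isolation.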
2. Test the network to the cloud

There are many ways to do this. The best is to have a pilot or beta application running that you can test against. If the application is Web-based, you can use synthetic transaction tools to measure both the TCP connect time and the full application response time. Examples include Perl's WWW::Mechanize module, Cisco's IP SLA HTTP operation and a host of commercial applications. Remember: the main thing to test at this point is the underlying network performance, not the full end-to-end application. If you need to, you can take a packet capture of the transaction (using tools such as tcpdump, Wireshark or Argus) and check it for loss and delay. If you can't get the actual application going, or it doesn't lend itself to this kind of testing (perhaps it uses two-factor authentication), an equally viable approach is to simulate traffic that matches your application's network profile into the cloud. For example, bringing up an Amazon AMI with open source software such as iperf or D-ITG (my favorite) is a great way to test the service level into Amazon's cloud. These tools offer a dizzying array of options for measuring the service level the network will deliver.
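A minimal sketch of the synthetic-transaction idea, using only the Python standard library (the article mentions Perl's WWW::Mechanize and Cisco IP SLA; this version is my own illustration). It stands up a throwaway local HTTP server so the snippet is self-contained; against a real pilot application you would point both measurements at its hostname and URL instead:

```python
import http.server
import socket
import threading
import time
import urllib.request

# Throwaway local server so the example runs anywhere; replace with
# your pilot application's host and URL in a real test.
server = http.server.HTTPServer(("127.0.0.1", 0),
                                http.server.SimpleHTTPRequestHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

def tcp_connect_time_ms(host, port, timeout=5.0):
    """Time only the TCP three-way handshake."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000.0

def http_transaction_time_ms(url, timeout=10.0):
    """Time a full HTTP GET: connect, request and body transfer."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        resp.read()
    return (time.perf_counter() - start) * 1000.0

connect_ms = tcp_connect_time_ms("127.0.0.1", port)
full_ms = http_transaction_time_ms(f"http://127.0.0.1:{port}/")
print(f"TCP connect: {connect_ms:.2f} ms, full transaction: {full_ms:.2f} ms")
server.shutdown()
```

Comparing the connect time (pure network round trip) against the full transaction time helps separate network delay from application processing time.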
Once you have these two data sets, the actual traffic and the simulation traffic, you can compare them. The power of cloud computing, from the end users' perspective, lies not in how it differs from their normal operation but in how closely it matches it. In other words, users want the applications to perform at least as well as they did before. So use the metrics from your current application deployment as a baseline, compare them with your simulation results, and see whether you are truly ready to use the cloud.
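The final comparison can be sketched as a simple regression check. The numbers and the 25% tolerance below are made up for illustration; in practice you would plug in the metrics you collected in steps 1 and 2:

```python
# Baseline metrics from the current deployment vs. the cloud simulation.
# All values are illustrative placeholders.
baseline = {"one_way_delay_ms": 40.0, "loss_pct": 0.01, "jitter_ms": 2.0}
cloud_sim = {"one_way_delay_ms": 85.0, "loss_pct": 0.03, "jitter_ms": 5.0}

# Users expect the cloud to perform at least as well as today, so flag
# any metric that regresses by more than an (assumed) 25% tolerance.
TOLERANCE = 1.25

regressions = {}
for metric, base in baseline.items():
    cloud = cloud_sim[metric]
    if cloud > base * TOLERANCE:
        regressions[metric] = (base, cloud)

for metric, (base, cloud) in regressions.items():
    print(f"{metric}: baseline {base} -> cloud {cloud} (worse)")

ready = not regressions
print("Ready for cloud:", ready)
```

If every metric stays within tolerance, the network side of the migration is in good shape; otherwise the flagged metrics tell you where to dig in before going live.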