Route Optimization: Route Optimizers Put You in the Driver's Seat
Route optimizers maximize performance, help honor usage thresholds and ensure your providers are living up to their SLAs.
December 5, 2003
The way to make any connection more reliable is with diversity and redundancy. Many enterprise networks have redundant routes with redundant connections to the edge via diverse physical paths. Although such a setup adds cost and complexity, we consider this insurance well worth the trouble. Accomplishing this same duplication on the Internet, while not cheap, is possible by using multiple ISPs to provide redundant and diverse paths from end to end. Although this approach can be difficult and expensive, it is de rigueur for any organization serious about delivering applications reliably over the Internet. The question then becomes whether you develop these diverse paths yourself, outsource connectivity to a mega provider while keeping Web servers in-house, or outsource the whole shebang. There are pros and cons with each tactic.

First, the gotcha: Having multiple ISPs doesn't guarantee true redundancy. For example, what if physical access to both ISPs is provided by the same local carrier, which has only one path into your building? Answer: One backhoe could put an end to your best-laid plans. You need to establish multiple, divergent physical paths into and out of your building. This can be provided via a Sonet service from the carrier, but be aware that Sonet doesn't guarantee diverse physical paths unless it is designed correctly, and it might not be cost effective for you or the carrier.
You also must be clear about your goals. Do you want to load balance to get the most use of both links, or purely provide redundancy? If you load balance your connections among multiple links, and start using more than 50 percent of the capacity of both, you won't have full redundancy: If one link fails, the other won't be able to support the additional traffic. A situation like this could be as bad as having the link go down completely because expectations are raised, and performance becomes unpredictable. If you haven't oversubscribed too much, you can compensate somewhat by prioritizing traffic.
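The failover arithmetic above is easy to sanity-check. Here is a minimal sketch (the link capacities and utilization figures are hypothetical) that tests whether the remaining links could absorb all current traffic if any one link failed:

```python
def survives_failover(link_capacities_mbps, utilizations):
    """Check whether any single link failure leaves enough spare
    capacity on the remaining links to carry all current traffic."""
    loads = [c * u for c, u in zip(link_capacities_mbps, utilizations)]
    total_load = sum(loads)
    for i, _ in enumerate(link_capacities_mbps):
        remaining = sum(c for j, c in enumerate(link_capacities_mbps) if j != i)
        if total_load > remaining:
            return False  # losing link i would oversubscribe the rest
    return True

# Two equal links, each at 40 percent: the survivor would run at 80 percent -- OK.
print(survives_failover([100, 100], [0.4, 0.4]))   # True
# Each at 60 percent: the survivor would need 120 percent -- overloaded.
print(survives_failover([100, 100], [0.6, 0.6]))   # False
```

With equal links, the rule of thumb in the text falls out directly: stay under 50 percent on each, or accept degraded service during a failure.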
Then there's the bottom line. The cost of having two expensive links each being used at less than 50 percent of their capacity, not to mention all that extra hardware, is considerable. Be prepared to make the case that there is a cost for true redundancy, and decide if there is a business case to justify it. This problem may be compounded by lingering fears that many ISPs are on shaky financial ground. Furthermore, low-speed Internet-access link prices will increase by 25 percent through 2005, according to Gartner. Choosing an established ISP as your main provider can give you some flexibility to use a less-known (and possibly less-expensive) provider as a secondary. Some suggestions to make your case:
• Consider entering into a contract of sufficient length so that installation fees will be waived.
• In exchange for a long-term contract, which benefits the ISP by reducing churn, ask for a price-stabilization clause.
• Do your due diligence. Vendor-viability metrics include cash flow, cash burn rate, cash on hand and total debt. If you're unsure, consider including exit clauses triggered by indications of financial straits or missed SLAs (service-level agreements).
Another potential problem, especially if you go the well-known/small player route to save money, is that if you have two ISPs, and one of them accesses the Internet through the other, you are still dependent on one ISP. Before you sign up for a second provider, investigate its network design. If the provider shows any reluctance to provide this information, walk away.
Of course, using multiple ISPs is not without problems. For example, if you're using Internet VPNs, your best bet is to make one provider the primary ISP for all your locations. That way, there's no doubt about where to point the finger if user VPN connectivity goes south. It also means that, if your primary ISP offers various levels of IP-based QoS, you can reap the benefits. The higher class of service that you purchase for some of your packets with your lead ISP will be meaningless if your traffic traverses a second ISP link. Although having one ISP provide separate, redundant paths for your site will be less expensive than going with separate vendors, your eggs are still in one provider's basket.
Another point to consider when you use two ISPs is their willingness and ability to advertise your network. There is no guarantee that this will happen, especially if you have a Class C or smaller network. One way around this is to use one of the DNS-based route optimization products that we review in "Mapping Out the Best Route". These products require that you use multiple external addresses that are advertised by only one ISP.
If you decide to advertise your network via multiple ISPs, you will have to peer, or "multihome with," them via BGP, the routing protocol that runs the Internet. This means you will have to buy routers with sufficient memory and CPU stamina to process more than 100,000 BGP routes from each ISP. You also will need someone on staff, or a consultant, who understands BGP. These skills don't come cheaply.

In addition, you will have to apply for your own AS (Autonomous System) number, which identifies networks on the Internet. Every private network and ISP has an AS number. In fact, when networks are routed via BGP, the protocol bases its choice of the best routing path on which has the fewest AS numbers. The number of routers that must be traversed and the speeds of the links are not factored by BGP into the route-choosing equation. Although the protocol comes with lots of tweakable options that can improve performance, it takes beaucoup skill and time to tune a network optimally. For large enterprises, the BGP-based route-optimization products we tested are worth every penny because they tune BGP to your advantage automatically.

Given the need to monitor multiple ISPs' topologies, and the fact that multiple providers also mean multiple bills to pay, SLAs to manage and helpdesks to call, outsourcing your Internet diversity may be an attractive option. One company that provides access to diverse Internet paths is Internap Network Services Corp., an ISP that has connections to all of the other major ISPs. This can be a big advantage, considering that many performance problems on the Internet occur where ISPs connect to one another. One disadvantage of Internap's approach is that the company's PoPs (points of presence) are limited in number, meaning that companies outside major metro areas may not have this option.
Internap claims that some companies pay for long-distance leased lines just to reach one of its PoPs, but that's a pricey proposition.
While we were working on this article, Internap finalized its purchase of NetVmg, one of the route-optimization vendors whose products we tested, and it also bought Sockeye Networks. Sockeye provides a remotely managed route-optimization service, but its product did not meet our testing criteria.
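BGP's preference for the path with the fewest AS numbers, described above, can be sketched in a few lines. The prefixes and AS numbers below are made up for illustration, and a real BGP decision process weighs several attributes before AS-path length (local preference, for one); this shows only the step the article describes:

```python
def best_route(routes):
    """Pick the route with the shortest AS path, mirroring the
    AS-path-length step of the BGP decision process. Note that
    router hop count and link speed play no part in the choice."""
    return min(routes, key=lambda r: len(r["as_path"]))

# Two advertisements for the same prefix, one learned from each ISP.
routes = [
    {"next_hop": "192.0.2.1",    "as_path": [65001, 65010, 65020, 64500]},
    {"next_hop": "198.51.100.1", "as_path": [65002, 64500]},
]
print(best_route(routes)["next_hop"])  # 198.51.100.1 -- shorter AS path wins
```

The blind spot is visible here: the two-AS path wins even if it runs over a congested, low-speed link, which is exactly the gap the route-optimization products try to close.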
Take It Away
A third option is to outsource your Internet connectivity to a hosting provider, thus making all of the difficulties we outlined above no longer your problem. Although this simplifies matters on the customer end, you'd better ask a prospective provider a lot of hard questions before going this route. If the provider doesn't satisfy you that it is doing everything possible to assure high levels of access and performance, move on to the next provider. You will want to ask what it is doing to provide redundancy and performance guarantees across the Internet. You will, of course, also want to know how it backs up and secures your applications, as well as what backup power its UPSs and generators provide.
Once you sign on, there are ways to keep your provider honest. If you are concerned about the performance of your ISP or hosting provider, or even your own internal hosting, a Web site monitoring service can monitor them for you in real time and provide alerts when there is a problem. These services can also offer historical performance data; some will provide this data on specific applications and help you do capacity planning for new services before you roll them out. In our recent review of these services, Gomez walked away with our Editor's Choice (see "The View From There"). Another well-known company in this market is Keynote Systems, which is known for monitoring the performance of the Internet via its Internet Health Report. Keynote says it has more than 1,500 computers scattered worldwide that constantly monitor Internet performance.

Of course, no matter how much you outsource, you will still need reliable direct Internet access for critical traffic, such as e-mail, originating within your organization. There's no way around this.
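The monitoring services do this at scale from many vantage points, but the core measurement is simple. This sketch (the URL and threshold are placeholders) times an HTTP fetch and flags responses that fail or arrive too slowly:

```python
import time
import urllib.request

def probe(url, timeout=5.0, threshold_secs=2.0):
    """Fetch a URL and time it; return (ok, elapsed_seconds).
    ok is False on any error, non-200 status or slow response."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            resp.read()
            elapsed = time.monotonic() - start
            return resp.status == 200 and elapsed <= threshold_secs, elapsed
    except OSError:
        return False, time.monotonic() - start

# Run from cron and alert whenever ok comes back False:
# ok, secs = probe("https://www.example.com/")
```

A single probe from inside your own network only tells you so much; the value of a third-party service is that it measures from your customers' side of the Internet.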
As Good as It Gets
We all strive for perfection, but there's a limit to how much you can improve the performance of even the best ISP. The laws of physics add a certain amount of latency--if you do the math, you'll see that the speed of light will add round-trip latency of more than 30 milliseconds between the East and West Coasts. Some ISP network designs are such that your data could travel close to that distance just to get to a neighbor who uses a different ISP, and it becomes an even greater factor when dealing with international connections that span oceans. Every router hop, and every router that has to deal with queued-up packets, adds yet more latency. Still, when your business depends on reliable access, you need not let "best effort" be the last word.
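The back-of-the-envelope math above is worth working through. This sketch (the coast-to-coast distance is approximate) computes the round-trip propagation delay, first at the vacuum speed of light and then at the roughly one-third slower speed light travels in fiber:

```python
C_KM_PER_MS = 300_000 / 1000         # speed of light in vacuum: 300 km per ms
FIBER_KM_PER_MS = C_KM_PER_MS / 1.5  # light in glass fiber is about 1.5x slower

def rtt_ms(distance_km, km_per_ms):
    """Round-trip propagation delay over a given one-way distance."""
    return 2 * distance_km / km_per_ms

coast_to_coast_km = 4_500  # roughly New York to San Francisco
print(round(rtt_ms(coast_to_coast_km, C_KM_PER_MS)))      # 30 ms, the hard floor
print(round(rtt_ms(coast_to_coast_km, FIBER_KM_PER_MS)))  # 45 ms over real fiber
```

And that is before any routing detours, queuing or serialization delay, which is why no optimizer can ever take cross-country latency to zero.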
Peter Morrissey is a full-time faculty member of Syracuse University's School of Information Studies, and a contributing editor and columnist for Network Computing. Write to him at [email protected].
The Internet's beauty is its resiliency--thanks to its military roots, the network was designed so that messages can be routed and rerouted in multiple directions, ensuring it can function even if parts are down or destroyed. That's the upside for enterprises that depend on it for critical communications. But there's a dark side as well--no centralized management, for example--and problems do happen. Sometimes these are just blips, causing a couple of lost packets or a few extra milliseconds downloading a page. Sometimes, though, "blip" doesn't nearly cover it: In January, the SQL Slammer worm did a number on a big portion of the Internet, causing serious packet loss and completely saturating some circuits. Many ISPs--and, by extension, their customers--felt the pain.

Still, though there's only one Internet, you can take steps to provide diversity and redundancy, the keys to reliability on any network. How? Use multiple ISPs to provide redundant and diverse paths from end to end. Granted, this isn't an inexpensive proposition, and you'll need to do your homework to ensure that the providers you're considering have truly divergent physical paths. But there are devices that will help make the most of your investment: These route optimizers not only will maximize performance, they also can help honor usage thresholds and ensure your providers are living up to their SLAs. We tested devices from RouteScience Technologies, Internap Network Services (formerly NetVmg), Radware and F5 Networks (see "Mapping Out the Best Route") and found that, if you can handle the complexity of BGP, Internap's FCP 100 has an outstanding mix of features, functionality and management.
If DNS is more your speed, Radware's LinkProof was manageable and scalable.

Internet route-optimization products are only part of the solution, according to Peter Sevcik, president of NetForecast, a network technology consultancy that helps clients deal with application performance over the Internet. More can be done beyond improving raw performance over the Internet, and improvements are needed, he says, even though your overall performance picture may seem quite rosy. That's because many of the statistics that we see showing Internet latency and packet loss are averages, with a wide spectrum of performance among customers accessing any given Web site. Sevcik has done studies (available at www.netforecast.com) that show there will always be a group of clients receiving very poor performance, even though the average numbers may look quite good.
So what can be done beyond optimization? Sevcik is a big believer in using content-delivery networks, like Akamai's, as well as products and services that make more efficient use of network resources, such as accelerators from NetScaler, Peribit Networks and Redline Networks. He also advocates optimizing applications so that they are less dependent on the Internet's performance because, in the end, we can speed the Internet up only so much.