4 Ways To Ease Network Bottlenecks

Learn about load balancing and other techniques for improving network throughput.

When it comes to tackling network bottlenecks, networking pros have often relied on a limited number of techniques: increase link throughput, configure port channeling, or integrate quality of service (QoS). While these are all still valid methods, network engineers have a few additional tricks up their sleeves in 2017. Here are four modern ways that engineers ease congestion throughout various parts of an enterprise network.

1. Load balancing

An organization's connectivity to the internet is a common spot for network congestion. Many businesses spend money to install two or more internet links for full redundancy. The problem is that these connections are commonly set up in an active/standby configuration, which means only one internet link can be used at a time. The secondary connection sits idle until the primary stops routing traffic. So as primary link utilization grows to the point where it becomes a bottleneck, the company is paying for additional throughput it can't use.
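To see why active/standby wastes capacity, here is a minimal Python sketch of a failover-style selector. It is purely illustrative; the link names and capacities are hypothetical, not drawn from any particular product.

```python
# Hypothetical illustration of an active/standby internet edge: all
# traffic follows the primary link while it is up, so the secondary
# link's capacity is paid for but never used.

LINKS = [
    {"name": "primary_isp",   "capacity_mbps": 500, "up": True},
    {"name": "secondary_isp", "capacity_mbps": 500, "up": True},
]

def pick_link_active_standby(links):
    """Return the first healthy link; later links are pure standby."""
    for link in links:
        if link["up"]:
            return link
    raise RuntimeError("no internet links available")

if __name__ == "__main__":
    chosen = pick_link_active_standby(LINKS)
    usable = chosen["capacity_mbps"]
    paid_for = sum(l["capacity_mbps"] for l in LINKS)
    print(f"forwarding over {chosen['name']}: "
          f"{usable} Mbps usable of {paid_for} Mbps purchased")
```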

The Border Gateway Protocol (BGP) is often implemented in large enterprise networks to take advantage of internet load sharing. Yet configuring and tuning this exterior gateway protocol can be tricky and cost-prohibitive. Today, there's an alternative for load balancing internet connections: many network vendors offer load-balancing appliances that provide added redundancy as well as the ability to use two or more internet pipes simultaneously. Cisco's cloud-managed Meraki firewalls are a great example.
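By contrast, a load-balancing edge device can spread flows across both links at once. The sketch below is a simplified model of per-flow hashing; it is not Meraki's or any other vendor's actual algorithm, and the link names and flow tuples are assumptions for illustration.

```python
import hashlib

# Hypothetical per-flow hashing across two internet links, roughly the
# idea behind an active/active load-balancing appliance. Real products
# also weight links by capacity, health-check them, and pin sessions.

LINKS = ["primary_isp", "secondary_isp"]

def pick_link_per_flow(src_ip: str, dst_ip: str, dst_port: int, links):
    """Hash the flow tuple so all packets of one flow stay on one link."""
    key = f"{src_ip}-{dst_ip}-{dst_port}".encode()
    digest = hashlib.sha256(key).digest()
    return links[int.from_bytes(digest[:4], "big") % len(links)]

if __name__ == "__main__":
    flows = [
        ("10.0.0.5", "93.184.216.34", 443),
        ("10.0.0.6", "151.101.1.69", 443),
        ("10.0.0.7", "142.250.72.14", 80),
    ]
    for flow in flows:
        print(flow, "->", pick_link_per_flow(*flow, LINKS))
```

Because the hash keeps each flow on a single link, both pipes carry traffic simultaneously without reordering packets within a session.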

2. WAN optimization and SD-WAN

Another common spot for network bottlenecks is a company's wide area network (WAN). Because the WAN often leverages leased lines from service providers, increasing throughput usually means incurring significant recurring costs. Fortunately, two technologies have emerged over the past few years that help reduce WAN congestion without having to upgrade leased line throughput. The first method, WAN optimization, is deployed via appliances installed at either end of the WAN connection. The appliances use a series of software-based optimization tools to squeeze as much efficiency out of a link as possible. Techniques include compression, caching, data deduplication, and traffic shaping.
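To make one of those techniques concrete, the following Python sketch models data deduplication in a very simplified way. The fixed chunk size, hashing scheme, and cache are illustrative assumptions, not any vendor's implementation: the sending appliance replaces chunks it has already sent with short references, so repeated data never crosses the WAN twice.

```python
import hashlib

# Simplified illustration of WAN-optimization deduplication: split the
# payload into fixed-size chunks, and send a short hash reference for
# any chunk the far-side appliance has already cached.

CHUNK_SIZE = 64  # bytes; real appliances use variable-length chunking

def dedupe(payload: bytes, sent_chunks: set) -> list:
    """Return a list of ('data', chunk) or ('ref', digest) records."""
    records = []
    for i in range(0, len(payload), CHUNK_SIZE):
        chunk = payload[i:i + CHUNK_SIZE]
        digest = hashlib.sha1(chunk).hexdigest()
        if digest in sent_chunks:
            records.append(("ref", digest))   # already on the far side
        else:
            sent_chunks.add(digest)
            records.append(("data", chunk))   # must cross the WAN
    return records

if __name__ == "__main__":
    cache = set()
    first = dedupe(b"REPORT-Q1" * 40, cache)
    second = dedupe(b"REPORT-Q1" * 40, cache)  # same data sent again
    print("first transfer sends", sum(r[0] == "data" for r in first), "chunks")
    print("second transfer sends", sum(r[0] == "data" for r in second), "chunks")
```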

The next evolution of WAN optimization is SD-WAN, which adds another layer of software-based optimization intelligence into the mix. An SD-WAN architecture creates a virtual overlay where multiple WAN connectivity options are aggregated together. The WAN links could be leased lines such as MPLS, VPN connections over broadband internet, or a combination of the two. The SD-WAN software then monitors each link and routes packets down the optimal path at any given point in time. If congestion is ever detected on one of the WAN connection options, traffic is routed around it to avoid a bottleneck.
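A hugely simplified sketch of that path-selection logic appears below. The link names, metrics, and scoring formula are hypothetical assumptions, not any SD-WAN vendor's policy: the idea is simply to score each underlay link from its measured latency and loss and steer traffic onto whichever link currently scores best.

```python
# Hypothetical SD-WAN path selection: score each underlay link from
# measured latency and loss, and forward over the best-scoring link.
# Real SD-WAN products apply per-application policies, brownout
# thresholds, and continuous probing; this only shows the core idea.

LINKS = {
    "mpls_leased_line": {"latency_ms": 18, "loss_pct": 0.0},
    "broadband_vpn":    {"latency_ms": 35, "loss_pct": 0.2},
    "lte_backup":       {"latency_ms": 60, "loss_pct": 1.5},
}

def score(metrics: dict) -> float:
    """Lower is better: weight loss heavily, latency lightly."""
    return metrics["latency_ms"] + metrics["loss_pct"] * 100

def best_path(links: dict) -> str:
    return min(links, key=lambda name: score(links[name]))

if __name__ == "__main__":
    print("steering traffic over:", best_path(LINKS))
    # Simulate congestion on the MPLS circuit; traffic is re-routed.
    LINKS["mpls_leased_line"]["loss_pct"] = 4.0
    print("after congestion, steering over:", best_path(LINKS))
```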

3. Virtual Port Channel (vPC)

Port channeling -- a link aggregation technique -- is the process of logically combining multiple physical links to increase bandwidth between two network devices. This basic technology has been around for decades and has been incredibly useful in eliminating bottlenecks between network router and switch uplinks. The problem with traditional port channels arises when you attempt to add redundancy by connecting to multiple upstream devices, known as a dual-homed connection. It's here that Spanning Tree Protocol (STP) rears its ugly head: instead of having multiple, distributed uplink connections with which to forward traffic, STP blocks one of the uplinks and only enables it when the preferred uplink fails.
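The sketch below models that limitation in plain Python. It is purely illustrative; the hashing method and uplink names are assumptions. A port channel hashes each flow across its member links, but when STP blocks one of two dual-homed uplinks, every flow is forced onto the single remaining uplink.

```python
import hashlib

# Illustrative-only model of port-channel load sharing: a hash of the
# flow picks which member link carries it, so usable bandwidth scales
# with the number of forwarding members. Names are hypothetical.

def hash_flow_to_member(src_mac: str, dst_mac: str, members: list) -> str:
    """Pick one member link per flow so frames are not reordered."""
    digest = hashlib.sha256(f"{src_mac}{dst_mac}".encode()).digest()
    return members[digest[0] % len(members)]

if __name__ == "__main__":
    # Dual-homed without vPC: STP blocks the uplink toward the second
    # upstream switch, so only one uplink ever forwards traffic.
    forwarding_uplinks = ["uplink_to_switch_A"]  # uplink_to_switch_B blocked
    flows = [("aa:aa", "bb:bb"), ("aa:ab", "bb:bc"), ("aa:ac", "bb:bd")]
    for f in flows:
        print(f, "->", hash_flow_to_member(*f, forwarding_uplinks))
```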

The idea behind vPC -- or similar virtual chassis technologies -- is to eliminate the need for STP to operate between the downstream switch and the multiple upstream switches it's connected to. The upstream switches are configured to work in conjunction, presenting a single logical port channel to the downstream device even though its member links land on two or more physical switches. This provides the redundancy required on many networks as well as the ability to utilize all available bandwidth across the aggregated links.
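A quick before-and-after comparison, again using hypothetical numbers rather than any specific platform, shows why this matters for throughput: with vPC, both uplinks forward instead of one sitting in an STP-blocked state.

```python
# Hypothetical comparison of usable uplink bandwidth for a dual-homed
# switch, before and after vPC removes the STP-blocked uplink.

UPLINK_GBPS = 10  # assumed per-uplink speed

def usable_bandwidth(forwarding_uplinks: int) -> int:
    return forwarding_uplinks * UPLINK_GBPS

if __name__ == "__main__":
    print("traditional dual-homed (STP blocks one uplink):",
          usable_bandwidth(1), "Gbps usable")
    print("vPC dual-homed (both uplinks forward):",
          usable_bandwidth(2), "Gbps usable")
```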

4. Leaf-spine architectures

A more recent bottleneck that's cropping up in data center networks stems from the increased use of virtualization technologies, which is causing east-west data flows within the data center to rise. Legacy data center networks were designed for bare metal servers that consolidated all application resources within a single server. Therefore, east-west bottlenecks were rarely an issue.

But now, the bandwidth between compute, storage and the network is a major concern. One way to address this bottleneck is to move from a traditional three-tier network design to a leaf-spine architecture in the data center. This technique creates a full mesh of network connectivity between the data center access layer (leaf nodes) and the aggregation layer (spine nodes). Since every leaf connects to every spine, every resource within the data center is the same number of hops away. Additionally, each uplink between the leaf and spine nodes can be utilized to its fullest extent.
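A toy model makes the hop-count and path-count properties easy to see. The fabric size below is an arbitrary assumption for illustration: every leaf-to-leaf path is exactly two hops, and the number of equal-cost paths equals the number of spines.

```python
from itertools import product

# Toy leaf-spine model (sizes are hypothetical): every leaf connects to
# every spine, so any leaf-to-leaf path is leaf -> spine -> leaf.

LEAVES = ["leaf1", "leaf2", "leaf3", "leaf4"]
SPINES = ["spine1", "spine2"]

def paths(src_leaf: str, dst_leaf: str):
    """All leaf -> spine -> leaf paths between two different leaves."""
    return [(src_leaf, spine, dst_leaf) for spine in SPINES]

if __name__ == "__main__":
    for src, dst in product(LEAVES, LEAVES):
        if src < dst:
            p = paths(src, dst)
            print(f"{src} -> {dst}: {len(p)} equal-cost paths, "
                  f"{len(p[0]) - 1} hops each")
```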

As new technologies expand and stretch the capabilities of production networks, they often create bottlenecks on the internet edge, in the WAN, along switch uplinks and within the data center. Fortunately, the number of tools network engineers have to alleviate bottlenecks is growing as well.

About the Author

Andrew Froehlich, President, West Gate Networks

As a highly experienced network architect and trusted IT consultant with worldwide contacts, particularly in the United States and Southeast Asia, Andrew Froehlich has nearly two decades of experience and possesses multiple industry certifications in the field of enterprise networking. Froehlich has participated in the design and maintenance of networks for State Farm Insurance, United Airlines, Chicago-area schools and the University of Chicago Medical Center. He is the founder and president of Loveland, Colo.-based West Gate Networks, which specializes in enterprise network architectures and data center build outs. The author of two Cisco certification study guides published by Sybex, he is a regular contributor to multiple enterprise IT related websites and trade journals with insights into rapidly changing developments in the IT industry.
