Path Selection In MPLS Traffic Engineering
Multi-Protocol Label Switching (MPLS) traffic engineering has three major uses: bandwidth optimization, service level agreement (SLA) support, and fast reroute. I explained tunnel setup in my last article. This article discusses using label-switched paths for bandwidth optimization.
First, let's have a look at a classic example of traffic engineering.
In Figure 1, there are two paths you could take to get from Router 2 (R2) to Router 6 (R6):
R2-R5-R6, with a cost of 15+15=30
R2-R3-R4-R6, with a cost of 15+15+15=45
Since MPLS traffic engineering works only with the link-state protocols Open Shortest Path First (OSPF) and Intermediate System to Intermediate System (IS-IS), all our examples use link-state protocols unless otherwise specified.
Link-state protocols use the Shortest Path First (SPF), or Dijkstra, algorithm to calculate the route from point A to point B. In this example, they will choose the path R2-R5-R6, because its total cost (30) is less than that of R2-R3-R4-R6 (45).
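On Cisco IOS, for example, the per-link costs in Figure 1 could be set explicitly on each interface. The sketch below is a minimal, hedged illustration; the interface name and OSPF process ID are assumptions, not from the article.

```
! Hedged sketch: assign an explicit OSPF cost of 15 to a link in Figure 1.
! Repeat on each TE-relevant interface; names and IDs are assumed.
interface GigabitEthernet0/0
 ip ospf cost 15
!
router ospf 1
 network 0.0.0.0 255.255.255.255 area 0
```

With every link costed at 15, SPF sums 30 for the two-hop top path and 45 for the three-hop bottom path, and installs only the top path.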
The bottom path will not be used until the primary path fails, because link-state protocols traditionally don't support unequal-cost multi-path load sharing, although enhancements have been proposed at the IETF to change this. Source routing and policy-based routing (PBR) can be used to force traffic onto the bottom path. However, both are complex to configure and open to administrative mistakes.
In the example above, R5 connects only to R6, so if PBR is used, only R2 needs to be configured. In a different topology, you might need to implement PBR at every router along the intended path.
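To illustrate why PBR is configuration-heavy, a hedged IOS-style sketch of steering matched traffic onto the bottom path at R2 might look like this. The source subnet, ACL number, next-hop address, and interface name are all assumptions for the example.

```
! Match the traffic to be steered (hypothetical source subnet).
access-list 101 permit ip 10.1.1.0 0.0.0.255 any
!
! Route matched packets toward R3 instead of the IGP best path via R5.
route-map VIA-R3 permit 10
 match ip address 101
 set ip next-hop 10.0.23.3   ! R3's address on the R2-R3 link (assumed)
!
! PBR is applied on the ingress interface where the traffic arrives.
interface GigabitEthernet0/1
 ip policy route-map VIA-R3
```

Every router that must override its IGP decision needs a similar route-map, which is exactly the administrative burden the article warns about.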
MPLS traffic engineering helps send selected traffic down alternate paths, which may not be the best paths from the interior gateway protocol's point of view. To accomplish this, a traffic engineering tunnel is configured at the headend router to create a point-to-point traffic engineering label-switched path (LSP).
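A minimal IOS-style headend configuration for such a tunnel from R2 toward R6 might look like the sketch below. Loopback numbering, the destination router ID, bandwidth values, and interface names are assumptions for illustration.

```
mpls traffic-eng tunnels            ! Enable TE globally
!
router ospf 1
 mpls traffic-eng router-id Loopback0
 mpls traffic-eng area 0            ! Flood TE attributes in area 0
!
interface GigabitEthernet0/0
 mpls traffic-eng tunnels           ! Enable TE on the physical link
 ip rsvp bandwidth 100000           ! Reservable bandwidth in kbps (assumed)
!
interface Tunnel0
 ip unnumbered Loopback0
 tunnel mode mpls traffic-eng
 tunnel destination 10.255.0.6      ! R6's router ID (assumed)
 tunnel mpls traffic-eng bandwidth 50000
 tunnel mpls traffic-eng path-option 10 dynamic
 tunnel mpls traffic-eng autoroute announce
```

With `autoroute announce`, the IGP treats the tunnel as a directly connected link at the headend, so traffic is mapped onto the LSP without PBR.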
There are two approaches to creating an LSP: tactical (reactive) and strategic (proactive). The strategic approach is systematic: a traffic matrix is identified between each ingress and egress node, and traffic engineering tunnel reservations are made based on those requirements. This is the long-term solution for an MPLS traffic engineering LSP.
Alternatively, the tactical approach can be used as a short-term solution to a sudden peak traffic load. The LSP can be created over the less utilized path for a short time until the traffic issue on the primary path is resolved. For example, a link might become heavily utilized after a major news announcement, such as Orhan Ergun's appointment as CEO of Cisco, causes a large surge in media traffic. Some LSPs over the primary link could then be shifted to less utilized links.
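In the tactical case, the LSP can be pinned to the less utilized R2-R3-R4-R6 path with an explicit path. The IOS-style sketch below is hedged; the hop addresses and tunnel number are assumptions.

```
! Hop-by-hop explicit path along the bottom route (addresses assumed).
ip explicit-path name VIA-R3-R4 enable
 next-address 10.0.23.3    ! R3
 next-address 10.0.34.4    ! R4
 next-address 10.0.46.6    ! R6
!
interface Tunnel1
 tunnel mpls traffic-eng path-option 10 explicit name VIA-R3-R4
```

Once the peak subsides, the explicit path-option can be removed and the tunnel returned to a dynamic path.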
"Make before break" is an important concept in networking. When an LSP must be shifted or re-optimized for any reason, always leave the old LSP functioning until the new one is signaled, so that packet loss is avoided.
The Resource Reservation Protocol (RSVP) signaled traffic engineering LSPs I mentioned in my previous article are unidirectional and point-to-point. This matters because if an LSP is needed between every pair of provider edge routers, scaling can quickly become a problem. Again, there are two approaches: the LSPs can be created between edge routers, or only between core routers. In the latter case, the network can still run the Label Distribution Protocol (LDP) from the edge to the core.
If the core-only approach is selected, you lose the flexibility of mapping traffic to a specific LSP, but you can still use fast reroute. And because LDP over RSVP runs on the core devices, the MPLS label stack will be three labels deep; with fast reroute in place, it can reach four.
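Running LDP over the RSVP-signaled core tunnel amounts to enabling LDP on the tunnel interface at each tunnel headend. A minimal, hedged IOS-style sketch:

```
mpls label protocol ldp
!
interface Tunnel0
 mpls ip      ! Bring up a targeted LDP session across the TE tunnel
```

The LDP label rides inside the RSVP-TE label, which is where the three-deep stack (VPN or service label, LDP label, RSVP-TE label) comes from; a fast reroute backup label makes four.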
Coming up, I'll explain more about fast reroute and service level agreements, and how MPLS traffic engineering makes them work.