Orhan Ergun

Using MPLS Traffic Engineering To Meet SLAs

Learn how to control traffic levels and use the autoroute and forwarding adjacency features in MPLS traffic engineering.

As I discussed in my previous article, Multi-Protocol Label Switching (MPLS) traffic engineering has three main uses. These are to optimize bandwidth, to support a service-level agreement (SLA), or to enable fast reroute. I already covered how label-switched paths are used for bandwidth optimization. In this piece, I'll explain MPLS traffic engineering in the context of SLAs.

Traffic engineering can be used to meet an SLA. Not all traffic is the same, and not all customers can get the same service. This is business, and there is no free lunch, of course.

Traditionally, voice and video traffic were carried over circuit-based TDM links. These applications are very delay and loss sensitive, so we need to design our packet-switching networks to ensure that they are adequately supported.

MPLS traffic engineering and quality of service (QoS) can both be used -- either alone or together -- to accomplish this goal. These technologies are sometimes confused, but they are independent subjects and not mutually exclusive. Note, however, that bandwidth reservation for traffic engineering tunnels is made only in the control plane of the devices; by itself, it does not police or queue traffic in the data plane.

As an example, you can have a 100 Mbit/s link between point A and point B. Assume you reserve bandwidth for two label-switched paths (LSPs), with 60 Mbit/s and 40 Mbit/s bandwidth requirements. From the ingress, 80 Mbit/s of traffic can still be sent over the LSP signaled for 60 Mbit/s. Since, by default, MPLS traffic engineering tunnels are not aware of data plane actions, the 20 Mbit/s of traffic exceeding the limit will be dropped. Some of that dropped traffic might be very important, so it's in our best interest to protect it.
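To make this concrete, a Cisco IOS-style configuration sketch for the two reservations might look like the following. The tunnel numbers, the 10.0.0.2 tail-end address, and the interface names are hypothetical; bandwidth values are in kbit/s.

```
! Enable MPLS TE globally and advertise TE information in the IGP (OSPF shown)
mpls traffic-eng tunnels
!
router ospf 1
 mpls traffic-eng router-id Loopback0
 mpls traffic-eng area 0
!
! The 100 Mbit/s link must have reservable RSVP bandwidth configured
interface GigabitEthernet0/0
 mpls traffic-eng tunnels
 ip rsvp bandwidth 100000
!
! LSP 1: reserve 60 Mbit/s
interface Tunnel1
 ip unnumbered Loopback0
 tunnel mode mpls traffic-eng
 tunnel destination 10.0.0.2
 tunnel mpls traffic-eng bandwidth 60000
 tunnel mpls traffic-eng path-option 1 dynamic
!
! LSP 2: reserve 40 Mbit/s
interface Tunnel2
 ip unnumbered Loopback0
 tunnel mode mpls traffic-eng
 tunnel destination 10.0.0.2
 tunnel mpls traffic-eng bandwidth 40000
 tunnel mpls traffic-eng path-option 1 dynamic
```

The reserved values are purely control-plane bookkeeping: RSVP-TE admits or rejects the LSPs against the 100 Mbit/s pool, but nothing here stops the ingress from pushing 80 Mbit/s into Tunnel1.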

To make traffic engineering tunnels aware of the data plane traffic, the auto bandwidth feature of traffic engineering might be used. When auto bandwidth is enabled, the tunnel checks its traffic periodically and signals the new LSP with the "make before break" function. If a new LSP is signaled in this way, only the 80 Mbit/s LSP can survive over the 100 Mbit/s link. There is not enough bandwidth for the 40 Mbit/s LSP.
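A minimal IOS-style sketch of enabling auto bandwidth on a tunnel follows; the timer and bounds are illustrative. The frequency is in seconds, and the min/max values (in kbit/s) keep the resignaled reservation within sane limits.

```
interface Tunnel1
 ! Re-measure the tunnel's actual traffic rate every 300 seconds and
 ! resignal the LSP (make-before-break) to match, between 10 and 90 Mbit/s
 tunnel mpls traffic-eng auto-bw frequency 300 min-bw 10000 max-bw 90000
```

With this in place, the 60 Mbit/s reservation in the earlier example would grow toward the measured 80 Mbit/s -- and, on a single 100 Mbit/s link, crowd out the 40 Mbit/s LSP exactly as described above.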

If there is an alternative link, 40 Mbit/s of traffic can be shifted to that link. Otherwise, circuit capacity must be increased or a new circuit must be purchased. If there is no alternate link and no time to bring in a new circuit, QoS could potentially be configured to protect critical traffic. DiffServ QoS with MPLS traffic engineering is mature and commonly used by service providers in these cases.
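As a sketch of that last option, a DiffServ-style MQC policy could protect the critical traffic on the congested link. The class names, the EXP value, and the 40% guarantee are assumptions for illustration, not a recommendation.

```
! Classify critical traffic by the topmost MPLS EXP bits
class-map match-any CRITICAL
 match mpls experimental topmost 5
!
! Give critical traffic a priority queue; everything else shares the rest
policy-map PROTECT-CRITICAL
 class CRITICAL
  priority percent 40
 class class-default
  fair-queue
!
interface GigabitEthernet0/0
 service-policy output PROTECT-CRITICAL
```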

But how can one MPLS traffic engineering LSP beat another LSP? This is accomplished with the priority feature of the tunnels. Using priority, some LSPs can be made more important than others. For one LSP to preempt another, its setup priority value must be numerically smaller (that is, better -- 0 is the best priority and 7 the worst) than the hold priority value of the LSP being preempted.
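In IOS-style configuration, setup and hold priorities are set per tunnel; the values here are illustrative:

```
! Setup priority 2, hold priority 2: this LSP can preempt lower-priority LSPs
interface Tunnel1
 tunnel mpls traffic-eng priority 2 2
!
! Setup priority 4, hold priority 4: preemptable by Tunnel1 (2 < 4)
interface Tunnel2
 tunnel mpls traffic-eng priority 4 4
```

When bandwidth runs short, Tunnel1's setup priority of 2 beats Tunnel2's hold priority of 4, so Tunnel2 is torn down or rerouted to make room.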

Once the path is computed and signaled, traffic does not automatically follow the traffic engineering path. By default, it still follows the underlying interior gateway protocol (IGP) path. Since traffic engineering works only with the link-state protocols Open Shortest Path First (OSPF) and Intermediate System to Intermediate System (IS-IS), traffic follows the shortest path from a cost point of view.

In the first article of this series, I mentioned some methods for sending traffic into the MPLS traffic engineering LSP. These were static routing, policy-based routing, class-of-service-based tunnel selection (CBTS), policy-based tunnel selection (PBTS), autoroute, and forwarding adjacency.

Static routing, policy-based routing and CBTS are static methods and can be cumbersome to manage. But to send specific, important traffic into tunnels, class-based tunnel selection can be a good option. Based on the EXP bits in the label stack, traffic can be classified and sent to an LSP that is QoS-enabled for protection.
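A hedged IOS-style sketch of CBTS: both tunnels must share the same headend and tail end, and the EXP value chosen here is an assumption.

```
! Send traffic marked EXP 5 into Tunnel1
interface Tunnel1
 tunnel mpls traffic-eng exp 5
!
! All remaining EXP values fall through to Tunnel2
interface Tunnel2
 tunnel mpls traffic-eng exp default
```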

Autoroute and forwarding adjacency, on the other hand, are dynamic methods to send traffic into traffic engineering LSPs.

By default, the IGP resolves a destination prefix's next hop to a directly connected interface along the shortest path. When the autoroute feature is enabled, the next hop automatically becomes the destination address at the tail end of the tunnel. The drawback of this approach is that there is no traffic classification or separation, so all the traffic -- regardless of importance -- is sent through the LSP.

Once MPLS traffic engineering is enabled and autoroute is used, traffic can be inserted only from the ingress node (label-switched router). Any LSR other than the ingress point is unable to insert traffic into the traffic engineering LSP. Thus autoroute can only affect the path selection of the ingress LSR.
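Enabling autoroute is a single command per tunnel in IOS-style configuration:

```
interface Tunnel1
 ! Announce this tunnel to the IGP's SPF calculation on the headend only;
 ! all prefixes reachable beyond the tail end now resolve via the tunnel
 tunnel mpls traffic-eng autoroute announce
```

Note that this influences only the headend's own routing table; other routers in the domain remain unaware of the tunnel.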

Forwarding adjacency
What if we want traffic to enter the tunnel from any node in the domain -- that is, for routers other than the headend to include the tunnel in their shortest-path calculations? Then the MPLS forwarding adjacency functionality might be used.

Once we enable this feature, the MPLS traffic engineering tunnel is advertised into the IGP as a "point-to-point link," so every router in the domain can consider it during SPF. There is a catch: traffic engineering tunnels are unidirectional, but a link-state IGP requires a bidirectional adjacency check before it will use a link. A tunnel must therefore be configured in the reverse direction as well, so that the "link" operates the same way on the return path.
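An IOS-style sketch -- the same configuration must exist on the tunnel in each direction, and the OSPF cost shown is a hypothetical value chosen to make the advertised link attractive:

```
! Configure on the tunnels at BOTH the headend and the tail end
interface Tunnel1
 tunnel mpls traffic-eng forwarding-adjacency
 ip ospf cost 15
```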

Orhan Ergun, CCIE, CCDE, is a network architect mostly focused on service providers, data centers, virtualization and security. He has more than 10 years in IT, and has worked on many network design and deployment projects.
