Cisco EIGRP OTP Connects Networks Across Provider Infrastructure

The EIGRP routing protocol just got more interesting with the addition of Cisco's Over the Top feature. Here's how it works, plus its pros and cons.

Cisco Systems has been hoisting the EIGRP flag up high of late. Enhanced Interior Gateway Routing Protocol--a routing protocol, of all things (yawn)--wouldn't seem like anything to get too excited about, what with all the hoopla surrounding software-defined networking, overlays and network virtualization. And yet Cisco is clearly committed to EIGRP.

First of all, much of EIGRP has moved from proprietary status to open. Open EIGRP was announced earlier this year, when Cisco released significant portions of the EIGRP specification to the open source community in the form of an IETF draft. This move met with mixed reviews and some sideways glances, but it demonstrates Cisco's desire to see EIGRP used even more broadly in the industry. With the widespread acceptance of the OSPF routing protocol, some might wonder what the point is, but EIGRP fans can attest to its flexibility and scalability--EIGRP really stands up.


Cisco is giving EIGRP users an interesting feature, which was announced in June at Cisco Live. EIGRP Over the Top (OTP) allows EIGRP routers to peer across a service provider infrastructure without the SP's involvement. In fact, with OTP, the provider won't see customer routes at all. EIGRP OTP acts as a provider-independent overlay that transports customer data between the customer's routers.

To the customer, the EIGRP domain is contiguous. A customer's EIGRP router sits at the edge of the provider cloud and peers with another EIGRP router at a different location across the cloud. Learned routes carry a next hop of the remote customer router--not the provider. The good news for service providers is that customers can deploy EIGRP OTP without any involvement on their part.

Inside EIGRP OTP

OTP is a genuinely new feature that Cisco has created for EIGRP, so let's peek under the hood at how it works. There are a few key elements:

• Neighbors are discovered statically. There's no auto-discovery mechanism here. For customers thinking about trying to build an EIGRP mesh across a provider cloud and shuddering at the thought of manually configuring n-1 relationships per router, it's not as bad as all that. OTP does not require a mesh of peering relationships to support a full mesh topology. (See the next point.)

• An OTP mesh scales by use of a route reflector (RR). When designing the EIGRP OTP overlay, a customer selects a router to act as the RR. When additional customer routers are added to the OTP overlay, EIGRP is configured on each new router to peer with the RR. The RR takes route advertisements in and reflects them out to all of the other EIGRP customer routers in the OTP overlay. Critically, the RR preserves the next hop attribute of each route. This means the RR is not going to be the hub of a hub-and-spoke forwarding topology; instead, a full forwarding mesh is formed. For example, let's say we've got three routers: R1, R2 and R3. R1 is the RR, peered with R2 and R3. When R2 advertises a route, let's say 10.2.2.0/24, to R1, R1 reflects 10.2.2.0/24 to R3, preserving R2 as the next hop of that route. When R3 needs to talk to 10.2.2.0/24, it forwards directly to R2, not through R1, even though R1 was the router that reflected the route.

• Metrics are preserved across the service provider cloud. In other words, the EIGRP domain treats these neighbors and links just like any other EIGRP neighbors and links. Therefore, with OTP, a customer ends up with a contiguous EIGRP domain across the SP cloud. That eliminates a common scenario of isolated EIGRP domains at each customer office being redistributed into the SP cloud, and then redistributed again from the SP cloud into remote-office EIGRP domains. OTP also eliminates the scenario of multipoint GRE tunnels (for example, with DMVPN and NHRP) being nailed up as a manually created overlay just to run EIGRP across the cloud. An OTP configuration is comparatively simple to code in IOS.
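To give a sense of what that looks like in practice, here's a rough sketch of an OTP configuration using named-mode EIGRP. The addresses, interface names and autonomous system number are invented for illustration, and the exact syntax can vary by IOS release, so treat this as a sketch rather than a recipe:

! Route reflector (R1) -- listens for remote OTP neighbors across the SP cloud
router eigrp OTP
 address-family ipv4 unicast autonomous-system 100
  af-interface GigabitEthernet0/0
   no next-hop-self        ! keep the originating customer router as the next hop
   no split-horizon        ! allow routes learned on this interface to be reflected back out
  exit-af-interface
  remote-neighbors source GigabitEthernet0/0 unicast-listen lisp-encap
  network 10.1.1.0 0.0.0.255

! Remote customer router (R2) -- statically points at the RR's WAN-facing address
router eigrp OTP
 address-family ipv4 unicast autonomous-system 100
  neighbor 192.0.2.1 GigabitEthernet0/0 remote 100 lisp-encap
  network 10.2.2.0 0.0.0.255

Because the RR end listens for remote neighbors rather than naming each one, adding another site should only require a neighbor statement on the new spoke pointing at the RR.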

If you're thinking this through, you might be wondering how the EIGRP customer routers can push traffic for remote subnets across the SP cloud when those subnets aren't being advertised to the SP. The answer is that traffic between the customer EIGRP routers is encapsulated in LISP packets. Therefore, all the SP needs to know is how to get from customer router to customer router to carry the LISP packets flowing between them; the SP will know this by virtue of the directly connected routes used to uplink the customer routers to the SP cloud.
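If you want to sanity-check that behavior from the CLI, the standard EIGRP and routing-table show commands apply; there's no OTP-specific command needed, though exact output will vary by release:

show ip eigrp neighbors detail   ! the statically defined OTP peer should be up across the SP cloud
show ip route eigrp              ! learned prefixes should list the remote customer router, not the SP, as the next hop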


Cisco chose LISP to encapsulate traffic between the customer routers because it provides a critical design capability that OTP needs: stateless tunneling in an any-to-any mesh topology. It's worth pointing out that LISP is used here only as a data plane transport; EIGRP OTP has no dependency on the LISP control plane or mapping system. Cisco just happens to be using LISP as the encapsulation format to smuggle traffic across the SP cloud between customer routers.

A consideration for secure environments is data privacy across the SP cloud, and LISP has no native encryption function. However, OTP can be configured to work with GET VPN to encrypt the LISP traffic flowing between the customer EIGRP routers.
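Conceptually, that means running the customer routers as GET VPN group members and applying the crypto map to the WAN-facing uplink, so the LISP-encapsulated traffic is encrypted on its way into the SP cloud. Here's a rough, hypothetical group-member sketch; the group name, key server address and interface are invented, and the supported OTP/GET VPN combination is worth confirming in Cisco's documentation:

crypto gdoi group OTP-GETVPN
 identity number 1234
 server address ipv4 192.0.2.100       ! hypothetical key server
!
crypto map OTP-CRYPTO 10 gdoi
 set group OTP-GETVPN
!
interface GigabitEthernet0/0           ! WAN-facing uplink toward the SP cloud
 crypto map OTP-CRYPTO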

For shops interested in multitenancy, EIGRP OTP supports it via VRFs or Easy Virtual Network (EVN), mapped (as best I can tell) to LISP's own multitenancy support. The LISP header includes fields--the instance ID, for example--that can tag packets as belonging to a particular tenant (think VPNs). I'm speculating a little on this detail, as I don't have any hard data on exactly how it works.


