Data Center Interconnect Design Options

An analysis of the various technologies available for Layer 2 DCI, including Cisco Overlay Transport Virtualization.

Daniel Dib

March 3, 2016


In my previous blog, I talked about some terminology and design considerations for L2 data center interconnects. In this blog, I'll look at the different options for DCI and how well they fulfill our design requirements. I will focus on Layer 2 DCI technologies because they are more complex to implement and carry more design considerations than Layer 3 DCI technologies.

If an organization owns its own fiber or rents dark fiber, then any DCI technology is supported; this is also the most expensive option. A dark fiber or dense wavelength division multiplexing (DWDM)-based solution is often needed to extend storage between two data centers.

Any DCI technology, such as Ethernet over MPLS, Virtual Private LAN Service, or Cisco Overlay Transport Virtualization, can be run over fiber, and fiber also supports running link bundles between the data centers.

Before moving on to DCI technologies, let's look at two clustering technologies that can be used to remove the physical loop that the Spanning Tree Protocol (STP) would otherwise have to block. These technologies -- virtual switching system (VSS) and virtual Port Channel (vPC) -- decrease the role of STP, but do not completely remove it from the network.

VSS turns two physical devices into a single logical device by sharing the control plane between the devices. With VSS, a loop-free topology is created between the data centers. VLANs can then be extended over the port channel between the data centers.
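
As a rough sketch, the core of a VSS configuration on a pair of Catalyst 6500s looks something like the following; the domain number and port channel are example values, and the second chassis would use switch number 2 and virtual link 2:

    ! Same virtual domain on both chassis; unique switch number per chassis
    switch virtual domain 100
     switch 1
    !
    ! Dedicate a port channel as the virtual switch link (VSL)
    interface Port-channel1
     switch virtual link 1
    !
    interface TenGigabitEthernet1/1
     channel-group 1 mode on
    !
    ! Convert the chassis to virtual mode (this reloads the switch)
    switch convert mode virtual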

[Figure: loop-free data center interconnect using VSS (DCI part 2)]

It’s also possible to create the same design with Nexus switches and vPC, which makes two Nexus switches appear as one to neighboring devices, but the switches do not actually share the control plane.
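
A minimal vPC sketch on NX-OS might look like this; the domain ID, keepalive addresses, and port channel numbers are example values:

    feature vpc
    !
    vpc domain 10
      ! Keepalive runs over a separate path, typically the mgmt network
      peer-keepalive destination 192.168.1.2 source 192.168.1.1
    !
    ! Peer link between the two Nexus switches
    interface port-channel1
      switchport mode trunk
      vpc peer-link
    !
    ! Member port channel; the same vPC number is configured on both peers
    interface port-channel20
      switchport mode trunk
      vpc 20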

vPC and VSS create loop-free topologies, but it's important to note that they are simply VLAN extension technologies without any optimizations designed for DCI. Neither helps control unicast flooding: if the destination MAC address is not in the MAC address table, these technologies will flood the frame, no questions asked. They also don't help with broadcast reduction at the data center edge, since they implement neither proxying of ARP messages nor policing of broadcast, unknown unicast, and multicast (BUM) traffic. Manual policers would have to be implemented, and these are normally applied per port rather than per VLAN.
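
For example, a manual per-port policer on NX-OS might look like the following, with the 1 percent thresholds being arbitrary example values:

    interface Ethernet1/1
      ! Rate-limit broadcast, multicast, and unknown unicast traffic
      ! to 1% of the interface bandwidth; applies per port, not per VLAN
      storm-control broadcast level 1.00
      storm-control multicast level 1.00
      storm-control unicast level 1.00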

Now, let's look at DCI technologies.

EoMPLS

Ethernet over MPLS (EoMPLS) is a point-to-point technology that uses MPLS to encapsulate Ethernet frames. Any frame coming in the pseudowire on one side gets forwarded out the pseudowire on the other side. All frames can be forwarded regardless of VLAN (port mode), or VLANs can be selectively forwarded (VLAN mode). An organization can either deploy EoMPLS itself or buy a circuit from a service provider. Because EoMPLS is a transparent service, technologies such as VSS and vPC can be supported over the pseudowire.
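
On a classic IOS PE router, a pseudowire is configured with an xconnect on the attachment circuit; the peer addresses and VC IDs below are example values:

    ! Port mode: every frame received on the interface is carried across
    interface GigabitEthernet0/1
     xconnect 10.0.0.2 100 encapsulation mpls
    !
    ! VLAN mode: only VLAN 200 is carried across
    interface GigabitEthernet0/2.200
     encapsulation dot1Q 200
     xconnect 10.0.0.2 200 encapsulation mpls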

Does EoMPLS fare any better than VSS or vPC as a DCI? It arguably performs worse, since it sends out anything that comes in. It does not perform MAC learning at all, so anything that enters the pseudowire is guaranteed to exit the pseudowire on the other side. It also doesn't do anything to reduce broadcasts or to police BUM traffic at the data center edge. The only way to police traffic would be to implement a policer on the port leading to the pseudowire.

VPLS

Virtual Private LAN Service (VPLS) is a multipoint-to-multipoint technology that also uses MPLS for encapsulating frames. From a customer perspective, the VPLS cloud looks like one big switch that forwards frames based on the MAC address. It's also possible for an organization to deploy VPLS in its own network. However, VPLS doesn't help control unicast flooding, reduce broadcasts, or police BUM traffic.
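
As a sketch, classic IOS VPLS uses a virtual forwarding instance (VFI) with a pseudowire to each remote PE; the VFI name, VPN ID, and neighbor addresses are example values:

    ! One VFI per extended VLAN, with a pseudowire per remote PE
    l2 vfi DCI-VPLS manual
     vpn id 100
     neighbor 10.0.0.2 encapsulation mpls
     neighbor 10.0.0.3 encapsulation mpls
    !
    ! Bind the extended VLAN to the VFI
    interface Vlan100
     xconnect vfi DCI-VPLS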

OTV

Overlay Transport Virtualization (OTV) is one of the few technologies designed for DCI. It's a Cisco proprietary protocol that can run over any transport as long as there is IP connectivity. Preferably, there should be support for multicast, but it can run in unicast mode as well.

OTV helps control unicast flooding because it works on the assumption that there are no silent hosts, so all hosts should be known. If all hosts are known, then there is no need to flood frames. Flooding can be selectively enabled for silent hosts.
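
A minimal OTV sketch on a Nexus 7000 looks roughly like this; the site identifier, VLAN ranges, multicast groups, and interfaces are example values:

    feature otv
    !
    otv site-vlan 99
    otv site-identifier 0x1
    !
    interface Overlay1
      ! The join interface provides IP reachability to the remote site
      otv join-interface Ethernet1/1
      ! Multicast control group; a unicast adjacency-server mode also exists
      otv control-group 239.1.1.1
      otv data-group 232.1.1.0/28
      otv extend-vlan 100-110
      no shutdown
    !
    ! Selectively re-enable flooding for a known silent host
    otv flood mac 0000.1111.2222 vlan 100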

OTV devices also help reduce broadcasts: they can snoop ARP responses and cache them locally, so that when the next host ARPs for the same IP address, the mapping is already known by the OTV device, which answers locally.

OTV does not have built-in policers for BUM traffic at the data center edge, but since it separates STP domains and does not flood unknown unicasts, a Layer 2 storm would not have the same impact as it would with the other technologies.

VXLAN

Virtual Extensible LAN (VXLAN) is a technology used to scale beyond 4096 VLANs -- its 24-bit segment ID allows roughly 16 million segments -- and to build L2 networks over an L3 infrastructure, a so-called overlay protocol. Since VXLAN can ride over any IP transport, it's possible to extend L2 domains and use VXLAN as a DCI. With VXLAN, unicast flooding is somewhat contained by having the tunnel endpoints join a multicast group, but this doesn't truly prevent flooding of frames.
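
On NX-OS, for example, flood-and-learn VXLAN maps each VNI to a multicast group on the VTEP, so BUM traffic is flooded to every endpoint that has joined that group; the VLAN, VNI, and group below are example values:

    feature nv overlay
    feature vn-segment-vlan-based
    !
    ! Map VLAN 100 to VXLAN segment (VNI) 10100
    vlan 100
      vn-segment 10100
    !
    ! The VTEP: BUM traffic for the VNI is flooded via the multicast group
    interface nve1
      no shutdown
      source-interface loopback0
      member vni 10100 mcast-group 239.1.1.100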

Depending on the implementation, VXLAN may or may not support caching of ARP responses, and where it does, it's an optional feature that has to be enabled for broadcast reduction. VXLAN does not do any policing of BUM traffic.
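
As one example of such an optional ARP-caching feature, NX-OS exposes a per-VNI ARP suppression knob, though it assumes the BGP EVPN control plane rather than plain flood-and-learn; a sketch under that assumption:

    interface nve1
      ! Requires EVPN: hosts are learned via BGP instead of flooding
      host-reachability protocol bgp
      member vni 10100
        ! Optional: the VTEP answers ARP requests from its local cache
        suppress-arp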

Before extending L2 between your data centers, make sure that your business drivers and recovery objectives actually require this added complexity. For high availability, an organization should use a Layer 3 DCI. But you may not be able to avoid the complexity of an L2 DCI, either because of poorly designed applications or because of virtualized workloads.

In that case, OTV is your best current option. Future releases of VXLAN and Ethernet VPN (EVPN) may prove to be as good as OTV or at least good enough for L2 DCI. Extending L2 between data centers will always add the risk of both data centers collapsing at the same time. Choose a DCI design that is validated to meet your needs.


About the Author

Daniel Dib

Senior Network Architect, Conscia Netsafe

Daniel Dib is a senior network architect at Conscia Netsafe. He is CCIE #37149 in routing and switching and also holds an Associate's degree in computer networking. Daniel works mostly with network designs and acts as a subject matter expert for his customers. He has expert knowledge in BGP, IGPs, multicast, and fast convergence. Daniel is currently working on achieving his CCDE certification. He is active in social media and believes in giving back to the community. You can read Daniel's other content at lostintransit.se and follow him on Twitter @danieldibswe.
