Data Center Interconnect Design Considerations

Avoid problems by taking these factors into account when designing Layer 2 DCIs.

Daniel Dib

February 10, 2016

4 Min Read

These days, a lot of the services we consume are served from data centers. Web-scale services like Netflix, YouTube and Gmail are likely served from a data center. It’s also becoming increasingly common to host applications like email in the cloud, which means that the data center is more important than ever.

To provide resilient services, it's common to have multiple -- at least two -- data centers in different locations. The locations can be either in the vicinity of each other or in totally different regions. To connect the data centers, organizations typically use a data center interconnect (DCI).

[Figure 1: Data center interconnect]

The DCI can leverage multiple technologies, but first let’s cover some of the terminology and design factors that will guide technology selection for the DCI.

Recovery time objective (RTO) – The RTO defines how long a service is allowed to be down. It will drive the choice between an active/active and an active/passive data center design, and the number should be defined in the business continuity (BC) plan. The lower the RTO, the more complex the DC design and the DCI will be.

Recovery point objective (RPO) – The RPO defines how much data loss is acceptable in case of a failure. This will also affect the storage design. Does the storage traffic have to be actively synchronized to both DCs or not? Depending on the value of the RPO, it may be enough to have storage locally and backup data as opposed to actively writing to storage in two locations. This will determine if the storage network has to be extended, which is normally done outside of the DCI because Fibre Channel (FC) is commonly used for storage data.

Before creating a data center interconnect design, it's important to understand the business requirements and drivers and to get accurate numbers for the RTO and RPO. The design should meet the business requirements without gold plating -- that is, without making it more complex than it needs to be. Do not extend Layer 2 across the DCI unless there is a business requirement to do so. A more complex design will inevitably lead to higher operational expenditure (OPEX) and more advanced troubleshooting in the event of failures.

Since a Layer 3 DCI is more common and more straightforward to design than a Layer 2 one, the rest of this blog will focus on a Layer 2 design. Here are some of the design considerations and common issues when designing a Layer 2 DCI:

Ingress traffic optimization – How does traffic get directed into the primary DC? This normally involves manipulating BGP attributes or advertising a longer prefix over the primary DC than the secondary DC.
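As a sketch of the first approach (the AS numbers and neighbor address here are invented for illustration), the secondary DC's edge router could prepend its AS to make that path look less attractive to external peers, in Cisco IOS-style syntax:

```text
! Secondary DC edge router: prepend the local AS so the path
! through this DC appears longer to external BGP peers.
route-map PREPEND-OUT permit 10
 set as-path prepend 65001 65001 65001
!
router bgp 65001
 neighbor 192.0.2.1 remote-as 64500
 neighbor 192.0.2.1 route-map PREPEND-OUT out
```

The alternative is to advertise a more specific (longer) prefix from the primary DC, since longest-prefix match wins regardless of AS-path length.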

Localization of gateways – Routed traffic should not have to traverse the DCI to reach its gateway. This can add a lot of latency to the traffic.
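As an illustration (the addresses and group numbers are invented), gateway localization is often achieved by configuring the same FHRP virtual IP in both DCs and having the DCI edge filter the FHRP hellos so each DC answers locally:

```text
! The same HSRP virtual IP is configured on the VLAN interface
! in each DC, so hosts use a local gateway on either side.
interface Vlan100
 ip address 10.0.100.2 255.255.255.0
 standby 1 ip 10.0.100.1
 standby 1 priority 110
 standby 1 preempt
! The DCI edge must filter HSRP hellos (e.g. with an ACL, or a
! feature such as OTV's FHRP isolation) so the two DCs never
! elect a single active gateway across the interconnect.
```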

Controlling unicast flooding – Unknown unicasts should not be sent across the DCI. Ideally there should be no silent hosts; if there are, create exceptions only for these.

Broadcast reduction at the DC edge – Broadcasts should not be freely flooded over the DCI. A proper DCI technology should implement some form of proxy ARP.

Policing of BUM traffic at the DC edge – Broadcast, unknown unicast and multicast traffic should be policed at the edge to prevent a Layer 2 storm from taking out both DCs at the same time.
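As a sketch (the thresholds are illustrative, not recommendations), Cisco IOS-style storm control on the DCI-facing interface could look like:

```text
! DCI-facing interface: rate-limit BUM traffic as a percentage
! of interface bandwidth so a Layer 2 storm in one DC cannot
! saturate the interconnect and spill into the other DC.
interface GigabitEthernet0/1
 storm-control broadcast level 1.00
 storm-control multicast level 2.00
 storm-control unicast level 5.00
 storm-control action trap
```

The right thresholds depend on the normal BUM traffic profile of the environment; set them from a measured baseline rather than from a generic example like this one.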

Reducing the ping-pong effect – Depending on how load balancing and security are set up, traffic may travel over the DCI multiple times, adding latency with each crossing.

Stretching clusters – Should load balancers and firewalls form a cluster across the DCI or not? When a cluster is formed over the DCI, split brain and other serious faults can occur, which may then bring down both DCs.

In my next blog, I'll examine different technologies such as dense wavelength division multiplexing (DWDM), Ethernet over MPLS, Virtual Private LAN Service, Cisco Overlay Transport Virtualization, and VXLAN and their impact on DCI design.


About the Author

Daniel Dib

Senior Network Architect, Conscia Netsafe

Daniel Dib is a senior network architect at Conscia Netsafe. He is CCIE #37149 in routing and switching and also holds an associate degree in computer networking. Daniel works mostly with network designs and acts as a subject matter expert for his customers. He has expert knowledge in BGP, IGPs, multicast and fast convergence. Daniel is currently working on achieving his CCDE certification. He is active in social media and believes in giving back to the community. You can read Daniel's other content at lostintransit.se and follow him on Twitter @danieldibswe.
