Network Computing is part of the Informa Tech Division of Informa PLC
Data Center Interconnect Design Considerations
These days, most of the services we consume are delivered from data centers. Web-scale services such as Netflix, YouTube, and Gmail all run out of data centers, and it's becoming increasingly common to host enterprise applications like email in the cloud, which makes the data center more important than ever.
To provide resilient services, it's common to have multiple -- at least two -- data centers in different locations. The locations can be either in the vicinity of each other or in totally different regions. To connect the data centers together, organizations typically use a data center interconnect (DCI).
The DCI can leverage multiple technologies, but first let’s cover some of the terminology and design factors that will guide technology selection for the DCI.
Recovery time objective (RTO) – The RTO defines the amount of time a service can be down. The RTO will impact the selection of an active/active or active/passive data center design. This number should be defined in the business continuity (BC) plan. DC design and the DCI will be more complex for a lower RTO.
Recovery point objective (RPO) – The RPO defines how much data loss is acceptable in case of a failure. This will also affect the storage design. Does the storage traffic have to be actively synchronized to both DCs or not? Depending on the value of the RPO, it may be enough to keep storage local and back up data, as opposed to actively writing to storage in two locations. This will determine if the storage network has to be extended, which is normally done outside of the DCI because Fibre Channel (FC) is commonly used for storage data.
Before creating a data center interconnect design, it’s important to understand the business requirements and business drivers and to get accurate numbers for the RTO and RPO. The design should meet the business requirements without gold plating -- that is, without making the design more complex than it needs to be. Do not extend Layer 2 across the DCI unless there is a business requirement to do so. A more complex design will inevitably lead to higher operational expenditure (OPEX) and more advanced troubleshooting in the event of failures.
Since a Layer 3 DCI is more common and more straightforward than Layer 2, the rest of this blog will focus on a Layer 2 design. Here are some of the design considerations and common issues when designing a Layer 2 DCI:
Ingress traffic optimization – How does traffic get directed into the primary DC? This normally involves manipulating BGP attributes or advertising a longer prefix over the primary DC than the secondary DC.
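As an illustrative sketch of both techniques, the secondary DC can prepend its own AS so its path looks less attractive, while the primary DC advertises more-specific prefixes alongside the aggregate. The AS numbers, prefixes, and neighbor address below are placeholders, in Cisco IOS-style syntax:

```
! Secondary DC edge: AS-path prepending makes this path look longer
route-map DCI-PREPEND permit 10
 set as-path prepend 65001 65001 65001
!
router bgp 65001
 network 198.51.100.0 mask 255.255.254.0      ! /23 aggregate only
 neighbor 203.0.113.1 remote-as 64500
 neighbor 203.0.113.1 route-map DCI-PREPEND out

! Primary DC edge: advertise the aggregate plus longer (more specific)
! prefixes, which win best-path selection regardless of AS-path length
router bgp 65001
 network 198.51.100.0 mask 255.255.254.0      ! /23 aggregate
 network 198.51.100.0 mask 255.255.255.0      ! /24 more-specifics
 network 198.51.101.0 mask 255.255.255.0
```

Note that more-specific advertisements override prepending entirely, since prefix length is evaluated before any BGP attribute, so the two methods are usually alternatives rather than complements.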
Localization of gateways – Routed traffic should not have to traverse the DCI to reach its gateway. This can add a lot of latency to the traffic.
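One common way to localize gateways is a distributed anycast gateway, where switches in both DCs answer for the same gateway IP and virtual MAC so routed traffic always exits locally. As a sketch in Cisco NX-OS-style syntax (the MAC and addressing are illustrative):

```
! Same configuration applied in both data centers
fabric forwarding anycast-gateway-mac 0000.2222.3333
!
interface Vlan100
 no shutdown
 ip address 10.0.100.1/24
 fabric forwarding mode anycast-gateway
```

With this in place, a host in either DC resolves 10.0.100.1 to a local switch instead of hairpinning across the DCI.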
Controlling unicast flooding – Unknown unicasts should not be sent across the DCI. Ideally there should be no silent hosts; if there are, create exceptions only for these.
Broadcast reduction at the DC edge – Broadcasts should not be freely flooded over the DCI. A proper DCI technology should implement some form of proxy ARP.
Policing of BUM traffic at the DC edge – Broadcast, unknown unicasts and multicasts should be policed at the edge to prevent a Layer 2 storm from taking out both DCs at the same time.
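As a sketch, storm control on the DCI-facing interface can rate-limit BUM traffic; the interface name and thresholds below are placeholders that should be sized against normal traffic levels, in Cisco IOS-style syntax:

```
interface TenGigabitEthernet1/0/1
 description DCI-facing link
 ! Suppress traffic above the given percentage of link bandwidth
 storm-control broadcast level 0.50
 storm-control multicast level 0.50
 storm-control unicast level 1.00
 ! Send an SNMP trap rather than err-disabling the DCI link
 storm-control action trap
```

Setting the action to trap instead of shutdown is a deliberate trade-off: err-disabling the interconnect during a storm would itself partition the DCs.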
Reducing the ping-pong effect – Depending on how load balancing and security are set up, traffic may travel over the DCI multiple times, adding latency each time it crosses the DCI.
Stretching clusters – Should load balancers and firewalls form a cluster across the DCI or not? When a cluster is formed over the DCI, split brain and other serious faults can occur, which may then bring down both DCs.
In my next blog, I'll examine different technologies such as dense wavelength division multiplexing (DWDM), Ethernet over MPLS, Virtual Private LAN Service, Cisco Overlay Transport Virtualization, and VXLAN and their impact on DCI design.