HP EVI vs. Cisco OTV: A Technical Look
November 09, 2012
HP announced two new technologies in the late summer, Multitenant Device Context (MDC) and Ethernet Virtual Interconnect (EVI), that target private clouds. Mike Fratto outlined the business and market positions, particularly in regard to Cisco's Overlay Transport Virtualization (OTV) and Virtual Device Context. However, the technology is also interesting because it's a little different from Cisco's approach. This post will drill into HP's EVI and contrast it with Cisco's OTV, as well as with VPLS.
HP EVI supports Layer 2 Data Center Interconnect (L2 DCI). L2 DCI technology is a broad term for technologies that deliver VLAN extension between data centers. Extending VLANs lets virtual machines move between data centers without changing a VM's IP address (with some restrictions). The use cases for such a capability include business continuity and disaster recovery. For a more extensive discussion of L2 DCI, please see the report The Long-Distance LAN.
HP EVI is a MAC-over-GRE-over-IP solution. Ethernet frames are encapsulated into GRE/IP at ingress to the switch. The GRE/IP packets are then routed over the WAN connection between the data centers.
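The header stack can be sketched in a few lines of Python. This is an illustrative model of MAC-over-GRE-over-IP framing, not HP's implementation; real EVI adds its own control fields, and the IP checksum is left zero here for brevity.

```python
import struct

def gre_encapsulate(inner_ethernet_frame: bytes, src_ip: str, dst_ip: str) -> bytes:
    """Sketch of the EVI-style header stack: Ethernet frame -> GRE -> IPv4."""
    # Minimal GRE header: no checksum/key/sequence flags, protocol type
    # 0x6558 (Transparent Ethernet Bridging, used when GRE carries a
    # full Ethernet frame).
    gre_header = struct.pack("!HH", 0x0000, 0x6558)
    payload = gre_header + inner_ethernet_frame

    # Minimal 20-byte IPv4 header; protocol 47 = GRE. Checksum omitted.
    total_len = 20 + len(payload)
    ip_header = struct.pack(
        "!BBHHHBBH4s4s",
        0x45, 0, total_len, 0, 0, 64, 47, 0,
        bytes(map(int, src_ip.split("."))),
        bytes(map(int, dst_ip.split("."))),
    )
    return ip_header + payload

frame = bytes(14) + b"payload"  # dummy Ethernet frame (14-byte header + data)
packet = gre_encapsulate(frame, "10.0.0.1", "10.0.0.2")
```

Because the outer header is plain IPv4, every router between the data centers forwards the packet like any other unicast traffic, oblivious to the Ethernet frame inside.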
EVI adds a software process that acts as a control plane, distributing the MAC addresses in each VLAN between the EVI-enabled switches. Thus, the switch in data center A updates the MAC address table in data center B and vice versa. By contrast, in traditional use, Ethernet MAC addresses are auto-discovered as frames are received by the switch.
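The difference between push-based learning and traditional flood-and-learn can be shown with a toy model. The class and method names below are hypothetical; HP's actual EVI control protocol is its own, and this only illustrates the idea of advertising locally learned MACs to peers instead of letting remote switches flood to discover them.

```python
class EviControlPlaneSketch:
    """Toy model of an EVI edge switch's MAC distribution (names invented)."""

    def __init__(self, site: str):
        self.site = site
        self.mac_table = {}   # MAC -> (vlan, owning site)
        self.peers = []       # other EVI edge switches

    def learn_local(self, mac: str, vlan: int) -> None:
        # Local learning works as on any switch...
        self.mac_table[mac] = (vlan, self.site)
        # ...but the address is then pushed to every remote site,
        # so remote switches never need to flood to find it.
        for peer in self.peers:
            peer.receive_advert(mac, vlan, self.site)

    def receive_advert(self, mac: str, vlan: int, site: str) -> None:
        self.mac_table[mac] = (vlan, site)

dc_a = EviControlPlaneSketch("DC-A")
dc_b = EviControlPlaneSketch("DC-B")
dc_a.peers.append(dc_b)
dc_b.peers.append(dc_a)

dc_a.learn_local("00:11:22:33:44:55", 10)
# DC-B now knows the MAC sits behind DC-A without any cross-WAN flooding
```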
Because GRE packets are ordinary IP packets, they can be routed over any WAN connection, making EVI widely applicable for customers. In a neat bit of synergy, the HP Intelligent Resilient Framework (IRF) chassis redundancy feature means that WAN connections are automatically load-balanced, because switches clustered in an IRF configuration act as a single switch (a Borg architecture, not an MLAG architecture). Therefore, multiple WAN connections between IRF clusters are automatically load-balanced by the control plane, either as LACP bundles or through ECMP IP routing, which is a potential improvement over Cisco's OTV L2 DCI solution.
However, note that load balancing of the end-to-end traffic flow is not straightforward because there are three connections to be considered: LAN-facing, to the servers using MLAG bundles; WAN-facing, where the WAN links go from data center edge switches to the service provider; and intra-WAN, or within the enterprise or service provider WAN. Establishing the load balancing capabilities of each will take some time.
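One wrinkle worth noting for the WAN-facing and intra-WAN segments: GRE carries no TCP/UDP ports, so a transit router's ECMP hash typically falls back to the outer IP source/destination pair. The sketch below models that behavior (the hash function choice is illustrative, not any vendor's actual algorithm).

```python
import zlib

def ecmp_link(src_ip: str, dst_ip: str, num_links: int) -> int:
    """Pick an equal-cost link for a GRE/IP packet.

    Illustrative model: with no L4 ports visible, the hash covers only
    the outer IP pair, so all traffic between one pair of tunnel
    endpoints rides a single link (order-preserving, but coarse).
    """
    key = f"{src_ip}->{dst_ip}".encode()
    return zlib.crc32(key) % num_links

# All flows between the same two EVI edges hash to the same link:
link = ecmp_link("10.0.0.1", "10.0.0.2", 4)
```

The flip side of this coarseness is that a two-site EVI deployment presents transit routers with only one or two outer IP pairs, which can leave some equal-cost paths underused.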
Because HP has chosen to use point-to-point GRE, the EVI edge switch must perform packet replication. Ethernet protocols such as ARP rely heavily on broadcasts to function. In a two-site network this isn't a problem, but with three sites or more, the EVI ingress switch must replicate a broadcast frame to every site. HP assures me that this can be performed at line rate, for any speed, for any number of data centers. That may be so, but creating full-mesh replication across n × (n-1) WAN circuits could result in poor bandwidth utilization in networks that have high volumes of Ethernet broadcasts.
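The arithmetic behind that concern is simple: head-end replication means one WAN copy per remote site for every broadcast, over a tunnel mesh that grows quadratically with site count.

```python
def broadcast_copies(num_sites: int) -> int:
    """WAN copies the ingress switch sends for one broadcast frame:
    one per remote site under head-end (full-mesh) replication."""
    return num_sites - 1

def mesh_tunnels(num_sites: int) -> int:
    """Directed point-to-point tunnels in a full mesh: n * (n - 1)."""
    return num_sites * (num_sites - 1)

# 2 sites: 1 copy per broadcast over 2 tunnels.
# 8 sites: 7 copies per broadcast over 56 tunnels, so every broadcast
# consumes 7x its own size in WAN bandwidth at the ingress edge.
```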
Next Page: Cisco's OTV