HP EVI vs. Cisco OTV: A Technical Look

HP announced two new technologies in the late summer, Multitenant Device Context (MDC) and Ethernet Virtual Interconnect (EVI), that target private clouds. Mike Fratto outlined their business and market positioning, particularly in regard to Cisco's Overlay Transport Virtualization (OTV) and Virtual Device Context. The technology is also interesting because HP's approach differs somewhat from Cisco's. This post drills into HP's EVI and contrasts it with Cisco's OTV, as well as with VPLS.

HP EVI supports Layer 2 Data Center Interconnect (L2 DCI), a broad term for technologies that extend VLANs between data centers. Extending VLANs lets virtual machines move between data centers without changing their IP addresses (with some restrictions). The use cases for such a capability include business continuity and disaster recovery. For a more extensive discussion of L2 DCI, please see the report The Long-Distance LAN.

HP EVI is a MAC-over-GRE-over-IP solution. Ethernet frames are encapsulated into GRE/IP at ingress to the switch. The GRE/IP packets are then routed over the WAN connection between the data centers.
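
To make the framing concrete, here is a rough sketch of the resulting header stack built with Scapy. This is illustrative only, not HP's implementation: the addresses are placeholders, and the GRE protocol value 0x6558 (Transparent Ethernet Bridging) is simply the standard value for carrying Ethernet frames inside GRE.

```python
# Sketch of a MAC-over-GRE-over-IP header stack (illustrative only;
# HP EVI's exact encapsulation details, such as GRE key usage, may differ).
from scapy.all import Ether, IP, GRE, Raw

# Original Ethernet frame from a VM in the extended VLAN
inner = Ether(src="00:11:22:33:44:55", dst="66:77:88:99:aa:bb") / \
        IP(src="10.1.1.10", dst="10.1.1.20") / Raw(b"payload")

# Added at the EVI edge switch: the outer IP header is routable across the
# WAN; GRE proto 0x6558 is the value for transparent Ethernet bridging.
outer = IP(src="192.0.2.1", dst="198.51.100.1") / GRE(proto=0x6558) / inner

outer.show()  # inspect the full header stack
```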

EVI adds a software process that acts as a control plane, distributing the MAC addresses in each VLAN between the EVI-enabled switches. Thus, the switch in data center A updates the MAC address table in data center B, and vice versa. By contrast, a traditional Ethernet switch learns MAC addresses in the data plane, as frames are received.
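
The sketch below illustrates the idea of control-plane MAC distribution in the abstract. It is a hypothetical model for contrast with data-plane learning; HP has its own control protocol, and the message format and data structures here are assumptions.

```python
# Hypothetical model of control-plane MAC distribution between edge
# switches; this is not HP's actual EVI protocol, just the general idea.

# MACs learned locally, per VLAN, at this data center's edge switch
local_macs = {
    100: {"00:11:22:33:44:55", "00:11:22:33:44:56"},  # VLAN 100
    200: {"00:aa:bb:cc:dd:ee"},                        # VLAN 200
}

# Remote table: VLAN -> MAC -> remote site that advertised it
remote_table = {}

def receive_advertisement(site, advertised):
    """Install MACs advertised by a remote site so that frames destined to
    them are encapsulated toward that site rather than flooded."""
    for vlan, macs in advertised.items():
        table = remote_table.setdefault(vlan, {})
        for mac in macs:
            table[mac] = site

# Data center B advertises its locally learned MACs to data center A
receive_advertisement("dc-b", {100: {"66:77:88:99:aa:bb"}})
print(remote_table[100]["66:77:88:99:aa:bb"])  # -> "dc-b"
```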

Because GRE packets are ordinary IP packets, they can be routed over any WAN connection, which makes EVI widely applicable for customers. In a neat bit of synergy, the HP Intelligent Resilient Framework (IRF) chassis redundancy feature means that WAN connections are automatically load-balanced: switches clustered in an IRF configuration act as a single switch (a Borg architecture, not an MLAG architecture). Therefore, multiple WAN connections between IRF clusters are automatically load-balanced by the control plane, either as LACP bundles or through ECMP IP routing, which is a potential improvement over Cisco's OTV L2 DCI solution.
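
Both LACP and ECMP typically spread traffic by hashing flow identifiers onto member links, so packets of a single flow stay in order while different flows use different links. The sketch below shows that general idea; the link names and hashed fields are assumptions, and real switches hash in hardware with their own field selections.

```python
# Sketch of flow-hash load balancing across parallel WAN links.
# Link names and hash inputs are illustrative assumptions; real
# LACP/ECMP implementations choose their own header fields in hardware.
import hashlib

wan_links = ["wan-link-1", "wan-link-2", "wan-link-3"]

def pick_link(src_ip, dst_ip, gre_key=0):
    """Map a flow deterministically to one WAN link so a flow's packets
    stay in order while different flows spread across all links."""
    flow = f"{src_ip}-{dst_ip}-{gre_key}".encode()
    digest = int(hashlib.md5(flow).hexdigest(), 16)
    return wan_links[digest % len(wan_links)]

print(pick_link("192.0.2.1", "198.51.100.1"))  # e.g. "wan-link-2"
```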

However, note that load balancing of the end-to-end traffic flow is not straightforward because there are three connections to be considered: LAN-facing, to the servers using MLAG bundles; WAN-facing, where the WAN links go from data center edge switches to the service provider; and intra-WAN, or within the enterprise or service provider WAN. Establishing the load balancing capabilities of each will take some time.

Chart: Comparing HP EVI with Cisco OTV and VPLS

Because HP has chosen to use point-to-point GRE, the EVI edge switch must perform packet replication. Ethernet protocols such as ARP rely heavily on broadcasts to function. In a two-site network this isn't a problem, but with three or more sites, the EVI ingress switch must replicate each broadcast frame to every site. HP assures me that this can be performed at line rate, at any speed, for any number of data centers. That may be so, but full-mesh head-end replication across n × (n-1) WAN circuits could result in poor bandwidth utilization in networks that carry high volumes of Ethernet broadcasts.
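
A back-of-the-envelope sketch of that scaling concern: with point-to-point tunnels, each ingress switch copies every broadcast to (n - 1) remote sites, so WAN load from broadcast traffic grows with the site count. The broadcast rate and frame size below are made-up numbers purely for illustration.

```python
# Back-of-the-envelope estimate of WAN bandwidth consumed by head-end
# replication of broadcasts over point-to-point tunnels.
# The broadcast rate and frame size are illustrative assumptions.

def replication_mbps(sites, bcast_pps, frame_bytes):
    """Each ingress switch sends (sites - 1) copies of every broadcast,
    so per-site WAN load grows linearly with the number of sites."""
    copies = sites - 1
    return copies * bcast_pps * frame_bytes * 8 / 1_000_000

for n in (2, 4, 8):
    print(f"{n} sites: {replication_mbps(n, bcast_pps=1000, frame_bytes=128):.1f} Mbps")
```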

Next Page: Cisco's OTV

