25, 50 And 100 Gigabit Ethernet In The Data Center

It's time to make sense of the plethora of Ethernet speeds in the world of enterprise and cloud data centers.

Keely Spillane

November 19, 2015


The rise of cloud computing and the scale-out of data centers are driving the latest Ethernet speed transitions. Cloud-based big data has increased operator workloads. To meet this demand, data centers have scaled out by adding new bandwidth capabilities in parallel to existing infrastructure. The anticipated rapid growth of 25 and 100 Gigabit Ethernet (GbE) deployments is testament to this trend.


Long-haul carriers are already shifting to 100 GbE fabrics to handle the increased data load, as are data center operators for the cores of their networks. But at the edge of the data center, most would agree that 100 GbE and even 40 GbE are overkill for server connections whose workloads need only an incremental improvement over 10 GbE. That's one reason 25 GbE and even 50 GbE have become popular options inside the data center, even though 40 GbE and 100 GbE already exist. We'll see shortly why 25 GbE makes more sense than 40 GbE for these applications.

The formation of several new Ethernet bandwidths has less to do with racing to the next top speed and more to do with moving the networking protocol into adjacent markets -- specifically, back into the data center. Let’s consider 25 GbE, 50 GbE and 100 GbE separately to find out why.

25 GbE

Targeted at servers in cloud data centers, the IEEE 802.3by draft standard for 25 GbE should be finalized by 2016 (see Figure 1). This is a relatively short time frame, thanks to the reuse of components from 10 GbE and 100 GbE.

Figure 1: Ethernet standards roadmap

Since 40 GbE and 100 GbE already exist, some operators wonder why 25 GbE is needed. The answer has to do with architecture and performance requirements. Systems built on the existing 100 GbE standard consist of four lanes, each with a bandwidth of 25 Gbps. This four-to-one ratio makes it easier for network operators to build out their data centers with servers linking to 25 GbE switches that then aggregate into 100 GbE uplinks.
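To put that four-to-one ratio into numbers, here is a minimal back-of-the-envelope sketch in Python. The server count and oversubscription target are illustrative assumptions, not figures from this article or any vendor.

```python
# Illustrative sketch: sizing 100 GbE uplinks for a rack of 25 GbE servers.
# The server count and oversubscription target are assumptions for the
# example, not figures from the article.
import math

SERVER_PORT_GBPS = 25   # 25 GbE down to each server
UPLINK_GBPS = 100       # 100 GbE up to the aggregation layer

def uplinks_needed(server_ports: int, oversubscription: float = 3.0) -> int:
    """Number of 100 GbE uplinks for a given downlink count and
    target oversubscription ratio (downlink bandwidth : uplink bandwidth)."""
    downlink_bw = server_ports * SERVER_PORT_GBPS
    required_uplink_bw = downlink_bw / oversubscription
    return math.ceil(required_uplink_bw / UPLINK_GBPS)

# Example: 48 servers at 25 GbE with 3:1 oversubscription -> 4 x 100 GbE uplinks
print(uplinks_needed(48))        # 4
print(uplinks_needed(48, 1.0))   # 12 (non-blocking)
```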

In a similar manner, 40 GbE systems are made up of four lanes of 10 Gbps. However, many data centers are growing beyond 10 GbE servers, notes Ethernet Alliance Chair John D'Ambrosia. That's why several chip vendors already offer 25 Gbps SerDes transceivers, which will make aggregation into 25 GbE, 50 GbE and 100 GbE much more convenient and cost-effective as volumes grow.
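The lane arithmetic behind these speeds is simple enough to show directly. The short sketch below is a plain illustration of how a per-lane SerDes rate and a lane count combine into each Ethernet speed.

```python
# How per-lane SerDes rates compose into Ethernet speeds:
# 10 Gbps lanes give 10/40 GbE; 25 Gbps lanes give 25/50/100 GbE.
LANE_CONFIGS = {
    "10GbE":  (1, 10),   # (lanes, Gbps per lane)
    "40GbE":  (4, 10),
    "25GbE":  (1, 25),
    "50GbE":  (2, 25),
    "100GbE": (4, 25),
}

for name, (lanes, per_lane) in LANE_CONFIGS.items():
    print(f"{name}: {lanes} lane(s) x {per_lane} Gbps = {lanes * per_lane} Gbps")
```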

50 GbE

While finalization of the IEEE standard for 50 GbE is still a ways off (2018 to 2020), several industry consortia anticipate products starting in 2016. Like 25 GbE, 50 GbE targets server links within the data center. According to analyst firm Dell'Oro, both servers and high-performance flash storage systems will need speeds faster than 25 GbE in the next few years.

To help accelerate product offerings for these faster Ethernet technologies, the 25 Gigabit Ethernet Consortium has made its 25 GbE and 50 GbE specifications royalty-free and open to all data center ecosystem vendors.

The reuse of 25 Gbps lane components from existing 100 GbE networks should make 50 GbE cost-effective to implement. For example, 25 GbE cabling has roughly the same cost structure as 10 GbE while delivering 2.5 times the performance. Similarly, 50 GbE costs about half as much as 40 GbE while delivering a 25% increase in performance.
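Those ratios are easy to sanity-check as cost per gigabit. The sketch below uses normalized, illustrative cost figures (assumed for the example, not published pricing) and applies the relationships stated above: 25 GbE at roughly 10 GbE cost with 2.5x the throughput, and 50 GbE at roughly half the cost of 40 GbE with 25% more throughput.

```python
# Relative cost-per-Gbps check using the ratios cited in the text.
# Cost units are normalized and illustrative (10 GbE cabling = 1.0);
# they are assumptions for the example, not published pricing.
links = {
    "10GbE": {"cost": 1.0, "gbps": 10},
    "25GbE": {"cost": 1.0, "gbps": 25},   # ~same cost as 10 GbE, 2.5x performance
    "40GbE": {"cost": 4.0, "gbps": 40},
    "50GbE": {"cost": 2.0, "gbps": 50},   # ~half the cost of 40 GbE, 1.25x performance
}

for name, link in links.items():
    print(f"{name}: {link['cost'] / link['gbps']:.3f} cost units per Gbps")
```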

100 GbE

The deployment of 100 GbE continues to grow in the long-haul portion of carrier networks, where links run anywhere from tens to hundreds of kilometers.

But according to a new industry alliance, the 100 GbE architecture would be a great candidate for yet another market. Co-led by Intel and Arista Networks, the 100 Gbps CLR4 Alliance believes that 100 GbE is well suited to connecting spans of 100 meters up to 2 kilometers within large hyperscale data centers.

Other companies are pursuing alternative 100 GbE deployments for data centers. Avago Technologies is part of the CWDM4 MSA Group, which targets a common specification for low-cost 100 Gbps optical interfaces that run up to 2 kilometers in data center applications. As the network infrastructure transitions to 100 GbE data speeds, data centers will need long-reach, high-density 100 Gbps embedded optical connectivity. The MSA uses coarse wavelength division multiplexing (CWDM) technology with four lanes of 25 Gbps over single-mode fiber (SMF). Similarly, the OpenOptics MSA, formed by Ranovus Inc. and Mellanox Technologies, targets data center development of 100 GbE at 2 km.
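For quick comparison, the sketch below collects the 100 GbE optics efforts mentioned here into a small lookup table. Only details stated above are included; the dictionary itself is just an illustrative way to organize them, not a formal specification.

```python
# 100 GbE optics efforts mentioned above, organized as a simple lookup.
# Reach, backer and lane details are limited to what the article states;
# this structure is only an illustrative summary.
OPTICS_100G = {
    "CLR4":       {"backers": ["Intel", "Arista Networks"],
                   "reach": "100 m to 2 km"},
    "CWDM4":      {"backers": ["Avago Technologies (MSA member)"],
                   "lanes": "4 x 25 Gbps CWDM over single-mode fiber",
                   "reach": "up to 2 km"},
    "OpenOptics": {"backers": ["Ranovus", "Mellanox Technologies"],
                   "reach": "2 km"},
}

for name, spec in OPTICS_100G.items():
    detail = spec.get("lanes", "details per the respective MSA")
    print(f"{name}: reach {spec['reach']} ({detail})")
```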

With several variants of 100 GbE optical interfaces available, it can be hard to know which one to follow or deploy. Data center operators will need to look at the fiber assets they already have, the spans they need to cross, and the economics of the optical modules before deciding which to deploy.
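As a rough illustration of that decision process, here is a hypothetical helper; the thresholds, option names and fiber categories are assumptions for the sketch, not guidance from the article.

```python
# Hypothetical decision helper: narrow 100 GbE optics choices by the fiber
# already installed and the span to cover. Thresholds and option names are
# illustrative assumptions, not recommendations from the article.
def candidate_optics(fiber, span_m):
    if fiber == "multimode":
        # Multimode 100 GbE optics (e.g., 100GBASE-SR4) are short-reach only.
        return ["short-reach multimode optics"] if span_m <= 100 else []
    if fiber == "single-mode":
        if span_m <= 2000:
            return ["CLR4", "CWDM4", "OpenOptics"]
        return ["long-reach single-mode optics (e.g., 100GBASE-LR4)"]
    return []

print(candidate_optics("single-mode", 500))   # ['CLR4', 'CWDM4', 'OpenOptics']
print(candidate_optics("multimode", 300))     # []
```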

In the past, speed improvements drove most network component development. Today, handling the enormous flow of data coming through the cloud requires companies to balance increased speed with technology reuse to arrive at an affordable solution.

About the Author(s)


Nicholas Ilyadis, Vice President and Chief Technical Officer, Infrastructure & Networking Group, Broadcom Corporation
