The 25 Gigabit Ethernet Rollout

Expect the new Ethernet standard to show up quickly in data centers and campus networks.

UNH IOL

October 17, 2016

4 Min Read

IP traffic is predicted to grow tenfold in less than 10 years. That alone speaks to why even incremental increases in Ethernet speeds are important. The ever-rising demand for data is driven by cloud services and can already be seen in early Internet of Things devices. This trend helped drive the development of the 25 Gigabit Ethernet data rate. Even though the technology is quite new, in the two years since the 25 Gigabit Ethernet Consortium was formed, server and switch vendors have already launched devices into the market.

The rollout of 25 GbE is expected to be fairly rapid because the specification was driven heavily by the needs of hyperscale and cloud-scale data centers. The quick rollout is also due to the fact that the 25 GbE specification leverages capabilities from the IEEE 802.3 40/100G project as well as from other data center solutions.

One example is the single-lane solution, SFP28, the small form factor module already developed for 32G Fibre Channel. This port type is used either in stand-alone mode or, in the data center environment, via breakout from a QSFP28 into four SFP28 ports. This lets both users and equipment manufacturers continue to use the same form factor on their ports while providing a simple upgrade path for systems that can take advantage of a full 100 Gbps link.
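
To make the breakout arithmetic concrete, here is a minimal sketch in Python. Only the 25 Gbps-per-lane rate comes from the specification; the function and variable names are purely illustrative.

```python
# Minimal sketch of the QSFP28 -> 4x SFP28 breakout arithmetic.
# Only the 25 Gbps-per-lane rate comes from the spec; names are illustrative.

LANE_RATE_GBPS = 25  # nominal rate of a single 25G electrical lane

def port_bandwidth(lanes: int, lane_rate: int = LANE_RATE_GBPS) -> int:
    """Aggregate nominal bandwidth of a port built from `lanes` lanes."""
    return lanes * lane_rate

# A QSFP28 cage can run as a single four-lane 100 GbE port...
print("QSFP28 as 100 GbE:", port_bandwidth(4), "Gbps")

# ...or be broken out into four independent single-lane 25 GbE (SFP28) ports.
breakout = [port_bandwidth(1) for _ in range(4)]
print("QSFP28 broken out:", breakout, "Gbps per port,", sum(breakout), "Gbps total")
```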

Cost is another major factor behind a quick 25 GbE rollout. While 25 GbE brings the bandwidth up 2.5x from 10 GbE, the cost per port is likely to increase by a factor of 1.5x or less -- a much stronger value proposition than the upgrade to a 40 GbE port. With the anticipated growth in IP traffic, this type of scalability is crucial. Another major cost benefit is that 25 GbE can utilize the existing optical plant -- depending on what was installed -- and increase the bandwidth without changing all of the physical infrastructure.
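
As a rough illustration of that value proposition, the sketch below uses a placeholder 10 GbE port price; only the 2.5x bandwidth and 1.5x cost multipliers come from the discussion above. Under those assumptions, the cost per Gbps actually drops by roughly 40 percent.

```python
# Back-of-the-envelope cost-per-Gbps comparison. The base port price is a
# placeholder; only the 2.5x bandwidth and ~1.5x cost multipliers are taken
# from the discussion above.

base_cost_10g = 100.0            # hypothetical 10 GbE port cost (arbitrary units)
cost_25g = base_cost_10g * 1.5   # assumed worst-case 1.5x cost increase

print("10 GbE cost per Gbps:", base_cost_10g / 10)   # 10.0 units/Gbps
print("25 GbE cost per Gbps:", cost_25g / 25)        # 6.0 units/Gbps
```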

On the practical side, many current applications can easily exceed the 10 Gbps rate available on a 10 GbE interface, but may not need the full bandwidth of a 40 or 100 GbE device. For servers, the bandwidth on the PCIe bus is certainly more in alignment with 25 GbE than with 10 GbE. The 25 GbE interface offers a happy medium, letting early adopters take advantage of a speed increase with minimal pain.
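
One way to see the PCIe alignment is to work out the usable bandwidth of a common x8 PCIe 3.0 NIC slot. The figures below are the published PCIe 3.0 parameters (8 GT/s per lane, 128b/130b encoding); the dual-port NIC configurations are assumptions chosen for illustration.

```python
# Sketch of the PCIe 3.0 bandwidth arithmetic behind the alignment argument.

GT_PER_LANE = 8.0        # PCIe 3.0 transfer rate per lane, GT/s
ENCODING = 128 / 130     # 128b/130b line-coding efficiency

def pcie3_gbps(lanes: int) -> float:
    """Usable unidirectional bandwidth of a PCIe 3.0 link, in Gbps."""
    return lanes * GT_PER_LANE * ENCODING

slot = pcie3_gbps(8)     # a typical x8 NIC slot: ~63 Gbps
print(f"PCIe 3.0 x8 slot: {slot:.1f} Gbps")
print(f"  dual-port 10 GbE NIC uses {20 / slot:.0%} of the slot")
print(f"  dual-port 25 GbE NIC uses {50 / slot:.0%} of the slot")
```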

In short, the rollout of 25 GbE is really the outcome of aligning the physical layers of Ethernet with developments in switch fabrics, which tend to be developed with a per-lane or per-pin bandwidth mindset. By increasing the data rate from 10 Gbps to 25 Gbps and allowing single-lane variants of 25 GbE, the usable bandwidth of the switch fabric increases. While 40 GbE is constrained by its 10 Gbps lanes and 100 GbE by its four-lane signaling requirement, a single-lane 25 Gbps MAC offers maximum flexibility to the data center and campus, from both a speed and a port-type perspective.
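
A hypothetical lane budget makes the fabric argument concrete. The sketch below assumes a notional switch ASIC with 128 SerDes lanes; only the per-lane rates and lane groupings reflect the standard port types discussed here.

```python
# How a fixed SerDes lane budget translates into ports and total bandwidth.
# The 128-lane ASIC is hypothetical; lane groupings match the standard port types.

FABRIC_LANES = 128

port_types = {
    "10 GbE  (1 x 10G lane)":   (1, 10),
    "40 GbE  (4 x 10G lanes)":  (4, 10),
    "25 GbE  (1 x 25G lane)":   (1, 25),
    "100 GbE (4 x 25G lanes)":  (4, 25),
}

for name, (lanes, lane_rate) in port_types.items():
    ports = FABRIC_LANES // lanes
    total = ports * lanes * lane_rate
    print(f"{name:26s} -> {ports:3d} ports, {total} Gbps total fabric bandwidth")
```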

For campus deployments, the 25 GbE single-mode fiber solution -- which is still in the process of being standardized -- is more applicable. It can traverse much longer distances than other flavors of 25 GbE, with reaches of 10 and 40 km.

There are two major use cases for this. The first is as an upgrade to existing 10 GbE campus links, which brings all of the advantages listed above. The second applies where a campus already uses a 100 GbE backbone for distribution: 25 GbE can then serve the smaller distribution branches, a particularly helpful and needed advancement. Both use cases matter because complete “forklift” upgrades are rare, and this option expands the toolbox of speed options for campus deployments.

The move to 25 GbE also dovetails with the future rollout of the 400 Gigabit Ethernet standard; like the 100 GbE specification, 400 GbE will initially rely on multiple lanes of 25 Gbps to achieve that rate. Overall, 25 GbE is a powerful example of the evolving role of Ethernet in the campus and data center ecosystem.

Jeff Lapak is the Enterprise Industry and Operations Strategic Manager at the University of New Hampshire InterOperability Laboratory (UNH-IOL). In this role, he manages and oversees all Ethernet-related testing and consortia, providing administration and coordination of test events. He also works with several industry forums and standards bodies to further their standards development. Jeff was recently appointed Associate Director of the UNH-IOL.

Currently, Jeff is closely involved with the IEEE 802.3 Working Group, IEEE P1904.1, InfiniBand, and the Ethernet Alliance. He has held various roles as both chair and editor for these standards bodies, helping them develop and maintain their standards, set up testing events, and support industry events. Jeff holds a BS in Electrical Engineering and an MBA from the University of New Hampshire, Durham.

About the Author

UNH-IOL

The University of New Hampshire InterOperability Laboratory (UNH-IOL) tests networking and data communications products. The university established the laboratory in 1988 with the dual mission of providing a neutral environment to foster multi-vendor interoperability and conformance to data communications networking standards while educating students for future employment in the industry. The laboratory has since grown into one of the industry's premier independent proving grounds for new technologies.
