

25 GbE Coming To A Data Center Near You

Pity the poor data center architect. Given the trend toward pervasive cloud computing and the expanding adoption of technologies like virtualization, SDN, and network convergence, it's easy to see why data center architects can feel their hair turning grey.

Global data center traffic will reach 7.7 zettabytes annually by 2017, according to Cisco's Global Cloud Index. To put that into perspective, it's roughly 107 trillion hours of music streaming, 19 trillion hours of web conferencing, or 8 trillion hours of online HD video streaming. Depending on the prediction, there will be 26 billion to 41 billion connected Internet of Things devices by 2020.
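Those streaming-hour comparisons are easy to sanity-check. Here is a minimal sketch, assuming decimal zettabytes and illustrative bitrates of roughly 160 kbps for music, 900 kbps for web conferencing, and 2.1 Mbps for HD video (the bitrates are assumptions for illustration, not figures taken from the Cisco index):

```python
# Back-of-the-envelope check of the data center traffic comparisons.
# Assumed bitrates (illustrative, not from the report):
#   music streaming   ~160 kbps
#   web conferencing  ~900 kbps
#   HD video          ~2.1 Mbps
ZETTABYTE_BYTES = 1e21  # decimal zettabyte, in bytes

def hours_of_streaming(traffic_bytes, bitrate_bps):
    """Hours of continuous streaming represented by a traffic volume."""
    return traffic_bytes * 8 / (bitrate_bps * 3600)

traffic = 7.7 * ZETTABYTE_BYTES
for label, bps in [("music streaming", 160e3),
                   ("web conferencing", 900e3),
                   ("HD video", 2.1e6)]:
    print(f"{label}: ~{hours_of_streaming(traffic, bps) / 1e12:.0f} trillion hours")
```

With these assumed bitrates, the arithmetic lands on the article's 107, 19, and 8 trillion-hour figures.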

Increased application complexity and sophistication, coupled with rapidly rising consumer resource demand, means data centers are facing an escalating array of challenges, the solutions to which are only now beginning to appear. Controlling heating and cooling expenses, managing spiking bandwidth demand, and ensuring uptime and scalability -- all while reducing total cost of ownership -- add up to a multifaceted challenge that must be addressed.

So what’s the poor data center architect to do? And how can the industry as a whole deal with this newly altered reality?

Enter single-lane 25 Gbps Ethernet (25 GbE). Single-lane 25 GbE interconnects are one way to enable hyperscale data centers and the cloud services market to further optimize cost while maintaining the hearty, stable performance demanded by enterprises and individual users alike.

25 GbE can deliver robust performance while reducing capex and opex and slimming network architectures. And Ethernet's open, common specifications, unparalleled flexibility, and proven reliability make single-lane 25 GbE the right choice for today's growing population of Web-scale data centers.

Why 25 GbE? And why now?
Because of the sheer size of their behemoth facilities, hyperscale data centers must maintain a heightened sensitivity to cost on all fronts: capex, opex, and long-term investment.

Web-scale data centers and cloud-based services require servers that have capabilities beyond 10 GbE. And with "cloud" having become synonymous with "cheap" for some, instituting stronger cost containment, particularly for capex and opex, is requisite. Building a data center that has the ability to scale up to meet rapidly advancing performance requirements, bandwidth demand, and reliability expectations while still reducing overall total cost of ownership is a tall, but not impossible, order. 

25 GbE is enabled by a single lane of 25 Gbps signaling technology, which was defined as part of 100 GbE and is in production today. Single-lane has always been the cost-optimized approach, and single-lane 25 GbE will allow data center architects and operators to maximize the efficiency of their interconnects between servers and access switches while minimizing cost. Having just one wire to contend with (rather than four) streamlines architectures and reduces power consumption, allowing for true cost-per-bit-per-second optimization and generating tangible reductions in opex.
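The lane arithmetic behind the single-lane argument can be sketched as follows, using the common electrical configurations (1x10G for 10 GbE, 4x10G for 40 GbE, 1x25G for 25 GbE, 4x25G for 100 GbE) and treating per-lane cost as roughly constant -- a simplifying assumption for illustration, not real pricing data:

```python
# Illustrative lane arithmetic for common Ethernet electrical configurations.
# Treating per-lane cost as roughly constant is a simplifying assumption,
# not real pricing data.
configs = {
    "10 GbE":  (1, 10),   # (lanes, Gbps per lane)
    "40 GbE":  (4, 10),
    "25 GbE":  (1, 25),
    "100 GbE": (4, 25),
}

for name, (lanes, gbps_per_lane) in configs.items():
    total_gbps = lanes * gbps_per_lane
    # If cost scales with lane count, cost per Gbps tracks lanes / total_gbps.
    print(f"{name}: {total_gbps} Gbps over {lanes} lane(s), "
          f"relative cost per Gbps = {lanes / total_gbps:.3f}")
```

On this simplified model, a single 25 Gbps lane delivers 2.5 times the bandwidth of a single 10 Gbps lane, which is the cost-per-bit-per-second argument in miniature.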

When viewed through the cost-efficiency lens, the path forward is clear: single-lane 25 GbE provides optimum cost-performance server interconnects to first-level networking elements such as top-of-rack (ToR) and leaf/spine switches.

Opening the door to 100 GbE
Beyond its impact on the bottom line, 25 GbE provides a further benefit by serving as a stepping stone to 100 GbE.

The Ethernet application horizon is becoming more diffuse, with applications moving at varying speeds and cadences, so Ethernet use models are all over the map. In the data center space, 10 GbE and 40 GbE have both been widely embraced, but smart data center architects and operators are now looking ahead to the 100 GbE era.

Before that transition becomes feasible, however, the issue of cost must once again be addressed. There’s a great deal of downward price pressure. Calls to reduce or remove much of the expense associated with migrating to 100 GbE, and to map out a navigable, cost-effective path to its implementation, are getting louder. Luckily, the maturity of 25 GbE technologies is such that they can effectively address this cost-sensitivity. Because the shared underlying technologies will drive costs down, the adoption of 25 GbE will in turn propel the adoption of 100 GbE.

One reason the Ethernet ecosystem has been very successful is the careful use of open, common specifications that help to ensure interoperability and the security of development investments. The IEEE 802.3 Working Group is moving as quickly as possible to standardize single-lane 25 GbE, a process made easier by underlying foundational work completed for previous standards. When its standardization is complete, 25 Gbps technologies developed and productized for 100 GbE can be put to use immediately, opening the door to a new age of high-speed, high-performance web-scale computing.

The way forward
With cloud demand growing wildly, the size of the data center has grown as well, driving new performance and cost paradigms. But for that growth to continue, costs must be driven as low as possible. The way to make that happen is through standardized single-lane 25 GbE. It's yet another step in Ethernet's ongoing community- and market-driven evolution to meet service industry requirements.

Ethernet is a proven, mature technology with enough muscle and flexibility to handle whatever comes next. With only minimal standardization work left to do and by building on previous projects and technologies, 25 GbE is on a fast track to reality. And by embracing single-lane 25 GbE, data center architects can deliver the reliable, cost-effective performance that hyperscale data centers demand, future-proof their networks, and maybe even save themselves a few grey hairs along the way.