If you're beginning to feel the bandwidth pinch while operating Gigabit Ethernet and 10 GbE within your data center, a logical step is to consider 25-, 40-, 50- and even 100-Gbps links. But while all four are excellent new technologies that can significantly boost throughput capacity, there are a few things to keep in mind. Let's look at three key considerations before you make the jump to new Ethernet standards.
As individual port speeds climb beyond the 10-gig threshold, we must verify that the physical fiber-optic cabling plant is compatible with the latest optical transport technologies.
The latest Ethernet-over-fiber standards require specific grades of fiber cable to operate. In most data centers, multi-mode fiber (MMF) is the dominant connectivity option. But not all MMF is the same. The most common fiber classification standard is ISO/IEC 11801, which defines four distinct classes of multi-mode cabling -- OM1 through OM4. Legacy data centers that run fiber networks up to Gigabit Ethernet may be cabled with fiber classified as OM1 or OM2. The problem is, the new standardized Ethernet-over-MMF variants require "laser-optimized" OM3 or OM4 cabling. It's also quite possible to have a mixture of OM1, OM2 and OM3 throughout a data center.
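To make the compatibility question concrete, here is a minimal sketch of a cable-plant audit check. The reach figures are approximate nominal values from the IEEE 802.3 multi-mode specifications; actual supported distances depend on the transceiver and the quality of the installed cable, so treat this as an illustration rather than an authoritative table.

```python
# Approximate nominal maximum reach (meters) for common multi-mode
# Ethernet variants, per IEEE 802.3. Values are illustrative; check
# your transceiver datasheets for the figures that apply to you.
REACH_M = {
    "10GBASE-SR":   {"OM1": 33, "OM2": 82, "OM3": 300, "OM4": 400},
    "40GBASE-SR4":  {"OM3": 100, "OM4": 150},   # no OM1/OM2 support
    "100GBASE-SR4": {"OM3": 70,  "OM4": 100},   # no OM1/OM2 support
}

def link_supported(standard: str, fiber_class: str, run_length_m: float) -> bool:
    """Return True if this standard runs over this fiber class at this length."""
    reach = REACH_M.get(standard, {}).get(fiber_class)
    return reach is not None and run_length_m <= reach
```

For example, a 90-meter OM3 run can carry 40GBASE-SR4, but the same run on OM1 cannot -- which is exactly why an audit has to record both the class and the length of every installed segment.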
Your data center's physical fiber plant must be thoroughly audited to see whether it can handle the latest in optical transport. If it can't, be ready to pay a hefty price to upgrade your fiber interconnects. Also keep in mind that certain 10 GbE fiber transceivers will run over OM1 and OM2 cabling, so just because you're running 10 GbE in your data center today doesn't mean your fiber is compatible with the newer Ethernet standards.
If a cabling upgrade is too much for your organization to take on, consider alternatives for squeezing more bandwidth out of your existing infrastructure. One potential workaround is to combine multiple 1- or 10-Gbps links using link aggregation. If you have the port capacity and extra fiber cabling to pull it off, link aggregation is a quick and relatively low-cost way to bundle up to eight 1- or 10-Gbps links into a single logical connection with 8 or 80 Gbps of aggregate capacity. One caveat: because traffic is typically distributed across member links per flow, any individual flow is still limited to the speed of a single member link.
While link aggregation may not be suitable for every situation, it’s a great way to delay the implementation of emerging Ethernet technologies until prices drop.
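That per-flow limitation follows from how switches distribute traffic across an aggregation group: header fields are hashed so that all packets of one flow land on the same member link, preserving packet order. Here is a minimal sketch of that idea; the function name and the choice of CRC32 as the hash are illustrative, not a real switch API (real devices offer configurable hash inputs).

```python
import zlib

def pick_member(src_ip: str, dst_ip: str, src_port: int, dst_port: int,
                num_links: int) -> int:
    """Hash a flow's 4-tuple to choose one member link (0..num_links-1).

    Every packet of the same flow yields the same hash, so it always
    takes the same link -- ordering is preserved, but one flow can
    never use more than one member link's worth of bandwidth.
    """
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}".encode()
    return zlib.crc32(key) % num_links
```

This is why an 8 x 10-Gbps bundle gives 80 Gbps of aggregate capacity across many flows, yet a single large file transfer still tops out at 10 Gbps.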
Beyond the data center
By moving to higher throughput links in the data center, you could inadvertently create bottlenecks in other parts of the network. So it's important to look at your network transport from a big-picture perspective, reviewing throughput needs not only within the data center, but also outside it. While it's true that a significant share of data center bandwidth demand stems from the increasingly distributed nature of applications and data storage, we shouldn't forget about connectivity to LAN, WAN and internet-based users who access our applications. This north-south traffic is not only increasing, it's also moving.
In many cases, the growth of a mobile workforce is shifting north-south data flows off the local LAN and out to the WAN and internet. End users are accessing data center applications and data via remote WAN locations or through the internet at a rate never before seen. Therefore, as you calculate your move to larger Ethernet pipes inside the data center, the same investigative work and calculations should be made at other points of the network. Otherwise, you risk creating bandwidth bottlenecks elsewhere.
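A quick back-of-envelope check helps make that point: compare expected north-south demand against the capacity of the edge it must cross. The numbers below are hypothetical placeholders, not recommendations; the point is that a fast data center fabric does nothing for users stuck behind an undersized WAN or internet edge.

```python
def oversubscription(demand_gbps: float, capacity_gbps: float) -> float:
    """Ratio of offered north-south load to available edge capacity.

    A ratio above 1.0 means the edge, not the data center fabric,
    is the bottleneck. Some oversubscription is normal; how much is
    acceptable depends on traffic patterns.
    """
    return demand_gbps / capacity_gbps

# Hypothetical example: 500 remote users averaging 20 Mbps each,
# funneled through a 2-Gbps internet edge.
ratio = oversubscription(500 * 0.020, 2.0)  # 10 Gbps demand vs. 2 Gbps edge
```

In this sketch the edge is oversubscribed 5:1, so upgrading the data center core from 10 GbE to 40 GbE would do nothing for those remote users until the edge link is also resized.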
It’s safe to say that for most, the jump to Ethernet links above the 10-Gbps mark is going to take some serious planning. In many cases, a rolling migration is going to be the ideal option. But no matter how you decide to make your move, understand that it’s something that has to happen eventually, so you may as well start planning now.