One of the big surprises in IT over the last few years has been the slow uptake of 10 Gigabit Ethernet (10GbE). With products shipping in 2006, we would have expected it to become the industry standard by 2009. But the recession that began at the end of 2008 weakened the driving force for replacement considerably, taking the wind out of the launch's sails (and sales).
Four years later, with the recession diminishing, we are starting to see signs of a turnaround: nearly 5 million 10GbE ports shipped in the third quarter of 2013, 40% growth year over year. Another indication that 10GbE has turned a corner is that Supermicro is adding 10GbE to motherboards. This can be seen as a bellwether, since it reflects demand from a broad spectrum of users; putting 10GbE on the motherboard is a sure sign of interest.
Given that networking is the third leg of a performance-boost cycle, alongside CPU/DRAM and SSDs, the technology to uplift the datacenter is clearly available. The billion-dollar question is whether customers will buy it, and in what quantities. That's a function of need, so we should look at the demands on the server farm and how they are growing.
Big data is the fastest-growing segment of IT, and the servers it uses are data hogs. There's no question that 10GbE is the entry-level solution for them: filling the pipe with enough data to keep up with GPUs and PCIe SSDs is a challenge.
High-performance computing (HPC) falls in the same category, with InfiniBand as a spoiler in the short term, though work is afoot to make Ethernet look more like IB by adding Remote Direct Memory Access (RDMA) capability to it. Chelsio and Mellanox are pushing interesting, and competitive, alternatives.
But what of the run-of-the-mill server farm doing routine work? We have pent-up demand from a slowdown in replacement due to the recession. The question of 1 Gigabit Ethernet (1GbE) versus 10GbE is complicated by the target use case for the servers.
The old-fashioned server farm, delivering Web pages and the like via a LAMP stack, is giving way to either cloud servers or microserver boxes. In both cases, there's enough performance in the box to justify 10GbE when replacing the server. In fact, the opportunity to consolidate many older servers onto a few new ones justifies the additional price.
There's also the issue of future-proofing. With a lot of network gear hitting six years of use and coming up for replacement, it makes good sense to install infrastructure that will support the next six to eight years.
[Read what datacenter operators will focus on this year in "3 Datacenter Trends To Watch In 2014."]
While we can expect a strong crossover of new installations from 1GbE to 10GbE in 2014, there are still some sticking points. SSDs have produced servers capable of doing a lot more work; at the same time, they've moved the bottleneck from storage to other areas of the server.
In fact, most servers with SSDs will have network bottlenecks if they aren't upgraded to 10GbE. This alone adds pressure for more network bandwidth, but 10GbE NICs generally aren't cheap: several hundred dollars per card, compared with an OEM price uplift of $25 for a NIC-on-motherboard solution. The result is that, by and large, we are waiting for new servers to roll into place.
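To see the scale of that mismatch, here's a back-of-envelope sketch. The SSD throughput figure is an assumption for illustration (a typical SATA drive of the era); real drives and workloads vary:

```python
# Back-of-envelope check: can the network keep up with SSD storage?
# Throughput figures are illustrative assumptions, not measurements.

GBE_1 = 1e9 / 8      # 1GbE line rate in bytes/sec (~125 MB/s)
GBE_10 = 10e9 / 8    # 10GbE line rate in bytes/sec (~1.25 GB/s)
SATA_SSD = 500e6     # assumed SATA SSD sequential read, ~500 MB/s

def is_network_bottleneck(storage_bps: float, nic_bps: float) -> bool:
    """True when storage can deliver data faster than the NIC can ship it."""
    return storage_bps > nic_bps

print(is_network_bottleneck(SATA_SSD, GBE_1))   # True: one SSD saturates 1GbE
print(is_network_bottleneck(SATA_SSD, GBE_10))  # False: 10GbE still has headroom
```

A single SATA SSD outruns a 1GbE link by roughly 4x, which is why the bottleneck moves to the network the moment flash goes into the box.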
Switch ports are also still expensive compared with Gigabit Ethernet. The good news is that prices are dropping fast, and there is a point at which the extra money will be recovered by using fewer servers to do the same work. Ports currently run around $125 each on small (8- or 12-port) switches and are already below $100 on 48-port switches; they should drop to $80 or even lower in 2014 as competition heats up.
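As a sanity check on that break-even argument, here's a minimal sketch. All the dollar figures and the consolidation ratio are assumptions invented for the example, not quoted prices:

```python
# Illustrative break-even sketch: does the 10GbE port premium pay for itself
# through server consolidation? All dollar figures below are assumptions.

def consolidation_savings(port_premium: float, ports: int,
                          servers_avoided: int, cost_per_server: float) -> float:
    """Net savings: server purchases avoided minus extra spend on 10GbE ports."""
    return servers_avoided * cost_per_server - ports * port_premium

# Assume 48 ports at an $80 premium each over 1GbE, and consolidation
# that avoids buying two hypothetical $5,000 servers.
print(consolidation_savings(80, 48, 2, 5000))  # 6160: the premium pays off
```

Under those assumed numbers, even a modest consolidation ratio covers the port premium with room to spare; the calculation obviously has to be rerun with real local prices.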
10GbE is doing fine on the inter-switch and storage side, but it is already being supplanted at the top end by 40GbE, built from ganged quads of 10GbE ports. Based on 10GbE technology, this adds striping of blocks across the four ports, plus single-fiber links, to the mix. Performance pressure from the now-popular all-flash arrays is driving that transition, and it puts some strategic-planning pressure on the IT shop.

Jim O'Reilly was Vice President of Engineering at Germane Systems, where he created ruggedized servers and storage for the US submarine fleet. He has also held senior management positions at SGI/Rackable and Verari, was CEO at startups Scalant and CDS, and headed operations at PC ...