06/29/2015 8:00 AM

Why Hardware Still Counts In Networking

Software is revolutionizing network architecture, but hardware remains a critical part of the equation for data center scalability.

As a host of the networking podcast Packet Pushers, I receive lots of interesting e-mail. Listeners tell us how we’re doing, share their knowledge, and voice opinions. One opinion that’s come up lately is what I’d describe as an aversion to hardware. In the minds of some, software is king; code is a networking cure-all that will take us into the future.

Chris Wahl, a fellow writer and engineer, told me he's also heard this anti-hardware sentiment. “Did the bad ASIC hurt you?” he joked, as we tried to understand the software bias.

There is no doubt that much of the revolution in network architecture is coming from software. Great code is bringing useful ideas to life that are moving networking ahead. However, hardware still plays a critical role in networking. I'll explain why, but first, let's review the pro-software arguments. Here's how I understand them; feel free to counter my thinking in the comments section. 

  • x86 is fast enough. General-purpose, x86-based CPUs are adequately fast for networking: fast enough to fill 10 Gbps or more these days, assuming efficient code.
  • APIs are catalysts. One software component can talk to another software component via an application programming interface (API). Therefore, APIs are the catalysts to a bright, software-defined tomorrow. Developers can use APIs to stitch modules together into a richly capable software fabric that delivers networking features never before realized (see the sketch after this list).
  • Love for code is the harbinger of change. We’re seeing an increasing number of open source projects such as OpenDaylight and ONOS, as well as startup companies in the networking industry basing their value proposition on software. It’s almost unfashionable to get excited about new metal.
  • The SDN paradigm is changing how networking is done. With SDN, we see new ways of thinking about networking based on a controller that arbitrates between smart software and underlying hardware (among other things). A great deal of effort is going into abstraction layers that attempt to make the underlying hardware uninteresting. The ultimate expression of this could be white-box switching, where the silicon is taken for granted, and the software programming the white-box infrastructure brings the unique value.
  • SD-WAN is software’s poster child. As I continue to research the nascent SD-WAN market, I see it as the poster child for networking software. Powerful policy software rethinks traffic forwarding. That policy is distributed to software forwarders running on COTS x86 hardware or virtualized to run on a hypervisor. No custom ASICs required.
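
To make the API argument concrete, here is a minimal sketch in Python of one software component programming another over REST. The controller URL, credentials, resource path, and flow schema are hypothetical placeholders, not any particular product's API:

    # Sketch: pushing a flow rule to a hypothetical SDN controller's REST API.
    # The endpoint, credentials, and payload schema are illustrative assumptions.
    import requests

    CONTROLLER = "https://controller.example.com:8443"  # hypothetical endpoint

    flow_rule = {
        "name": "block-telnet",
        "priority": 100,
        "match": {"ip-proto": "tcp", "tcp-dst-port": 23},
        "action": "drop",
    }

    resp = requests.post(
        f"{CONTROLLER}/api/flows",  # hypothetical resource path
        json=flow_rule,
        auth=("admin", "admin"),
        timeout=10,
    )
    resp.raise_for_status()  # fail loudly if the controller rejects the rule
    print("Flow installed:", resp.json())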

I have no arguments with any of these points, as they stand. Still, software needs to run on hardware, and x86 presents a scaling limitation. There’s a reason data center switches aren’t based on an x86 architecture. Custom ASICs are required to do packet forwarding operations at line rate across high-density Ethernet switches. This also explains why SD-WAN is doing so well as a pure software-on-commodity-hardware play: SD-WAN neither requires especially high throughput nor operates at high port density.
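
The scale gap is easy to quantify with a back-of-envelope sketch. The Ethernet framing numbers below are standard; the 48-port 10 GbE top-of-rack switch is an illustrative assumption:

    # Worst-case packet rate at line rate: 64-byte frames, each of which
    # also carries an 8-byte preamble and a 12-byte inter-frame gap on the wire.
    MIN_FRAME = 64          # minimum Ethernet frame size, bytes
    WIRE_OVERHEAD = 8 + 12  # preamble + inter-frame gap, bytes

    def max_pps(link_bps, frame_bytes=MIN_FRAME):
        return link_bps / ((frame_bytes + WIRE_OVERHEAD) * 8)

    per_port = max_pps(10e9)    # one 10 GbE port
    aggregate = 48 * per_port   # hypothetical 48-port ToR switch
    print(f"{per_port / 1e6:.2f} Mpps per port")   # ~14.88 Mpps
    print(f"{aggregate / 1e6:.0f} Mpps aggregate") # ~714 Mpps

Roughly 714 million forwarding decisions per second is routine for a switching ASIC, and far beyond what a general-purpose CPU can sustain while doing anything else.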

The industry has not lost sight of hardware’s ongoing critical role in networking:

  • OpenFlow development has slowed, in part, to allow silicon manufacturers and standards writers to achieve parity. As OpenFlow currently stands, different operations result in different levels of performance, all depending on the silicon the operation is run against. Expect the next generations of chips and OpenFlow standards to present far fewer performance compromises than are experienced today.
  • Hardware ASICs are dedicated to a purpose, and do not share their resources with other processes. In the context of soft switching, a hypervisor vSwitch must share x86 resources with every other process running on the box. More network throughput means less CPU available for the rest of the system (see the cycle-budget sketch after this list). Solutions that offload network processing to hardware, like Netronome’s, become key for scaling.
  • Service providers are looking to the silicon industry to facilitate NFV at massive scale. I've had briefings recently with both Freescale and ARM executives, who have discussed L4-7 acceleration in silicon, targeting service provider needs as they retool.
  • Recent M&A activity in the semiconductor industry highlights an active market with high valuations consolidating into silicon behemoths with full product lines -- one stop shopping for their customers. This includes the mighty Broadcom, a name that has become synonymous with merchant silicon in the data center switching space.
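
To put the vSwitch resource-sharing point in perspective, here is a rough cycle-budget sketch; the 3 GHz clock and the target packet rates are assumed figures:

    # Rough per-core cycle budget for software forwarding.
    # The 3 GHz clock and target rates are illustrative assumptions.
    CLOCK_HZ = 3.0e9  # assumed 3 GHz core

    def cycles_per_packet(pps):
        return CLOCK_HZ / pps

    for label, pps in [("1 Mpps", 1e6), ("10 GbE line rate", 14.88e6)]:
        print(f"{label}: ~{cycles_per_packet(pps):.0f} cycles per packet")
    # ~3000 cycles per packet at 1 Mpps, but only ~200 at 10 GbE line
    # rate -- and a hypervisor vSwitch must find those cycles while
    # sharing the core with every VM on the host.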

So, yes, software is changing the face of networking. There is no question about that. But in order to work at the scale that the industry requires, hardware still matters. I do not see this symbiosis changing anytime soon.


Comments

Yes - Hardware still counts!

Ethan,

Coming from a "software only" company (CPLANE NETWORKS), you might think it a little strange that I would be in total agreement with your position. Yes, hardware is important. As you mentioned, for some applications x86 is sufficient, and for others only ASICs will be able to provide the level of performance required. And I don't see that changing any time soon. x86 performance will continue to improve, but so will the rest of the semiconductor world. There will always be that performance differential that counts in some applications.

What we as an industry can hope for is that it becomes easier to take advantage of specialized hardware features. If we can figure out how to more easily exploit the power and features of specialized ASICs, then the industry as a whole will gain. That's where the software industry can add value, especially in the L4-L7 services that you mentioned in your Freescale and ARM references. If we have the ability to program (exploit) those features externally as opposed to making them embedded, then hopefully the prices of specialized hardware will come more in line (there will always be a delta) with white boxes. And, if we're smart enough to intelligently abstract services, then common application code can take advantage of the underlying infrastructure simply through policy/configuration - not hardware-specific logic. It may take a while, but I think we'll get there.

Robert Keahey

Agree - and perhaps it's use-case specific

I agree with your balanced view. I have been a fan of software as well, and perhaps it has received a disproportionate amount of attention. With regard to x86 vs. custom ASICs, as you said, it's an issue of use cases. SD-WAN is gaining traction since it's not that demanding on packet processing. However, I also think that it provides a clear ROI, so that's why it's getting attention.

Re: Agree - and perhaps it's use-case specific

It is refreshing to hear that hardware still counts. I have not heard any networking engineers in my encounters speak of SDN solutions.

Throughput rates are the major concern, so how does SDN address this cost-effectively?

Re: Agree - and perhaps it's use-case specific

All else being equal, it's difficult for a pure software system (like an overlay) to beat hardware performance. But if you include systems that combine hardware network processors with an SDN-style controller, then you can get high throughput. Cisco's ACI is an example of that. Maybe that's not in line with your pure-software definition of SDN, but it shares a lot of the characteristics of smart software, APIs, etc. that were discussed.

Another way to look at this is with something like Intel's DPDK, where you have a close coupling of the data plane with the software. They claim that a single Xeon processor can handle packet processing at 80 Mpps. But as Ethan's post said, it shares the CPU with other tasks.

Re: Agree - and perhaps it's use-case specific

@dan_conde Thanks for the clarification - it appears that, for the most part, an SDN-style controller is the crux of this issue of throughput and SDN deployment.

Re: Agree - and perhaps it's use-case specific

You're welcome. Regarding a controller -- that's partially correct. Some SDN systems require frequent access to the controller to figure out where packets should be forwarded. In that case, the controller can become a bottleneck, but not all systems are like that; others work more efficiently.

Re: Agree - and perhaps it's use-case specific

In turn, these protocols can influence the overall architecture of the network, as with OpenFlow, which attempts to completely centralize packet-forwarding decisions.
