Merchant Silicon About to Get Smarter
April 18, 2013
Because SDN moves packet processing intelligence into central controllers, some deduce that data center switches will become nothing more than dumb forwarding engines—and hence subject to the brutal economics of commodity hardware, in which every box looks and acts basically the same because they're all designed for cost and simplicity and built with similar components.
The 2012 InformationWeek SDN survey found that most of our readers don't buy this conclusion: only 29% were convinced that SDN implies the eventual dumbing down of routers and switches, though a sizable 34% weren't sure. Still, a plurality do believe that SDN will lead to somewhat lower prices and less hardware differentiation, undoubtedly due to the increased use of standard parts.
Yet as Greg Ferro wrote over a year ago, the trend toward merchant silicon began well before SDN came on the scene, and it will continue whether or not data center LANs migrate to SDN models. The reason has more to do with semiconductor economics than network architecture preferences: chips are hard to design, expensive to build and difficult to update. Tech companies, whether they make CPUs, memory devices or network hardware, need to amortize that effort and expense across as many devices as possible.
In the ultimate irony, SDN could actually lead merchant-based designs to require more sophisticated and programmable switch silicon, for a couple of reasons. First, just because SDN moves forwarding decisions into a central controller doesn't mean there isn't plenty of communication processing between controller and switch, and unless you don't give a whit about security, this control plane chatter must be encrypted. Second, any move to an SDN-architected data center won't happen overnight, meaning that most switches will be hybrid devices, performing some traditional L2/L3 switching and routing and some SDN-based forwarding.
Unless data center networks are carefully redesigned to isolate legacy LANs from greenfield SDN/OpenFlow infrastructure, edge and aggregation switches will end up managing a mix of traditional and OpenFlow traffic. And because it's impossible to predict what protocols and packet handling switches will be asked to support over their lifespan (witness the heavy reliance on tunneling and address translation driven by the increased use of virtual networks, private networks and 6to4 IPv6 encapsulation), it's unwise to hardcode functionality in merchant silicon.
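The hybrid forwarding logic described above can be sketched in a few lines of Python. This is purely illustrative; the flow table, MAC table and packet fields below are hypothetical simplifications, not any vendor's API. The idea is that a hybrid switch consults its controller-installed OpenFlow table first and, on a miss, falls back to conventional learned-MAC L2 forwarding.

```python
# Illustrative sketch of hybrid (OpenFlow + legacy L2) forwarding.
# All names here are hypothetical simplifications, not a real switch API.

FLOOD = -1  # pseudo-port: send out all ports except the ingress port

def forward(flow_table, mac_table, in_port, src_mac, dst_mac):
    """Return the egress port for a frame.

    flow_table: dict mapping (in_port, dst_mac) -> egress port,
                populated by an SDN controller via flow-mod messages.
    mac_table:  dict mapping MAC address -> port, learned from traffic.
    """
    # Legacy behavior: learn the source MAC on the ingress port.
    mac_table[src_mac] = in_port

    # 1. Try the controller-installed OpenFlow flow table first.
    port = flow_table.get((in_port, dst_mac))
    if port is not None:
        return port

    # 2. Table miss: fall back to traditional learned-MAC switching.
    return mac_table.get(dst_mac, FLOOD)

# Usage: the controller pins one flow; everything else takes the legacy path.
flows = {(1, "aa:bb"): 7}
macs = {}
print(forward(flows, macs, 1, "cc:dd", "aa:bb"))  # 7 (OpenFlow match)
print(forward(flows, macs, 2, "aa:bb", "cc:dd"))  # 1 (learned legacy path)
```

Real hybrid switches make this dispatch decision in silicon at line rate, which is exactly why a fixed-function parser becomes a liability.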
Arista, poster child for the merchant silicon uprising, sees software as a key product differentiator, one reason it selected the Intel (Fulcrum) FM6000 series programmable switch silicon for its latest 7150 series products. The chip's programmable parser [PDF whitepaper] allows traffic to be directed to different processing pipelines for OpenFlow or traditional traffic at line-rate speeds. Intel's sister FM7000 series adds hardware support for NAT, including ECMP load balancing of private, NATed addresses, IP tunneling and 6to4 encapsulation.
Even in pure OpenFlow environments, switches will still be responsible for some serious local processing. For example, centralized SDN networks present a number of new security challenges: an attacker that intercepted control plane traffic could not only map out the entire network topology, but also modify flow forwarding to steal data or create the mother of all DoS attacks, in which the network essentially attacks itself like an autoimmune system gone haywire.
As Chris Hoff points out in his Rational Survivability blog, segregating the data and control planes "means we now have additional (or at least bifurcated) attack surfaces and threat vectors. And not unlike compute-centric virtualization, the C&C channels for network operation represents a juicy target." Thus, it's imperative that control plane traffic be encrypted, but running SSL/TLS on every switch, even if it's only a few control ports, turns each one into a mini-VPN gateway with the concomitant processing overhead.
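To see why an unencrypted control channel is so exposed, consider how little work it takes to read (or forge) OpenFlow messages off the wire. The sketch below decodes the fixed 8-byte OpenFlow 1.0 header (version, type, length, transaction ID, all big-endian per the spec); a passive attacker on a plaintext control channel can do exactly this to watch flow-mod messages go by, which is why the OpenFlow specification calls for TLS on the controller-switch connection.

```python
import struct

# OpenFlow 1.0 fixed header: version (1B), type (1B), length (2B), xid (4B),
# in network (big-endian) byte order per the wire format.
OFP_HEADER = struct.Struct("!BBHI")
OFPT_FLOW_MOD = 14  # OF 1.0 message type that installs/modifies forwarding rules

def parse_header(data):
    """Decode the 8-byte OpenFlow header from raw control-channel bytes."""
    version, msg_type, length, xid = OFP_HEADER.unpack(data[:8])
    return {"version": version, "type": msg_type, "length": length, "xid": xid}

# A plaintext FLOW_MOD header is trivially readable by anyone on the path.
wire = OFP_HEADER.pack(0x01, OFPT_FLOW_MOD, 72, 0xDEADBEEF)
hdr = parse_header(wire)
print(hdr["type"] == OFPT_FLOW_MOD)  # True: attacker knows a rule changed
```

Wrapping this channel in TLS closes the eavesdropping hole, but it also means every switch must terminate an encrypted session, which is precisely the crypto-offload workload driving more capable switch silicon.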
While the Intel/Fulcrum parts and Juniper's Trio ASIC [PDF] incorporate programmable elements, the next stage in merchant silicon evolution will incorporate actual CPU cores.
LSI's just-announced Axxia 4500 may be the first ARM-based switching and communications processor, but it won't be the last. The chips combine LSI's custom network accelerator modules with two to four ARM Cortex-A15 cores, all connected by a new CoreLink interconnect optimized for high-speed communications and cache coherency in multi-core SoCs. The result is what LSI calls a "virtual pipeline task ring," which Marketing Director Troy Bailey says can pass many packets directly to acceleration modules without a round trip to a CPU core for packet analysis and classification.
The chip, with up to four 1.6 GHz A15 cores, includes a 50 Gbps packet processor, a 20 Gbps security engine and a 10 Gbps deep packet inspection (DPI) engine, all several times faster than those in existing LSI chips, along with a 10-port non-blocking L2 10 GbE switch.
But the real potential is in those ARM cores. LSI provides an SDK with C APIs allowing equipment vendors to customize the product to handle various network stacks and middleware applications. And if software customization isn't good enough, Bailey says the firm offers the chip's various subsystems as standard ASIC cells that can be mixed and matched as needed into something of a hybrid standard/custom silicon solution. While the device won't be sampling until the end of the year, likely showing up in products in 2014, it presages what will certainly be a trend in the network component business.
What ARM did for mobile processors, it's about to do in network hardware. Network equipment vendors pushing proprietary hardware will be forced to adapt, while customers reap commoditization's benefits: better, faster, cheaper hardware.
Kurt Marko is an IT pro with broad experience, from chip design to IT systems.