Network Design: Router Vs. Switch

In doing designs lately, I’ve been increasingly running into situations where I have to think hard about whether to use a traditional router or an L3 switch. I’ve made some comments about this topic in prior blog posts, and now it seems worthwhile to revisit it in a little more depth. My objective is to stir up some thought, and maybe some debate, about device types and roles, and how they may be evolving.

Let’s back into this by considering my favorite device for this week: the Nexus 93180YC-EX switch. It provides 48 x 1/10/25 Gbps SFP28 ports plus 6 x 40/100 Gbps ports at wire speed (cited as 1.8 Tbps) in 1 RU. It does Layer 2 and Layer 3, with capacity for a fairly large routing table (but maybe not quite a full internet feed), plus EIGRP, OSPF, BGP, lots of VRFs, dot1q sub-interfaces, and vPC. What’s not to like?

Yes, you can go cheaper in the Cisco switch line, but most of the cheaper switches have 1 Gbps copper ports, and I wanted something I could leverage in a large CoLo to cross-connect to other buildings (i.e., optical ports), plus some of the other strengths listed above.

When you look at routers, getting some ports with an aggregate of say 40 Gbps of throughput may well cost a bit more. So, who needs the router?

Well, not quite so fast there!

The above emphasized what switches do well: VLANs and routing, wire speed, fast, fast, fast. Oh my! (That’s my new mantra.)

To simplify, the switch cost basically reflects the economics of doing the work in chips, versus chips plus CPU processing in the router. Networking chips currently have limiting aspects. For instance, TCAM allows very fast lookups of things like routes, ACL entries, QoS entries, etc. The Nexus 9372’s TCAM and coding supported VRF-aware NAT (in a somewhat limited way); the 93180’s TCAM does not.
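
To make the lookup idea concrete, here is a minimal, purely illustrative Python sketch of longest-prefix matching over a tiny route table (the prefixes and next-hop names are made up). A TCAM effectively runs this comparison across all entries in parallel in hardware, which is why a switch can do it at line rate; done in general-purpose software, it is an iteration.

    import ipaddress

    # Illustrative only: software longest-prefix match over a made-up route table.
    # TCAM hardware compares a destination against all entries at once;
    # here we have to loop and keep the most specific match.
    ROUTES = {
        "10.0.0.0/8":   "core-uplink",
        "10.1.0.0/16":  "building-A",
        "10.1.20.0/24": "server-vlan-20",
        "0.0.0.0/0":    "default-to-edge",
    }

    def lookup(dst_ip):
        dst = ipaddress.ip_address(dst_ip)
        best_len, best_hop = -1, None
        for prefix, next_hop in ROUTES.items():
            net = ipaddress.ip_network(prefix)
            if dst in net and net.prefixlen > best_len:
                best_len, best_hop = net.prefixlen, next_hop
        return best_hop

    print(lookup("10.1.20.5"))   # server-vlan-20 (most specific match wins)
    print(lookup("192.0.2.1"))   # default-to-edge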

WHAT IS A ROUTER GOOD FOR?

What routers may do well (and mid-scale switches likely do not do so well):

  • Non-Ethernet connections (which still do exist)
  • Large routing tables; check the verified scalability document for your favorite Cisco switch — numbers like 8,000, 24,000, or 32,000 prefixes are common
  • Many routing VRFs — campus and datacenter switches don’t need ‘em, at least for common uses
  • NAT, especially with protocol fixups (address in payload situations)
  • Large ACLs without loss of efficiency
  • QoS not tied to 4 or 8 hardware queues (i.e., QoS that can support more than 4, 6, or 8 traffic classes, and more complex policies)
  • QoS policing (switches may do this, with limitations)
  • QoS shaping (complex/not in chipsets; see the sketch after this list)
  • QoS: Layer 7 traffic recognition (Cisco “AVC”)
  • MPLS labels (OK, Nexus 7K M-series line cards are basically routers on a card)
  • Segment routing
  • LISP
  • IPsec or SSL VPN
  • (Other features, but this list is already long enough!)
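
Since policing and shaping both appear in that list, here is a purely conceptual token-bucket shaper sketch in Python (not any vendor’s implementation; the rate and burst numbers are arbitrary). The point is that shaping has to buffer and delay excess traffic, which takes memory and scheduling, while policing just meters and drops or remarks, which is why shaping is the harder one to push into a switch chipset.

    import time

    class TokenBucketShaper:
        # Toy shaper: delays packets so output never exceeds rate_bps on average.
        def __init__(self, rate_bps, burst_bytes):
            self.rate = rate_bps / 8.0      # refill rate in bytes per second
            self.capacity = burst_bytes     # maximum burst size in bytes
            self.tokens = burst_bytes
            self.last = time.monotonic()

        def send(self, packet_bytes):
            # Returns the delay in seconds to apply before transmitting.
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= packet_bytes:
                self.tokens -= packet_bytes
                return 0.0                  # conforming traffic: send immediately
            deficit = packet_bytes - self.tokens
            self.tokens = 0.0
            return deficit / self.rate      # excess traffic: buffer and delay, not drop

    # Example: shape to 10 Mbps with a 15 KB burst allowance.
    shaper = TokenBucketShaper(rate_bps=10_000_000, burst_bytes=15_000)
    print(shaper.send(1500))                # 0.0, within the burst allowance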

So, what you have to consider in doing a design is whether you are ever going to want something from that list.

Another consideration: Are you leveraging a feature that a particular switch supports, like VRF-aware NAT, that might not be supported in its successor? After all, when the vendor’s product designers plan a new product, they’re going to prioritize what that product niche must do well before they tackle optional items. So, L2 and L3 features might be pretty mainstream; optics support, ditto (though beware a possible lack of support for longer-haul optics in campus/datacenter switches, not that I’ve run into issues there with Cisco). NAT in a switch, not so much. If that feature has to be dropped due to TCAM or chipset constraints or coding costs, it will be.

One implication: Shifting to Ethernet-based WAN/MAN connections enables lower costs, since a switch rather than a router can terminate the WAN connections. But you still have to consider the features above.

BACK TO REALITY

Where I’ve previously seen the switch-versus-router tension was in WAN designs. The biggest decision factor used to be losing the ability to traffic-shape at HQ or the datacenter. But as 1 Gbps MAN links got cheap enough, regional all-1 Gbps MAN networks became feasible. QoS for voice and video prioritization might still be useful, but the cost justification gets a little touchier.

AND FIREWALLS?

To some extent, the same issue arises with firewalls: throughput is costly.

A lot of organizations feel firewalls are required at their internet network edge (and internet speeds are generally lower, helping keep firewall throughput costs affordable). However, those doing VMware NSX may be shifting to very granular ACLs within NSX, which scales well in terms of cost, performance, operational management, and segmentation, especially when using scale-out to add L4-7 firewall processing. That puts the complexity near the associated VMs.

If you’ve checked the price of a 20-40 Gbps firewall lately, you know that firewalls can get rather pricey. Scale-out distributes that cost within the server farm (and for virtual firewall licenses). Is that a gentler price/performance curve?

If NSX is handling the details, that might permit having edge firewalls do only IP-centric ACLs for coarse screening. One reason you might do this is to avoid duplicating the details in multiple places, which contains maintenance complexity and costs and avoids pointless duplication of effort. Part of the thought process here is that server tags and other features in NSX might make ACLs there a bit more manageable (i.e., like object groups, but tied to the actual VM instances or names). Of course, in principle, you then have to audit for naming or tagging consistency … security work never ends!

As a side note, that reflects one thing ISE and NSX potentially have in common: They enable getting away from IP address-based ACLs for security. That’s probably a good thing, in terms of clarity.
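
As a rough illustration of that shift (not NSX’s or ISE’s actual policy model or API; the VM names and tags below are made up), here is a small Python sketch. An IP-based rule is tied to addressing and has to be reworked when workloads move or renumber; a tag-based rule follows the workload.

    # Illustrative only: the concept of tag-based vs. IP-based rules, not a real API.

    # IP-centric rule: tied to where things happen to be addressed.
    ip_rule = {"src": "10.1.20.0/24", "dst": "10.1.30.15", "port": 1433, "action": "allow"}

    # Tag-centric rule: tied to what the workloads are.
    tag_rule = {"src_tag": "app-tier", "dst_tag": "db-tier", "port": 1433, "action": "allow"}

    # Hypothetical inventory mapping VM names to tags (roughly what security
    # tags/groups give you; the VM names and tags here are invented).
    vm_tags = {
        "web01": {"app-tier"},
        "sql01": {"db-tier"},
    }

    def tag_rule_matches(rule, src_vm, dst_vm, port):
        return (rule["src_tag"] in vm_tags.get(src_vm, set())
                and rule["dst_tag"] in vm_tags.get(dst_vm, set())
                and rule["port"] == port)

    print(tag_rule_matches(tag_rule, "web01", "sql01", 1433))  # True, whatever the IPs are

The auditing chore mentioned above then becomes verifying that the tag inventory is accurate, rather than tracking thousands of addresses.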

For one recent design, we ended up with coarse firewalling at the edge, moderately coarse firewalling at the user/edge/datacenter core (basically, which network components can get to which others), and heavier detail, plus micro-segmentation of server-to-server traffic, in NSX.

Now here’s where I have to ask an outside-the-box question. If you’re enforcing coarse IP-centric ACLs, that’s something a switch can do fast and cheap, as long as your ACL isn’t too large. If you’re allowing the internet in to some DMZ subnets, that’s likely not a big ACL. Add in blocking bogons and other typical edge functions, and it’s still not that big. BGP accepting default for routing, advertising the site prefix? The L3 switch can do that.
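
As a rough sketch of how small that coarse screening stays (illustrative only; the prefixes and rules below are made up, and a real deployment would express this in the switch’s ACL syntax rather than Python):

    import ipaddress

    # Hypothetical coarse edge screening: a few allow rules for DMZ services
    # plus a bogon drop list, with default deny. Real gear would express this
    # as a short ACL; the point is just how short the list stays.
    BOGONS = [ipaddress.ip_network(p) for p in
              ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16", "127.0.0.0/8")]
    ALLOW = [
        {"dst_net": ipaddress.ip_network("203.0.113.0/26"),  "dst_port": 443},  # DMZ web
        {"dst_net": ipaddress.ip_network("203.0.113.64/28"), "dst_port": 25},   # DMZ mail
    ]

    def permit(src_ip, dst_ip, dst_port):
        src, dst = ipaddress.ip_address(src_ip), ipaddress.ip_address(dst_ip)
        if any(src in net for net in BOGONS):            # drop bogon/spoofed sources
            return False
        return any(dst in rule["dst_net"] and dst_port == rule["dst_port"]
                   for rule in ALLOW)                     # default deny otherwise

    print(permit("198.51.100.7", "203.0.113.10", 443))   # True
    print(permit("10.9.9.9", "203.0.113.10", 443))       # False: bogon source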

But remove the firewall? That’s heresy, and it might result in a lawsuit if something bad happens! Because I’m lawsuit-averse, I’m not going to recommend not having a firewall here. I’m just considering what the edge firewall actually does for you. Your answer may vary. “Protocol correctness enforcement” or “stateful return consistency” might be part of your answer.

(I’d have written “lawyer-averse,” except one of my sisters and one of my daughters are lawyers, and I’m definitely not averse to them.)

COMBINING THEM

The above led me to think, “What if routers, firewalls, and switches could be combined – aren’t the product boundaries blurring?” Well, there already is an answer to this (aside from the SMB types of all-in-one firewalls). That would be the Cisco ENCS 5000.

I’m mentioning this because it does combine functionality, allowing you to run a virtual router, virtual firewall, virtual wireless controller, etc. (“NFV”, network function virtualization). Performance-wise, it may well fit small- to medium-sized enterprises (i.e., those where the performance fits the requirements). One advantage would be fewer devices in your rack: perhaps just switches and this platform. It also addresses sites that might like the idea of virtual devices, but want to keep them separate from their UCS or other server platforms and avoid shared failure modes.

This is where I also need to say “SD-WAN.” SD-WAN devices seem to be rapidly evolving to include firewall functionality, and typically claim QoS/traffic engineering capabilities and L7 traffic recognition. Switching, not so much. My suspicion is that some are weak on the routing (distributed control and decision-making) side of things, perhaps making purely local decisions with only central policy distribution. Like WAN accelerator/shaping boxes used to do, but evolved. Is doing so a bad idea? It’s hard to tell what SD-WAN vendors are doing without intimate knowledge of multiple SD-WAN products, something I lack at present (ROI: Time it would take to figure it out versus weak desire to know the answer).

From a design perspective, I worry about people designing ad hoc VPN connections and creating unmanageable “VPN spaghetti” with SD-WAN boxes, though I haven’t seen it yet. I’ve seen that done a bit with some classic routers and firewalls, but it’s enough work that people generally revert to hub-and-spoke designs. With SD-WAN, we might encounter the beer principle: Good thing, but easily overdone?

This article originally appeared on the Netcraftsmen blog.