The Nexus 6004 has four crossbar fabric modules, each providing a cross-connect matrix of 192 x 384 (ingress x egress) 10-Gbps paths. This asymmetric design helps to reduce fabric contention, because each UPC has twice as many egress fabric connections as ingress fabric connections. Let's look in more detail at the connection between the UPCs and the crossbar fabric modules.
1. The ingress UPC connects to all four fabric modules with four connections each, for a total of 16 connections between the ingress UPC and the crossbar fabric.
2. Each of these 16 connections offers 14 Gbps of bandwidth, for a total of 224 Gbps from the ingress UPC to the fabric.
3. The four crossbar fabrics connect to the egress UPC with eight connections each, for a total of 32 connections between the egress UPC and the crossbar fabric.
4. Each of these 32 connections offers 14 Gbps of bandwidth, for a total of 448 Gbps from the fabric to the egress UPC.
5. A switch fabric scheduler moves packets from the ingress UPC across the crossbar fabric matrix to the egress UPC.
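To make the arithmetic in the steps above concrete, here is a minimal sketch of the UPC-to-fabric math. The numbers come straight from the list; the variable names are my own, not Cisco terminology.

```python
# Connectivity between one UPC and the crossbar fabric, per the text.
FABRIC_MODULES = 4
LINK_GBPS = 14                  # each fabric connection runs at 14 Gbps
INGRESS_LINKS_PER_FABRIC = 4    # ingress UPC: 4 links to each fabric module
EGRESS_LINKS_PER_FABRIC = 8     # egress UPC: 8 links from each fabric module

ingress_links = FABRIC_MODULES * INGRESS_LINKS_PER_FABRIC  # 16 links
egress_links = FABRIC_MODULES * EGRESS_LINKS_PER_FABRIC    # 32 links

ingress_bw = ingress_links * LINK_GBPS  # 224 Gbps toward the fabric
egress_bw = egress_links * LINK_GBPS    # 448 Gbps out of the fabric

print(ingress_links, ingress_bw)  # 16 224
print(egress_links, egress_bw)    # 32 448
print(egress_bw // ingress_bw)    # 2 -> the 2:1 egress-to-ingress ratio
```

The final ratio is the asymmetry described earlier: double the egress paths means the fabric can deliver to an egress UPC faster than any single ingress UPC can send, reducing contention at the crossbar.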
Egress Unified Port Controller
Architecturally, the egress UPC is the same chip as the ingress UPC. However, because traffic flows in the opposite direction (egress instead of ingress), the buffering structure is also different. There are a total of 16 egress queues: eight for unicast and eight for multicast, with each set of eight corresponding to the eight 802.1p classes of service. If desired, the switch operator can adjust how system resources are divided among the queues.
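The queue structure described above can be modeled as a simple lookup keyed by traffic type and CoS value. This is an illustrative sketch of the 8-unicast-plus-8-multicast layout, not a representation of the actual ASIC data structures.

```python
# Hypothetical model of the 16 egress queues: 8 unicast + 8 multicast,
# one queue per 802.1p class of service (CoS 0-7).
from collections import deque

egress_queues = {
    (traffic_type, cos): deque()
    for traffic_type in ("unicast", "multicast")
    for cos in range(8)
}

print(len(egress_queues))          # 16
print(("unicast", 5) in egress_queues)  # True
```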
When traffic on the egress UPC becomes congested (packets stacking up as they wait to be delivered), moving traffic out of the egress queues is handled by deficit round-robin (DRR), with the exception of one strict-priority queue. The strict-priority queue is always serviced first to guarantee timely, jitter-free delivery, while the remaining queues are dequeued according to their DRR weights. To help avoid further congestion, random early detection (RED) decides which packets should be marked with explicit congestion notification (ECN).
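The dequeuing behavior can be sketched as follows. This is a simplified, generic illustration of strict priority layered on top of deficit round-robin; the queue names, weights, and packet sizes are invented for the example and are not Nexus 6004 configuration.

```python
# Sketch: strict-priority queue drained first; remaining queues share
# bandwidth via deficit round-robin (one packet returned per call).
from collections import deque

class DrrScheduler:
    def __init__(self, weights):
        # weights: quantum in bytes credited to each queue per visit
        self.weights = weights
        self.queues = {name: deque() for name in weights}
        self.queues["priority"] = deque()  # the strict-priority queue
        self.deficit = {name: 0 for name in weights}

    def enqueue(self, queue, packet_bytes):
        self.queues[queue].append(packet_bytes)

    def dequeue(self):
        # Strict-priority traffic always goes first.
        if self.queues["priority"]:
            return "priority", self.queues["priority"].popleft()
        # Otherwise, visit the weighted queues round-robin, crediting
        # each queue its quantum and sending when the head packet fits.
        for name, quantum in self.weights.items():
            q = self.queues[name]
            if not q:
                self.deficit[name] = 0  # idle queues keep no credit
                continue
            self.deficit[name] += quantum
            if q[0] <= self.deficit[name]:
                self.deficit[name] -= q[0]
                return name, q.popleft()
        return None  # nothing eligible to send

sched = DrrScheduler({"gold": 1500, "silver": 500})
sched.enqueue("priority", 100)
sched.enqueue("gold", 1000)
sched.enqueue("silver", 400)
print(sched.dequeue())  # ('priority', 100) -- strict priority wins
print(sched.dequeue())  # ('gold', 1000)
print(sched.dequeue())  # ('silver', 400)
```

The key property to notice is that the strict-priority queue never waits behind the weighted queues, which is what provides the jitter-free delivery described above.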
Once a packet is ready to be taken from the egress queue and delivered by the egress UPC to its destination, the same process of encapsulation/decapsulation, destination lookup, framing, and so on follows so that the packet can be sent.
The Cisco Nexus 6004 switch is positioned as a heavy lifter. In a data center design, the switch can play roles at the core, aggregation, or access layer, and can be relied upon to deliver traffic with very low latency. The 6004 doesn't ask the network designer to compromise performance simply because a particular type of encapsulation or security policy is applied. Nor is Layer 3 forwarding a bolt-on afterthought; instead, line-rate routing functionality is built into the ASICs, providing a network designer with great flexibility and making the 6004 a candidate for a number of interesting roles in both greenfield and brownfield deployments.
This article summarizes key points from the Cisco Nexus 6004 Switch Architecture document soon to be published by Cisco, which was kind enough to let me review an advance copy. I also used the Cisco presentation BRKARC-3453, which is available from CiscoLive365.com, as a reference. The illustrations used in this post come from Cisco.