How to Set Up Cisco Nexus Fabric Extender

Cisco Nexus Fabric Extenders (FEXs) provide ToR connectivity for Nexus 5000 and 7000 series switches. This 9-step plan shows you how to bring a FEX online, and includes configuration tips and code examples.

In a data center that has deployed Cisco Nexus 5000 or 7000 switches, Cisco Nexus 2000 series fabric extenders (FEX) are commonly used for top-of-rack (ToR) connectivity. FEX units are priced attractively because they serve a limited purpose and are not feature-rich switches. In fact, a FEX is not a switch at all: it cannot switch traffic locally or be managed independently, and it functions only when connected to a parent Cisco Nexus 5K or 7K series switch. All traffic flowing into a FEX is sent to the parent 5K or 7K for forwarding, even if the destination is on the originating FEX. If you think of a FEX as a remote line card with no local switching capability, you've got the idea.

Having established that a FEX is not a switch, let's take a look at the process of installing a new FEX and bringing it online.


1. Rack the FEX. The official Cisco documentation demonstrates racking a FEX in a four-post rack, using the supplied rack ears and rail slides. I have successfully racked FEXs in open two-post racks using only the rack ears. If you choose to do this, be sure to mount the ears further back along the chassis for better weight balance. However you mount the FEX, provide enough clearance for the fiber uplink cables, as they stick out of the front of the chassis. Tight tolerances could prevent the rack door from shutting.

Do not power on the FEX until all cabling and uplink port provisioning on the uplinked 5K or 7K has been completed.

2. Install & cable the uplinks. When uplinking to Nexus 5Ks, one topology option is to dual-home the physical Fabric Extender. Dual-homing a FEX provides path redundancy, but cuts in half the total number of Fabric Extenders you can deploy. A Nexus 5K supports 24 total connected FEX devices, meaning that two 5Ks could support 48 total single-homed physical FEX. When dual-homing, only 24 total FEX are supported between the two 5Ks.

If you choose to single-home the FEX, you lose uplink path redundancy for single-attached hosts. If single-attached hosts are not a concern, a common scenario is to deploy two FEX devices at the top of each rack, each single-homed to a different Nexus 5K or 7K. Multi-attached servers then spread their uplinks across the two ToR FEX devices and in that way enjoy uplink path redundancy. As of this writing, only Nexus 5Ks support dual-homing of FEX.

If you choose to dual-home the FEX, the uplink ports you select must match on both 5K-1 and 5K-2. For example, if you choose Eth1/1 on 5K-1, you must also use Eth1/1 on 5K-2. Another wise design choice is to spread your uplinks over multiple 5500 ASICs (the silicon inside the switch responsible for forwarding traffic). In the Nexus 5500s, ASICs are mapped to groups of eight consecutive Ethernet ports on the front of the switch: 1-8, 9-16, and so on. Therefore, spreading FEX uplinks over ports 1 and 9 is smarter than using 1 and 2; if the ASIC servicing ports 1-8 fails, you won't lose both FEX uplinks.

Although options vary by model, FEX can be uplinked with a variety of media. When ordering, be aware of Cisco SKUs that bundle the FEX with uplink media, such as Fabric Extender Transceivers (FETs) or Twinax. Each option has a different price and distance constraint, so research this carefully to be sure you meet your installation’s requirements.

Another consideration is the number of uplinks to use. For example, a 2248TP FEX has 48 10/100/1000 access ports and four 10-Gbps uplink ports. If you uplink all four 10G ports, the oversubscription ratio of access to uplink bandwidth is 1.2:1, which is very low. At the same time, you've used up four expensive 10G ports on the Nexus switch at the other end. Do your traffic patterns warrant using all four uplink ports, or can you get by with just two? Note that from a technical standpoint, the FEX will function correctly with only a single working uplink, but a sensible design uses at least two in a production environment.
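
The ratio comes straight from the port counts, assuming all 48 access ports run at 1 Gbps; a two-uplink design is shown for comparison:

48 access ports x 1 Gbps  = 48 Gbps of access bandwidth
 4 uplink ports x 10 Gbps = 40 Gbps of uplink bandwidth
48 / 40 = 1.2:1 oversubscription (all four uplinks)
48 / 20 = 2.4:1 oversubscription (two uplinks)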

The remainder of this example assumes a dual-homed FEX using all four uplinks, connected to a Nexus 5500 pair configured in a virtual port-channel domain.
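
The configuration in steps 3 and 4 assumes the vPC domain between the two 5500s already exists. For reference, a minimal vPC baseline on each 5K looks roughly like the sketch below; the domain ID, peer-keepalive addresses and peer-link ports are placeholders rather than values from this article, and depending on your platform and NX-OS version you may also need to enable the FEX feature before the fex commands are accepted.

feature lacp
feature vpc
feature fex

vpc domain 1
 peer-keepalive destination 192.168.1.2 source 192.168.1.1

interface port-channel1
 description VPC PEER-LINK
 switchport mode trunk
 vpc peer-link

interface Ethernet1/31
 description VPC PEER-LINK
 switchport mode trunk
 channel-group 1 mode active
 no shut

interface Ethernet1/32
 description VPC PEER-LINK
 switchport mode trunk
 channel-group 1 mode active
 no shut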

3. Configure a virtual port-channel and add physical interfaces. Two of the FEX uplinks will be homed to one 5K, and two to the other 5K. Then, all four of the FEX uplinks will be combined into a single virtual port channel. Each FEX is assigned a number from 100 to 199.

The physical interface requires two specific commands to tell the hosting 5K that the interface is servicing a FEX. The command "switchport mode fex-fabric" lets the Nexus switch know that the device on the other end of the link is a fabric extender. Note that if you use FETs as uplink media, the switch can't use these optical modules until this command is in place.

The command "fex associate " tells the Nexus switch which specific FEX is being uplinked to that port. The number selected must match for all uplink ports.

4. Apply the code below to both 5Ks. Once this step is complete, you can optionally type "term mon" to prepare your console session to watch the messages that will scroll as the FEX comes up for the first time.

interface Po101
 description UPLINK FEX-01
 vpc 101
 switchport mode fex-fabric
 fex associate 101
 no shut

interface Ethernet1/1
 description UPLINK FEX-01
 switchport mode fex-fabric
 fex associate 101
 channel-group 101
 no shut

interface Ethernet1/9
 description UPLINK FEX-01
 switchport mode fex-fabric
 fex associate 101
 channel-group 101
 no shut
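
With the configuration applied to both 5Ks, a few show commands are useful for confirming that the port-channel, vPC and FEX association are in place. The commands below are a general sketch; the FEX itself won't report as online until it has been cabled and powered, which is covered on the next page.

show port-channel summary
show vpc brief
show fex
show fex detail
show interface fex-fabric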

Next Page: Powering Up


