Starting from scratch with the UCS, Cisco has a somewhat easier time than competitors Dell, HP and IBM in building up its product portfolio, since it has no legacy products to support -- yet.
But it does mean buying into the whole Cisco vision--servers, storage and networking--to really make a go of it. Dell, HP and IBM want you to do the same, but with their array of partners, including (in some cases) Cisco, you have more choice and the potential for a more customized and targeted overall solution.
Cisco is pushing the concept of any port, any server, anywhere in the data center, in effect abstracting the physical hardware away from server location. Cisco hasn't gone as far as the now-defunct Liquid Computing, which used Non-Uniform Memory Access (NUMA) technology to distribute computing across any set of servers, sharing CPU, memory and I/O as needed. But with this announcement, Cisco is making the networking more seamless and flexible.
FEX-Link is the latest iteration of connectivity between Cisco's UCS blades and the network, and it describes an architecture rather than a product. FEX-Link is a fabric extension that uses low-cost physical hardware to connect a server to a Nexus 5000, which provides framing, forwarding, routing and other network services. For servers connected to the same Nexus 5000, network traffic travels from the blade through the chassis's Nexus 2000 FEX to a UCS 6100, to a Nexus 5000, and back again. As one representative put it, "FEX is a line card having an out-of-chassis experience."
Compared to other chassis-to-network architectures that put switch functionality on the blade itself, allowing direct interswitch connectivity and leaving only uplinks for traffic destined for servers or clients elsewhere, the FEX-Link architecture seems overly rigid. But since the Nexus 5000 is doing the switching, it can handle up to 384 10 Gb or 576 100/1000 Ethernet servers in a single switched domain. To make this happen, Cisco has to ensure there is adequate capacity between the servers and the Nexus so that there are no bottlenecks.
The capacity between the UCS blades and the UCS 6100 Series Fabric Interconnect has been increased to 160 Gbps, 80 Gbps in each direction, a fourfold increase. The interconnects are also active/active, meaning the Nexus 2000 can be connected to different UCS 6100s. The additional capacity doesn't require a new UCS 5100 chassis, but it will require new mezzanine interface cards that connect the blade servers to the UCS backplane.
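The bandwidth figures are easy to sanity-check. A minimal back-of-the-envelope sketch, assuming eight half-width blades per UCS 5100 chassis (a blade count not stated in this article):

```python
# Rough per-blade bandwidth check for the new FEX-Link capacity.
# Figures from the article: 160 Gbps total per chassis, i.e. 80 Gbps
# in each direction. BLADES_PER_CHASSIS is an assumption for
# illustration, not a number from the article.

CHASSIS_GBPS_PER_DIRECTION = 80   # per the article: 80 Gbps each way
BLADES_PER_CHASSIS = 8            # assumed half-width blade count

per_blade_gbps = CHASSIS_GBPS_PER_DIRECTION / BLADES_PER_CHASSIS
print(f"Per-blade bandwidth: {per_blade_gbps:.0f} Gbps each direction")
```

Under these assumptions, every blade can run a 10 Gb Ethernet port at line rate to the fabric with no oversubscription, which is the kind of headroom Cisco needs to make the "any port, any server" claim credible.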
To read the rest of the story, go to Network Computing.