
Arista's 7500 Data Center Switch: High Speed, Small Profile

Arista's new 7500 is a high-capacity, high-speed switch that the company is targeting as a data center interconnect. Unlike the campus LAN, where Cisco commands a large percentage of the ports, the data center network is much more in play. Many networking vendors are promoting flatter data center networks, but doing so requires a high-capacity switch at the core that can handle the total load gracefully. The Arista 7500 certainly fills that requirement. The biggest problem you may have is running 384 cables to the switch.

Arista jammed a lot of power into an 11RU chassis: 384 10Gb ports and 10Tb of switching capacity, with 3.4 microseconds of port-to-port latency on 64-byte packets. The switch can forward 5.76 billion packets per second. Each line card has 2.3 GB of RAM for packet buffering, enough for 50 ms of packets per 10Gb switch port. The 7500 has front-to-back cooling, an important feature for managing cooling in equipment racks, and dual supervisor modules for redundant management. Pricing starts at $140,000.
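For readers who want to sanity-check those figures, here is a rough back-of-the-envelope sketch in Python. It assumes standard Ethernet framing overhead (an 8-byte preamble plus a 12-byte inter-frame gap) on top of each 64-byte frame; the per-port rate, aggregate rate, and per-port buffer size it produces line up with the numbers above.

# Back-of-the-envelope check of the 7500 figures quoted above.
PORTS = 384                 # 10Gb ports in a fully loaded chassis
PORT_SPEED_BPS = 10e9       # 10 Gb/s per port
FRAME_BYTES = 64            # minimum Ethernet frame size
WIRE_OVERHEAD_BYTES = 20    # assumed: 8-byte preamble + 12-byte inter-frame gap

# Packets per second on one 10Gb port at 64-byte frames
pps_per_port = PORT_SPEED_BPS / ((FRAME_BYTES + WIRE_OVERHEAD_BYTES) * 8)
print(f"Per-port rate: {pps_per_port / 1e6:.2f} Mpps")          # ~14.88 Mpps

# Aggregate packet rate across all 384 ports
print(f"Chassis rate: {PORTS * pps_per_port / 1e9:.2f} Bpps")   # ~5.71 Bpps

# How much memory 50 ms of line-rate traffic occupies on one 10Gb port
buffer_bytes = PORT_SPEED_BPS / 8 * 0.050
print(f"50 ms at 10 Gb/s: {buffer_bytes / 1e6:.1f} MB per port")  # 62.5 MB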

Who needs that kind of power? The obvious answer is financial institutions engaged in trading, where microseconds matter, and HPC applications like research and modeling. Phil Papadopoulos, director of UC Systems at the San Diego Supercomputer Center, has been using an Arista 7500 for several months. The researchers he supports often need to analyze terabytes of data striped across many disks on a SAN. "The 7500 makes a good interconnect between server clusters and storage. What's important to us is that we can bisect the 7500 and burst 192 ports to 192 ports."
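Papadopoulos's "192 ports to 192 ports" comment is essentially a bisection-bandwidth claim. A quick calculation (assuming line-rate, non-blocking forwarding across the fabric, which is what Arista advertises) shows what such a burst amounts to:

# Bisection bandwidth when half of the 384 ports burst to the other half,
# assuming line-rate, non-blocking forwarding across the fabric.
ports_per_side = 192
port_speed_gbps = 10
bisection_gbps = ports_per_side * port_speed_gbps
print(f"{bisection_gbps / 1000:.2f} Tb/s in each direction")  # 1.92 Tb/s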

Your data center may start to look more like an HPC cluster in the future. Virtualization lets you consolidate several servers onto a single platform. To maintain adequate performance, the hypervisor server needs high-capacity networking and storage to keep up with the virtual machines. One of the reasons companies want to stick with Fibre Channel is that it is a high-capacity network, separate from the data network, with the predictable low latency required for good storage performance. "Approximately 4.5 microsecond port-to-port latency," Papadopoulos said, "is close to the 1.7 to 2.2 microseconds latency with InfiniBand. In many cases, the approximately 10 percent difference is not important." With its low latency and large per-port buffers, the 7500 should be able to keep up with any data you throw at it.