
40Gb/s InfiniBand Demonstration At The International Supercomputing Conference

The High Performance Computing Advisory Council, an international group of vendors and research labs dedicated to researching and promoting HPC use, is demonstrating a multi-vendor 40Gb/s InfiniBand network integrating computing, graphics processing units, and networking to show interoperability at the International Supercomputing Conference '09. HPC is used by research organizations such as Lawrence Livermore National Laboratory and the Swiss National Supercomputing Centre (CSCS), and by universities such as the Cornell University Center for Advanced Computing and Ohio State University.

InfiniBand is used in HPC because it is a high-speed serial connection between two nodes, up to 96 Gb/s, with latencies measured in microseconds. InfiniBand switches interconnect nodes in a switched fabric with multiple paths between nodes. The switch fabric avoids congestion and ensures that the full bandwidth can be utilized when needed. Besides HPC applications, InfiniBand can also be used for I/O virtualization by connecting the computer memory bus, via host channel adapters, to the InfiniBand network. Target channel adapters connect to I/O modules such as Ethernet NICs and storage controllers. In addition, RAM can be pooled and shared.
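
To make the host channel adapter role concrete, here is a minimal sketch in C against the standard libibverbs API: it enumerates the HCAs visible on a node and reports the state of their first port. It is an illustration of how software sees the fabric, not part of the council's demonstration, and the single-port assumption is made only for brevity.

    #include <stdio.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num_devices = 0;
        struct ibv_device **devices = ibv_get_device_list(&num_devices);

        if (!devices || num_devices == 0) {
            fprintf(stderr, "no host channel adapters found\n");
            return 1;
        }

        for (int i = 0; i < num_devices; i++) {
            struct ibv_context *ctx = ibv_open_device(devices[i]);
            struct ibv_port_attr port;

            if (!ctx)
                continue;

            /* HCA port numbering starts at 1; assume a single-port adapter. */
            if (ibv_query_port(ctx, 1, &port) == 0)
                printf("%s: state=%d lid=%u width=%u speed=%u\n",
                       ibv_get_device_name(devices[i]),
                       (int)port.state, (unsigned)port.lid,
                       (unsigned)port.active_width, (unsigned)port.active_speed);

            ibv_close_device(ctx);
        }

        ibv_free_device_list(devices);
        return 0;
    }

On a Linux node with the OFED stack installed, this builds against the libibverbs development headers and links with -libverbs.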

Two key demonstrations are an application sharing graphics processors for distributed modeling and simulation, and a remote HPC desktop. Complex analyses such as financial modeling and geographic simulations require specialized processors to complete their tasks. The demonstration shows an application leveraging graphics processing units (GPUs) in many servers to complete a simulation. Rather than running a computation and storing the results in a database, the GPUs interact directly, significantly reducing processing time. The other demonstration uses remote desktop technology to let a high-powered computer render three-dimensional graphics and send the results in real time to multiple desktops in high definition.
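
The pattern the GPU demonstration depends on, partial results exchanged directly between nodes over the fabric rather than staged in a database, looks roughly like the hedged MPI sketch below. It is a generic illustration in C, not the demonstration's actual application, and the simple per-rank loop stands in for a GPU kernel.

    #include <mpi.h>
    #include <stdio.h>

    #define N 1024  /* local chunk size per rank (illustrative only) */

    int main(int argc, char **argv)
    {
        int rank, size, i;
        double sum = 0.0, total = 0.0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Stand-in for a GPU kernel producing a partial simulation result. */
        for (i = 0; i < N; i++)
            sum += (double)(rank * N + i);

        /* Partial results are combined directly over the interconnect,
           not written to a database and read back. */
        MPI_Allreduce(&sum, &total, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

        if (rank == 0)
            printf("combined result from %d ranks: %.0f\n", size, total);

        MPI_Finalize();
        return 0;
    }

MPI implementations such as Open MPI and MVAPICH carry this exchange over the InfiniBand verbs transport, which is where the fabric's bandwidth and low latency pay off.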

For the demonstration, Mellanox is providing its IS5035 36-port InfiniBand edge switches to the demonstration participants, along with its MTS3610, a 324-port 20 and 40 Gb/s InfiniBand switch with a 51.8 Tb/s switching fabric and 100-300 nanosecond latency. The HPC Advisory Council is also releasing a case study on the Juelich Research on Petaflop Architectures (JUROPA) system in Juelich, Germany, a 300 teraflop/s HPC cluster consisting of 3,288 compute nodes, 76 TB of memory, and 26,304 cores, built from Sun blade servers, Intel Nehalem processors, and cluster operating software by ParTec. Mellanox provided the 40 Gb/s InfiniBand networking, which achieves 92% computing efficiency, compared to 50% for Ethernet, 70% for 10 Gb/s, and 80% for 20 Gb/s.
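
As a rough guide to what those efficiency figures mean, computing efficiency can be read as sustained performance divided by theoretical peak. The short sketch below applies that reading to the article's numbers; treating the 300 teraflop/s figure as the theoretical peak is an assumption made only for illustration.

    #include <stdio.h>

    int main(void)
    {
        /* Assumption for illustration: the 300 teraflop/s figure is the
           theoretical peak, and "computing efficiency" means sustained
           performance divided by that peak. */
        const double peak_tflops = 300.0;
        const double efficiency  = 0.92;   /* quoted for 40 Gb/s InfiniBand */

        printf("sustained ~ %.0f teraflop/s at %.0f%% efficiency\n",
               peak_tflops * efficiency, efficiency * 100.0);
        return 0;
    }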