Mellanox Scalable HPC Solutions With NVIDIA GPUDirect Technology Enhance GPU-Based HPC Performance And Efficiency

SUNNYVALE, Calif. & YOKNEAM, Israel -- Mellanox Technologies, Ltd., a leading supplier of high-performance, end-to-end connectivity solutions for data center servers and storage systems, today announced the immediate availability of NVIDIA GPUDirect technology with Mellanox ConnectX-2 40Gb/s InfiniBand adapters, which boosts GPU-based cluster efficiency and increases performance by an order of magnitude over today's fastest high-performance computing clusters.

Today's architectures require the CPU to handle the memory copies between the GPU and the InfiniBand network. Mellanox was the lead partner in the development of NVIDIA GPUDirect, a technology that reduces the involvement of the CPU, cutting latency for GPU-to-InfiniBand communication by up to 30 percent. This communication speedup can translate into a gain of over 40 percent in application productivity when a large number of jobs are run on a server cluster. NVIDIA GPUDirect technology with Mellanox scalable HPC solutions is in use today in multiple HPC centers around the world, providing leading engineering and scientific application performance acceleration.
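To make the mechanism concrete, the sketch below contrasts the two communication paths described above: a conventional transfer that the CPU must stage through host memory, and the direct path that a CUDA-aware MPI library can take when GPUDirect support is available. The buffer size, the ranks and the use of MPI are illustrative assumptions for this example, not details taken from the announcement.

    /* gpudirect_sketch.cu -- illustrative only; assumes an MPI library built
     * with CUDA support and exactly two participating ranks. */
    #include <mpi.h>
    #include <cuda_runtime.h>
    #include <cstdlib>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank = 0, size = 0;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        if (size < 2) { MPI_Finalize(); return 0; }   /* sketch needs two ranks */

        const int n = 1 << 20;                        /* 1M floats, arbitrary size */
        float *d_buf = NULL;                          /* data produced on the GPU  */
        cudaMalloc(&d_buf, n * sizeof(float));

        /* Path 1: without GPUDirect, every message is staged through host
         * memory, so the CPU performs an extra copy on each side. */
        float *h_buf = (float *)malloc(n * sizeof(float));
        if (rank == 0) {
            cudaMemcpy(h_buf, d_buf, n * sizeof(float), cudaMemcpyDeviceToHost);
            MPI_Send(h_buf, n, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(h_buf, n, MPI_FLOAT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            cudaMemcpy(d_buf, h_buf, n * sizeof(float), cudaMemcpyHostToDevice);
        }

        /* Path 2: with GPUDirect and a CUDA-aware MPI, the device pointer is
         * handed straight to the communication library; the staging copies
         * above disappear, which is where the latency reduction comes from. */
        if (rank == 0) {
            MPI_Send(d_buf, n, MPI_FLOAT, 1, 1, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(d_buf, n, MPI_FLOAT, 0, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        }

        free(h_buf);
        cudaFree(d_buf);
        MPI_Finalize();
        return 0;
    }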

"As the popularity of GPU-based computing continues to increase, the importance of NVIDIA GPUDirect together with Mellanox's offloading-based InfiniBand technology is critical to our world-leading HPC systems," said Dr. HUO Zhigang, The National Research Center for Intelligent Computing Systems (NCIC). "We have implemented NVIDIA GPUDirect technology with Mellanox ConnectX-2 InfiniBand adapters and Tesla GPUs and have seen the immediate performance advantages that it brings to our high-performance applications. Mellanox offloading technology is an essential component in this overall solution as it brings out the real capability to avoid the CPU for the GPU-to-CPU communications."

"The rapid increase in the performance of GPUs has made them a compelling platform for computationally-demanding tasks in a wide variety of application domains," said Michael Kagan, CTO at Mellanox Technologies. "To ensure high levels of performance, efficiency and scalability, data communication must be performed as fast as possible, and without creating extra load on the CPUs. NVIDIA GPUDirect technology enables NVIDIA GPUs, coupled with Mellanox ConnectX-2 40Gb/s InfiniBand adapters, to communicate faster, increasing overall system performance and efficiency."

GPU-based clusters are being used to perform compute-intensive tasks such as finite element computations, computational fluid dynamics and Monte Carlo simulations. Supercomputing centers are beginning to deploy GPUs in order to achieve new levels of performance. Because GPUs provide high core counts and floating-point capabilities, high-speed InfiniBand networking is required to connect the platforms and deliver high throughput and the lowest latency for GPU-to-GPU communications. Mellanox ConnectX-2 adapters are the world's only InfiniBand solutions that provide full offloading capabilities, which are critical to avoiding CPU interrupts, data copies and system noise while maintaining high efficiency for GPU-based clusters. Combined with the availability of NVIDIA GPUDirect and CORE-Direct technologies, Mellanox InfiniBand solutions are driving HPC to new performance levels.
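As a further hedged sketch of the pattern such clusters rely on, the fragment below posts a non-blocking halo exchange on device buffers and then immediately launches a GPU kernel, so an offloaded transport can move the data while the GPU computes and without interrupting the host CPU. The kernel, buffer names and ring topology are invented for illustration and assume a CUDA-aware MPI.

    /* overlap_sketch.cu -- illustrative only; assumes a CUDA-aware MPI and an
     * offloaded InfiniBand transport so posted transfers progress without
     * host CPU involvement. */
    #include <mpi.h>
    #include <cuda_runtime.h>

    __global__ void relax(float *x, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) x[i] = 0.5f * x[i];   /* stand-in for a CFD / Monte Carlo step */
    }

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank = 0, size = 1;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        const int n = 1 << 18;                      /* halo size, arbitrary */
        float *d_send, *d_recv, *d_interior;
        cudaMalloc(&d_send, n * sizeof(float));     /* boundary values to ship out    */
        cudaMalloc(&d_recv, n * sizeof(float));     /* halo values arriving from peer */
        cudaMalloc(&d_interior, n * sizeof(float)); /* interior points to compute on  */

        int right = (rank + 1) % size;
        int left  = (rank - 1 + size) % size;

        /* Post the exchange with device pointers, then immediately launch the
         * interior update; the adapter moves the halo while the GPU computes. */
        MPI_Request reqs[2];
        MPI_Irecv(d_recv, n, MPI_FLOAT, left,  0, MPI_COMM_WORLD, &reqs[0]);
        MPI_Isend(d_send, n, MPI_FLOAT, right, 0, MPI_COMM_WORLD, &reqs[1]);

        relax<<<(n + 255) / 256, 256>>>(d_interior, n);

        MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);  /* CPU only waits for completion */
        cudaDeviceSynchronize();

        cudaFree(d_send); cudaFree(d_recv); cudaFree(d_interior);
        MPI_Finalize();
        return 0;
    }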
