Mellanox Scalable HPC Solutions With NVIDIA GPUDirect Technology Enhance GPU-Based HPC Performance And Efficiency

SUNNYVALE, Calif. & YOKNEAM, Israel — Mellanox Technologies, Ltd., a leading supplier of high-performance, end-to-end connectivity solutions for data center servers and storage systems, today announced the immediate availability of NVIDIA GPUDirect technology with Mellanox ConnectX-2 40Gb/s InfiniBand adapters, which boosts GPU-based cluster efficiency and increases performance by an order of magnitude over today's fastest high-performance computing clusters.

In today's architecture, the CPU must handle memory copies between the GPU and the InfiniBand network. Mellanox was the lead partner in the development of NVIDIA GPUDirect, a technology that reduces CPU involvement and cuts latency for GPU-to-InfiniBand communication by up to 30 percent. This communication speedup can add up to a gain of over 40 percent in application productivity when a large number of jobs run on a server cluster. NVIDIA GPUDirect technology with Mellanox scalable HPC solutions is in use today in multiple HPC centers around the world, accelerating leading engineering and scientific applications.
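As a back-of-the-envelope check on how a communication-latency cut feeds through to application-level performance, Amdahl's law can be applied: only the share of runtime spent communicating benefits from the latency reduction. The communication fraction below is an illustrative assumption, not a figure from the announcement:

```python
def overall_speedup(comm_fraction, latency_reduction):
    """Amdahl's-law estimate of whole-application speedup when only the
    communication portion of the runtime gets faster."""
    # A 30% latency cut means communication runs in 70% of its old time,
    # i.e. roughly a 1.43x speedup on that portion alone.
    comm_speedup = 1.0 / (1.0 - latency_reduction)
    return 1.0 / ((1.0 - comm_fraction) + comm_fraction / comm_speedup)

# Hypothetical workload spending half its time in GPU-to-InfiniBand traffic:
gain = overall_speedup(comm_fraction=0.5, latency_reduction=0.3)
print(f"{(gain - 1.0) * 100:.0f}% overall gain")  # ~18% for these inputs
```

Larger aggregate gains, such as the 40 percent productivity figure cited above, would correspond to workloads (or job mixes) that are more heavily communication-bound.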

"As the popularity of GPU-based computing continues to increase, the importance of NVIDIA GPUDirect together with Mellanox's offloading-based InfiniBand technology is critical to our world-leading HPC systems," said Dr. HUO Zhigang of the National Research Center for Intelligent Computing Systems (NCIC). "We have implemented NVIDIA GPUDirect technology with Mellanox ConnectX-2 InfiniBand adapters and Tesla GPUs and have seen the immediate performance advantages that it brings to our high-performance applications. Mellanox offloading technology is an essential component of the overall solution, as it provides the capability to bypass the CPU for GPU-to-GPU communications."

"The rapid increase in the performance of GPUs has made them a compelling platform for computationally demanding tasks in a wide variety of application domains," said Michael Kagan, CTO at Mellanox Technologies. "To ensure high levels of performance, efficiency and scalability, data communication must be performed as fast as possible, and without creating extra load on the CPUs. NVIDIA GPUDirect technology enables NVIDIA GPUs, coupled with Mellanox ConnectX-2 40Gb/s InfiniBand adapters, to communicate faster, increasing overall system performance and efficiency."

GPU-based clusters are being used to perform compute-intensive tasks such as finite element computations, computational fluid dynamics, and Monte Carlo simulations. Supercomputing centers are beginning to deploy GPUs in order to achieve new levels of performance. Because GPUs provide high core counts and floating-point operation capabilities, high-speed InfiniBand networking is required to connect the platforms in order to provide high throughput and the lowest latency for GPU-to-GPU communications. Mellanox ConnectX-2 adapters are the world's only InfiniBand solutions that provide full offloading capabilities, which are critical to avoiding CPU interrupts, data copies and system noise while maintaining high efficiency for GPU-based clusters. Combined with the availability of NVIDIA GPUDirect and CORE-Direct technologies, Mellanox InfiniBand solutions are driving HPC to new performance levels.
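The data-path difference behind the announcement can be sketched as follows. This is a minimal Python model that only counts host-side staging steps; it is an illustration of the idea, not real CUDA or InfiniBand code, and the function and buffer names are invented for clarity:

```python
def send_without_gpudirect(gpu_buf):
    """Traditional path: data is staged through two system-memory buffers,
    one pinned for the GPU driver and one pinned for the InfiniBand adapter,
    with the CPU performing the copy between them."""
    gpu_pinned = bytes(gpu_buf)     # step 1: GPU -> GPU driver's pinned buffer
    hca_pinned = bytes(gpu_pinned)  # step 2: CPU copies into the adapter's buffer
    return hca_pinned, 2            # payload plus number of host-side steps

def send_with_gpudirect(gpu_buf):
    """GPUDirect path: the GPU driver and the InfiniBand adapter share a
    single pinned buffer, so the extra CPU-driven copy disappears."""
    shared_pinned = bytes(gpu_buf)  # step 1: GPU -> shared pinned buffer
    return shared_pinned, 1         # the adapter reads the same buffer directly

payload = b"simulation results"
assert send_without_gpudirect(payload)[1] == 2
assert send_with_gpudirect(payload)[1] == 1
```

Eliminating that CPU-mediated copy is what drives the latency reduction described earlier: the CPU no longer sits on the critical path of every GPU-to-InfiniBand transfer.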
