Amazon Launches Supercomputing In The Cloud

The Amazon Elastic Compute Cloud's GPU Cluster Instance, based on NVIDIA Tesla M2050 processors, provides parallel processing for graphics rendering, simulations and other high performance computing tasks for $2.10 an hour.

Amazon Web Services on Monday launched a new unit of computing that brings a form of supercomputing to the masses on its EC2 cloud: the GPU (graphics processing unit) Cluster Instance, which delivers parallel processing powered by clusters of graphics processors.
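For developers, provisioning the new instance type works like any other EC2 launch request. The sketch below is a minimal, illustrative example using the boto3 Python SDK; the cg1.4xlarge instance type, the placement group name, and the AMI ID are assumptions for illustration, not details taken from Amazon's announcement.

```python
# Minimal sketch: launch two GPU Cluster Instances in a cluster placement
# group so they share the low-latency 10 Gbps network. Assumes boto3 is
# installed and configured with AWS credentials; the AMI ID and instance
# type below are illustrative placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Cluster placement groups co-locate instances for high-bandwidth networking.
ec2.create_placement_group(GroupName="hpc-demo", Strategy="cluster")

response = ec2.run_instances(
    ImageId="ami-xxxxxxxx",       # placeholder: an AMI with GPU drivers installed
    InstanceType="cg1.4xlarge",   # assumed name of the GPU Cluster Instance type
    MinCount=2,
    MaxCount=2,
    Placement={"GroupName": "hpc-demo"},
)

for instance in response["Instances"]:
    print(instance["InstanceId"], instance["State"]["Name"])
```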

In the past, supercomputing tended to be limited to academic and corporate researchers who were given access to purpose-built high performance computing clusters. One of the requirements of such a cluster is a high speed interconnect linking the cluster nodes. The possibility of building these clusters and making them generally available came a step closer to reality with the emergence of 10 Gbps Ethernet switching equipment at affordable prices.

GPU Cluster Instances are designed to run on a cluster of EC2 servers, making use of EC2's 10 Gbps cluster network and achieving a level of high performance computing far above the capacity of EC2's simple Standard Instance. A "small" Standard Instance contains 1 Elastic Compute Unit (ECU) of CPU power; an ECU is the equivalent of a 2007 Intel Xeon or AMD Opteron chip running at 1 to 1.2 GHz. The new GPU Cluster Instance contains 33.5 ECUs and 22 GB of memory, versus 1 ECU and 1.8 GB of memory for a small Standard Instance.
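Using only the figures above plus the $2.10 hourly rate from the announcement, a quick back-of-the-envelope comparison, sketched below in Python, shows how the new instance stacks up against a small Standard Instance and what the rate works out to per ECU.

```python
# Back-of-the-envelope comparison using only the figures quoted in the article.
gpu_cluster = {"ecus": 33.5, "memory_gb": 22.0, "price_per_hour": 2.10}
small_standard = {"ecus": 1.0, "memory_gb": 1.8}

compute_ratio = gpu_cluster["ecus"] / small_standard["ecus"]
memory_ratio = gpu_cluster["memory_gb"] / small_standard["memory_gb"]
cost_per_ecu_hour = gpu_cluster["price_per_hour"] / gpu_cluster["ecus"]

print(f"CPU capacity: {compute_ratio:.1f}x a small Standard Instance")  # 33.5x
print(f"Memory:       {memory_ratio:.1f}x a small Standard Instance")   # ~12.2x
print(f"Cost:         ${cost_per_ecu_hour:.3f} per ECU-hour")           # ~$0.063
```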

While the GPU Cluster Instance can be used for graphics rendering, it can also handle other high performance computing tasks. It follows the simpler Cluster Compute Instance that AWS introduced earlier this year. GPU Cluster Instances are designed for customers "who need additional network and CPU performance for their large and complex HPC workloads," said Peter De Santis, general manager of EC2, in announcing the new instance. The GPU Cluster Instance is intended to run jobs that can be subdivided into many smaller parallel processing jobs.
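That "subdivide into many smaller jobs" model is the pattern behind any embarrassingly parallel workload. The sketch below is a generic Python illustration of the idea, splitting one large task into independent chunks that run concurrently; it uses ordinary CPU worker processes and is not AWS- or GPU-specific code.

```python
# Illustrative only: the parallel-decomposition pattern the GPU Cluster
# Instance targets, shown here with CPU worker processes rather than GPUs.
from multiprocessing import Pool

def render_tile(tile_id: int) -> str:
    # Stand-in for one independent slice of a larger job,
    # e.g. one tile of a frame or one cell of a simulation grid.
    return f"tile {tile_id} done"

if __name__ == "__main__":
    tiles = range(64)                  # one big job split into 64 pieces
    with Pool(processes=8) as pool:    # workers run the pieces in parallel
        results = pool.map(render_tile, tiles)
    print(f"{len(results)} tiles rendered")
```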
