
Clusters vs Supercomputers

Resch believes that one of the best things about clustering is cost. “The good thing about the cluster is that with a small amount of money, small research groups can get a reasonable amount of performance.”

Sharan Kalwani, a high-performance computing specialist at General Motors Corp., agrees that clusters are not ideal for every type of application. “Clusters work only for a certain class of problem,” he says. “The I/O bandwidth is not there.”

Kalwani, who has used both clusters and supercomputers at GM, tells NDCF that clusters are more appropriate for highly compute-intensive applications that need little I/O. “Always use the right tool for the right job,” he notes.

For its part, GM has taken the supercomputer route for its crash testing and design, unveiling a new IBM Corp. (NYSE: IBM) system last year. This helped push the firm’s supercomputer capacity from 4,982 gigaflops, or billions of floating-point operations per second, to over 11,000 gigaflops, according to Kalwani (see GM Buys Major IBM Supercomputer and IBM Speeds GM Crash Tests).

Clearly, time is money in the automobile industry. With the new supercomputer, GM can get its cars to market within 18 months, Kalwani told attendees at Oak Ridge. This is a stark contrast to nine years ago, when it took a full 48 months to design and launch a car, and Kalwani says GM is looking to push this envelope even further. “I have just been handed my next assignment,” he says. “It’s a year!”
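For a rough sense of scale, the figures Kalwani cites can be put side by side. This is just illustrative arithmetic on the numbers quoted above, not data from GM:

```python
# Sanity check on the figures cited in the article.
# All numbers come from the text; the ratios are our own arithmetic.

old_capacity_gflops = 4_982   # GM's supercomputer capacity before the new IBM system
new_capacity_gflops = 11_000  # capacity after the upgrade ("over 11,000 gigaflops")

old_cycle_months = 48  # design-to-launch time nine years earlier
new_cycle_months = 18  # current time-to-market

capacity_gain = new_capacity_gflops / old_capacity_gflops
cycle_reduction = old_cycle_months / new_cycle_months

print(f"Compute capacity grew by roughly {capacity_gain:.1f}x")   # ~2.2x
print(f"Design cycle shortened by roughly {cycle_reduction:.1f}x")  # ~2.7x
```

In other words, a bit more than a doubling of raw compute coincided with the design cycle shrinking to well under half its former length.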
