• 07/03/2008
    9:00 PM
  • Network Computing
  • Commentary

Putting the SAN in San Diego

We take a peek at SoCal's storage testing ground
If you think your firm's data growth is overwhelming, then spare a thought for the San Diego Supercomputer Center (SDSC), which uses one of the largest academic SANs to support the Teragrid research network, comprising organizations such as the Oak Ridge National Lab and the Pittsburgh Supercomputing Center.

“You may have the network go offline and that’s annoying, but if we lose someone’s data, they come to your door with torches and a rope,” says Bryan Banister, SDSC storage manager, explaining that he has to store data for around 100 active Teragrid projects.

The exec recently spoke to Byte and Switch about the challenges involved in supporting a 28-Pbyte SAN, describing his plans for 8-Gbit/s Fibre Channel, SSDs, storage virtualization, and state-of-the-art file systems.

“One of the major projects that we have been working on over the last couple of years has been deploying a really large Wide Area Network (WAN) file system for the Teragrid project,” he says. “Using the IBM GPFS file system technology we’re able to provide over 600 Tbytes worth of file system space for scientific projects to store their data results and other important scientific data collections.”

The exec explained that SDSC is looking to expand the WAN this year with Sun’s Lustre parallel file system, highlighting just some of the cutting-edge work currently underway in Southern California.
