The World's Biggest SANs

Who has the world's biggest storage network? We've unearthed a few candidates

September 20, 2007


We at Byte and Switch are on the trail of the world's biggest SAN, and this article reveals our initial findings.

There are several reasons we've embarked on the search for gargantuan storage networks. For one, large SANs push the envelope. If I'm looking to expand a network of 5 Tbytes, what better way than to study the fate of those who've gone above the petabyte level? Big SAN stories furnish a glimpse into the outer limits of scalability.

Really big storage networks also are highly visible and get lots of feedback and testing. Often, this yields information that is useful in similar situations -- albeit on a smaller scale.

Those are just a few of the reasons we're seeking big SANs. The most compelling driver is that, like everyone, we enjoy a great story. And what better tale to tell than one of new frontiers in our chosen field of coverage?

Again, this list is meant to be a starting place for further endeavor. If you've got a big SAN story to tell, we'd love to hear it. Hit that message board, call us, or send us a message. So, without further ado, we present five of the world's biggest SANs:

Next Page: JPMorganChase

Storage snapshot: The financial firm has more than 14 Pbytes of active storage and plans to add "several more Pbytes" within the next 12 months.

Key suppliers: IBM, Sun

Background: As one of the world's largest financial organizations, JPMorganChase is currently building out a SAN capable of supporting its 170,000 employees and a range of services from investment banking to asset management and commercial banking.

JPMorgan sent shockwaves through the banking industry three years ago when it ended a massive, $5 billion, seven-year outsourcing deal with IBM, encompassing storage, servers, help desks, and networking. Citing the ability to make its systems more streamlined and efficient, JPMorgan swallowed up around 4,000 IBM employees and contractors as part of the "insourcing" project, although IBM gear remained a key part of the firm's data centers.

Since then, JPMorganChase has kept precise details of its storage infrastructure under wraps, although there have been some hints about the firm's long-term strategy. In early 2005, for example, JPMorgan entered into an alliance with Sun to use that vendor's Solaris 10 operating system as the basis for a number of projects, including data archiving, virtual data centers, and grid computing.

The New York-based firm had already been using Sun kit as part of a pilot project to archive trading data and, two years ago, deployed InfiniBand as its technology backbone, linking 10 data centers on three continents.

Next Page: U.S. Department of Defense

Storage snapshot: 17,000 to 20,000 Fibre Channel switch ports serving 3 million employees in 163 countries; 700 Fibre Channel switches and just under 60 directors

Key suppliers: Brocade, others

Background: Last year the Department of Defense (DoD) teamed up with Brocade and Denver-based consultancy reVision to build the "Meta SAN," which is already being touted as one of the world's largest storage area networks.

With more than 3 million employees spread across 163 countries, and a budget of $419.3 billion, the sheer scale of the DoD dwarfs virtually all of America's largest firms. The agency is now building a SAN to match.

Described by Brocade as "one of the largest consolidated enterprise storage networks in the world," the DoD deal calls for that vendor to supply a total of 17,000 Fibre Channel switch ports, although reVision's general manager Shawn Landry tells Byte and Switch that this figure is now closer to 20,000. "We have been scaling pretty well," he says. "It's probably over 700 switches in total and just under 60 directors."

Specifically, the DoD is using SilkWorm 48000 Directors to eventually link storage at more than 100 different "areas of interest," such as different service agencies, according to Landry. Fibre Channel over IP (FCIP) routing is provided by SilkWorm FR4-18i Director Blades.

A key element of the Meta SAN is its ability to tie together disparate islands of storage right across the DoD. "As for the storage target, we have got many, many different storage vendors," says Landry, noting that building a SAN on this scale is largely unprecedented.

Next Page: NASA

Storage snapshot: SAN infrastructure includes 1.1 Pbytes of disk and 10 Pbytes of tape storage.

Key suppliers: SGI and Sun/StorageTek

Background: NASA is another U.S. government organization with booming storage needs, and the space agency has deployed over 1 Pbyte of disk and 10 Pbytes of tape storage as part of its SAN.

"[The SAN] provides support to NASA as a whole, supporting four mission directorates, as well as engineering and safety," explains Bill Thigpen, engineering branch chief of NASA's Advanced Supercomputing Division. "It's everything from support for next-generation staff that will go to the Moon and Mars to design for engines and the current space shuttle."

On the disk side, the organization started building its SAN back in 2004, deploying a 440-Tbyte InfiniteStorage solution from SGI to support its "Columbia" supercomputer. But Thigpen and his team soon realized that this would not cope with NASA's data explosion. "We were filling up everything, and we couldn't get it to tape fast enough," he says, explaining that this prompted NASA to buy the additional 660 Tbytes of storage from SGI.

The exec is already planning another InfiniteStorage upgrade. "There's about another quarter of a petabyte" on the way.

For the most part, NASA is an SGI storage shop, and the vendor provides Fibre Channel connections for the SAN as well as its CXFS file system. "We use disk management and tape management products from SGI," says Thigpen, although NASA's tape infrastructure consists of Sun/StorageTek tape libraries.

"We have the [tape] capability for about ten petabytes," says Thigpen, explaining that NASA currently uses only about a quarter of this.Next Page: San Siego Supercomputing Center

Storage snapshot: Central 540-Tbyte SAN augmented with 2 Pbytes of disk and 25 Pbytes of tape storage for academia's largest storage system.

Key suppliers: Sun

Background: The San Diego Supercomputer Center (SDSC) is yet another site relying heavily on SAN technology to support both its own high-performance computing (HPC) work and storage for other research bodies.

Last year the site deployed Sun's Sun Fire 15K server as its main data management server within a 250-Tbyte SAN, which has since been expanded to 540 Tbytes. This is supported by an additional 2 Pbytes of disk and a whopping 25 Pbytes of tape in Sun/StorageTek libraries.

The SAN is at the heart of a project called Data Central, which aims to store, manage, analyze, and share data on behalf of the academic community. This includes database hosting, long-term storage of scientific data, and data mining, to name just a few examples.

As part of the project, researchers from other organizations can request a "data allocation" from SDSC, which enables access to the Data Central services. More than 400 Tbytes of disk space is reserved for this service, according to the center.

Next Page: Lawrence Livermore National Laboratory

Storage snapshot: SGI SAN holds more than 2 Pbytes on more than 11,000 SATA and Fibre Channel drives.

Key suppliers: SGI

Background: The Lawrence Livermore National Laboratory in Livermore, Calif., is home to a hulking SAN, which supports the "ASC Purple" supercomputer.

Installed in 2005 to support nuclear weapons research, ASC Purple was born out of a partnership between the Department of Energy (DoE) and IBM. The system was ranked sixth on the most recent list of the world's Top 500 supercomputers.

"We run simulations for how the physics of nuclear weapons work. Basically, they are large, large, data files," says Bob Meisner, deputy director of advanced simulation and computing at the DoE's National Nuclear Security Administration (NNSA), explaining that this needs a big storage system. "We have to keep these files so that we can compare one simulation to another."

Housed in Lawrence Livermore's 253,000-square-foot Terascale Simulation Center, ASC Purple is one of the fastest supercomputers in the world, capable of 96 Tflops (trillions of calculations per second), and is the sister system of Blue Gene/L, which topped the most recent Top 500 list.

In October 2004, the lab deployed 1.1 Pbytes worth of hardware from SGI for its SAN, although the storage system has since grown to around 2 Pbytes. Around that time, ASC Purple used around 11,000 SATA and Fibre Channel disks, so these figures are likely to have doubled, dwarfing the 806 Tbytes of DataDirect storage used on Blue Gene/L.

Although specific storage details of the ASC Purple SAN are hard to come by, the storage system is said to offer 106 Gbytes per second of I/O bandwidth.

If you'd like to contact Byte and Switch's editors directly, send us a message.

  • Brocade Communications Systems Inc. (Nasdaq: BRCD)

  • DataDirect Networks Inc.

  • IBM Corp. (NYSE: IBM)

  • Lawrence Livermore National Laboratory (LLNL)

  • reVision Inc.

  • SGI

  • Sun Microsystems Inc.
