LSU Raises Storage Bar

High-performance computing initiatives bring rapid storage growth to the university

December 17, 2005


While people were evacuating Louisiana after Hurricane Katrina hit in August, Louisiana State University's high-performance computing (HPC) staff reported for duty to support emergency services.

The Baton Rouge campus became the center of emergency operations for the Department of Homeland Security's Federal Emergency Management Agency (FEMA), the Red Cross, and other relief groups in the immediate aftermath of the storm.

“LSU was officially shut down, but that didn’t pertain to my staff,” says Brian Ropers-Huilman, LSU’s director of HPC. “We had to keep our machines running. Our facilities have our own generator and backup chiller, so as long as we could keep diesel coming into the campus, we were up and running. We were the last person standing.”

But power wasn’t the only thing FEMA required. It needed networked storage, too.

“We had to provide them networking services, storage services, and computational services,” Ropers-Huilman says. “One of the first things FEMA wanted was storage. They wanted to archive locally all aerial photography taken of the affected coast. They were receiving mapping data and needed to marry it up with various other data. I did not have the capacity that FEMA wanted, so I turned to some of the storage providers that I had already been using for a quick quote.”

LSU had bought 32 Tbytes of Panasas ActiveScale Storage Cluster blades a few months before to handle hurricane modeling and other research, and Ropers-Huilman quickly bought 20 Tbytes more after the hurricane hit. (See Panasas Clusters at LSU.)

“We wanted something we could immediately connect,” he says. “We were using Panasas with Linux clusters, but the FEMA people were sitting at Windows machines and wanted to map a drive, and Panasas could do that on CIFS.”
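
In practice, mapping a Windows workstation to a CIFS export like that comes down to a single net use command. The sketch below shows the idea in Python; the server name, share name, and drive letter are invented for illustration, since the article does not give the actual export names.

```python
# Minimal sketch of the kind of drive mapping FEMA staff would have done on their
# Windows machines. Server, share, and drive letter are hypothetical examples.
import subprocess

CIFS_SHARE = r"\\panasas-fs\fema-imagery"  # hypothetical CIFS export on the Panasas system
DRIVE_LETTER = "Z:"

# "net use" is the standard Windows command for mapping a network (CIFS/SMB) drive.
result = subprocess.run(
    ["net", "use", DRIVE_LETTER, CIFS_SHARE, "/persistent:yes"],
    capture_output=True,
    text=True,
)
if result.returncode != 0:
    raise RuntimeError(f"Mapping failed: {result.stderr.strip()}")
print(f"{CIFS_SHARE} mapped to {DRIVE_LETTER}")
```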

The successful storage upgrade followed a series of positive moves by LSU. The university had been beefing up its computing center, beginning with the 2002 purchase of a large-scale Linux cluster supercomputer known as Super Mike. Super Mike was a 2.2-teraflop (trillions of calculations per second) machine when installed and was upgraded to 4.1 teraflops in 2004.

Also in 2004, the state of Louisiana committed $40 million to LSU for an initiative known as the Louisiana Optical Network Initiative (LONI). LSU is also linked to the National LambdaRail, a high-speed network set up by research universities and technology companies and sometimes called Internet3.

Ropers-Huilman says he made storage a priority to keep up with the growth. “We view ourselves as an up-and-coming national high-performance computing center,” Ropers-Huilman says. “And on the storage side, we were growing quickly.”

Ropers-Huilman’s group was running an SGI Prism Extreme with 32 processors and six graphics pipes, two IBM Enterprise Storage Servers, and several Apple Xserve RAID SANs. (See SGI Touts Customer Wins.) But it lacked a system optimized for clusters to run an internally developed application framework called Cactus. Cactus, used for storm modeling, was built specifically for parallel HPC, and Ropers-Huilman wanted storage designed for high-performance Linux clusters.
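
The attraction of a parallel file system in this setting is that every cluster node can write its piece of a run to shared storage at the same time, rather than funneling I/O through a single server. The sketch below is only an illustration of that access pattern, not LSU's Cactus code; it assumes mpi4py and an MPI-IO-capable file system mounted at a hypothetical path.

```python
# Illustrative sketch of the I/O pattern a parallel file system is built for:
# each node in a Linux cluster writes its slice of a result into one shared file
# concurrently. Not LSU's actual workload; the mount path is hypothetical.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Each rank produces a chunk of model output (stand-in for a storm-model field).
chunk = np.full(1_000_000, rank, dtype=np.float64)

fh = MPI.File.Open(comm, "/panfs/scratch/model_output.bin",  # hypothetical mount point
                   MPI.MODE_CREATE | MPI.MODE_WRONLY)
fh.Write_at(rank * chunk.nbytes, chunk)  # each rank writes at its own offset, in parallel
fh.Close()
```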

Last spring, he evaluated products from Panasas, BlueArc, and Terrascale Technologies. (See Panasas Intros Cluster, BlueArc Titan to Battle Giants, and Terrascale Plans I/O Onslaught.) All three brought products on site for tests running Cactus as well as generic benchmarking.

Ropers-Huilman says Panasas was the top performer, easier to manage than Terrascale, and easier and cheaper to upgrade than BlueArc. He also liked that Panasas delivers both hardware and software, while BlueArc sells a hardware system and Terrascale sells software only. He won’t disclose the price, but says Panasas was 20 percent cheaper than runner-up BlueArc.

Why not evaluate enterprise NAS market leaders EMC and NetApp? “I wanted a parallel file system,” Ropers-Huilman says. “I wasn’t going to give BlueArc a glance either, but BlueArc’s performance claims surprised me. And they fulfilled them, but not as well as Panasas.”

The parallel file-system technology was important even before FEMA set up shop after Katrina. LSU was already using Panasas to process information for models released by the National Hurricane Center to project a hurricane’s track. Ropers-Huilman says he was able to cut the time to process newly released National Hurricane Center data from eight to 10 hours down to four to six hours.

“That’s a magical number,” he says, “because the National Hurricane Center releases new tracks to the hurricane center on campus every six hours when a hurricane is bearing down. We can do a model of what the hurricane is going to look like before the National Hurricane Center releases its next update.”

It doesn't seem like much went right for FEMA in the Katrina aftermath, but it did catch a break with LSU’s technology.

— Dave Raffo, Senior Editor, Byte and Switch
