Think your data infrastructure is getting out of hand? Try telling that to Michelle Butler, who oversees a storage infrastructure that grows one to two Tbytes per day.
Butler is technical program manager of the Storage Enabling Technologies Group at the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign. NCSA, which is probably best known as the birthplace of Mosaic, the first widely used graphical Web browser, provides high-performance computing systems for a wide variety of science and engineering programs, everything from earthquake modeling to biochemical research.
NCSA's storage group, which comprises seven full-time staffers including Butler, is in charge of managing a 500-Tbyte (and growing) mass storage system; backup and recovery of that data; all of the SAN production and research; and research into parallel and clustered file systems. NCSA's mass storage system has between 2,000 and 3,000 active users at any given time.
Yet even with such huge storage capacity requirements, NCSA had resisted implementing a SAN until March 2002. Butler says that until recently she felt SAN technology wasn't reliable enough. "SAN just wasn't as safe as it needed to be for our production environment," she says. "We really needed high-availability switches to make sure they didn't go down."
The final decision to go to a SAN architecture was sparked by the fact that NCSA's Windows NT, Unix, and mass storage groups were each getting ready to purchase vast amounts of new disk storage at the same time. Click! The light bulb flicked on. "With the SAN, we wanted to bring a large amount of disk in here so that multiple systems could access it," Butler says. After some lab testing, Butler felt assured that Fibre Channel infrastructure was reliable enough to run NCSA's storage on.