Building a Fibre Channel SAN

If your small organization needs a storage area network, consider Fibre Channel.

September 24, 2004


Our school has roughly 800 nodes and a data infrastructure that evolved through years of organic growth: a mix of 26 servers, all using DAS (direct-attached storage) and running six operating systems.

To simplify day-to-day administration and improve the reliability of these systems, we decided to move our critical systems off DAS and onto a SAN. We sent out an RFP that called for putting five existing servers running user storage, e-mail, database and intranet applications onto a SAN based on a 2-Gbps Fibre Channel switch. We specified a Fibre Channel storage device with more than 3 TB of capacity, basic management software, HBAs (host bus adapters) and cables. The SAN had to be expandable and support Windows, Linux and Mac servers. To save some money in the short term, we recycled the school's existing Dantz Development Retrospect backup scheme with AIT changers for the new SAN.

We weren't surprised, though, when the first round of quotes came back in the $90,000 range. Even an EMC solution bid of $69,000 was too high for our school budget.

Because maximum performance wasn't the top priority, we looked at various flavors of ATA and SCSI drives riding a Fibre Channel backplane. After studying solutions from Adaptec, nStor Corp. and others, we settled on Apple's new Xserve RAID. It doesn't quite offer enterprise-level Fibre Channel performance, but the form factor of 14 hot-swap ATA bays, 3.5 TB with existing drive technology, redundant components and simple management tools--as well as the price--couldn't be beat. Xserve also guaranteed compatibility with our Mac servers, which wasn't easy to find in the SAN world.

Bargain-Basement SAN

To keep things simple and minimize compatibility problems, we bought the entire solution from Apple: Xserve RAID with four 250-GB drives, a 20-port Vixel Fibre Channel switch and five Apple HBAs, which are rebranded LSI Logic Fibre Channel cards, though Apple doesn't advertise that fact. We should mention, too, that Apple didn't provide good presales support, so we had to design the end solution ourselves with help from peers and message boards.

Because all the components reside in one rack, the 6-foot copper Fibre Channel cables that came with the HBAs met our humble cabling requirements. If we had needed more distance between boxes or had a long run to a backup server, we would have had to use glass. That would have increased our costs: A 25-meter fiber cable can set you back $300.

The final tally was less than $14,000 with educational discounts (see "Bargain-Basement SAN," left). When we eventually populate the full 14-drive array at $500 per 250-GB drive over the next two years, we'll have our 3.5-TB SAN for less than $20,000. Bringing more servers onto the SAN is an almost trivial expense; all it takes is additional HBAs. And if we ever need more than 20 ports, the Vixel can uplink to a second or third switch.
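The expansion math is easy to sanity-check. The short Python sketch below simply re-runs the arithmetic; the $13,900 starting figure is an assumption standing in for "less than $14,000," while the drive price, drive size and bay count are the figures from our build.

    # Back-of-the-envelope check of the expansion plan. The initial cost is
    # an assumed stand-in for "less than $14,000"; the other figures come
    # from the build described above.
    initial_cost = 13_900          # assumed
    drive_price = 500              # per 250-GB drive
    drive_size_gb = 250
    drives_installed = 4
    bays_total = 14

    drives_to_add = bays_total - drives_installed
    full_build_cost = initial_cost + drives_to_add * drive_price
    raw_capacity_tb = bays_total * drive_size_gb / 1000

    print(f"Drives still to buy: {drives_to_add}")
    print(f"Full build: ${full_build_cost:,} for roughly {raw_capacity_tb} TB raw")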

We ordered our Xserve with four 250-GB drives. Mounted as a single RAID 5 array, the drives yield roughly 700 GB of usable space. We decided to slice the 700 GB into five logical partitions, each mapped to a specific server. The Xserve has two controllers, each chaperoning seven drives, so we elected to run our four initial drives on one side of the box, keeping the second controller unoccupied as a backup.
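To make the plan concrete, here is a minimal sketch of that layout in Python. The server names are placeholders (we've described the roles, not the hostnames), and the 120-GB slice size anticipates the LUN carving described later.

    # A sketch of the storage plan: one RAID 5 set on the first controller,
    # sliced five ways, one slice per server. Hostnames are hypothetical.
    SLICE_GB = 120
    servers = ["user-storage", "mail", "database", "intranet", "server-5"]  # placeholders

    plan = {
        "controller-1": {
            "raid_level": 5,
            "drives": 4,                       # 4 x 250 GB, ~700 GB usable
            "slices": {
                f"lun{i}": {"size_gb": SLICE_GB, "server": server}
                for i, server in enumerate(servers, 1)
            },
        },
        "controller-2": {"raid_level": None, "drives": 0, "slices": {}},  # held in reserve
    }

    for side, cfg in plan.items():
        print(side, "->", cfg["drives"], "drives,", len(cfg["slices"]), "slices")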

All devices on the SAN were to be interconnected via the Fibre Channel switch, with everything running at 2 Gbps. Our switch would be "wide open," without any policies or restrictions controlling flow. Shops with more complex or constantly changing environments, however, should use policy-based management, which can restrict port-to-port communications and limit access to sensitive data. We preassigned IP addresses for the new equipment (IP is used for Fibre Channel device configuration and management, not for data transport) and configured a VLAN to restrict access to SAN-control functions.
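Purely for illustration, an addressing plan of that sort might look like the short Python sketch below. The VLAN ID and IP addresses are invented; the only details carried over from our setup are that each piece of SAN gear gets a static management address and that those addresses sit on their own VLAN.

    # Hypothetical SAN-management addressing plan. Nothing here moves data;
    # IP on this VLAN is only for configuring and monitoring the FC gear.
    SAN_MGMT_VLAN = 40                      # invented VLAN ID

    mgmt_addresses = {                      # invented addresses
        "vixel-fc-switch":   "10.40.0.2",
        "xserve-raid-upper": "10.40.0.3",   # assuming each array controller
        "xserve-raid-lower": "10.40.0.4",   # has its own management port
        "osx-mgmt-server":   "10.40.0.10",
    }

    for device, ip in mgmt_addresses.items():
        print(f"{device:18} VLAN {SAN_MGMT_VLAN}  {ip}")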

Then it was time to roll up our sleeves and implement the SAN. We budgeted two days over a weekend this summer to get everything up and running. Less than an hour out of the box, the Fibre Channel switch and Xserve were racked and humming. We powered down and installed the HBAs into our five servers, taking care to identify the WWID (World Wide ID number) associated with each HBA. A WWID is Fibre Channel's "unique identifier" and analogous to an Ethernet NIC's MAC address. Windows 2003 and OS X recognized the Apple HBAs, while Linux required a driver install (pulled from LSI's site) to use the card.
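For the record-keeping step, a small script can pull those identifiers on Linux. The sketch below assumes a kernel that exposes the Fibre Channel transport class under /sys/class/fc_host, which may not match a 2004-era distribution; it simply reads each HBA's node and port names.

    # Read the World Wide Names of every Fibre Channel HBA port via sysfs.
    # Assumes the FC transport class is available at /sys/class/fc_host.
    import glob
    import os

    def fc_host_ids():
        """Yield (host, WWNN, WWPN) for each Fibre Channel host adapter."""
        for host_dir in sorted(glob.glob("/sys/class/fc_host/host*")):
            host = os.path.basename(host_dir)
            with open(os.path.join(host_dir, "node_name")) as f:
                wwnn = f.read().strip()
            with open(os.path.join(host_dir, "port_name")) as f:
                wwpn = f.read().strip()
            yield host, wwnn, wwpn

    if __name__ == "__main__":
        for host, wwnn, wwpn in fc_host_ids():
            print(f"{host}: node name {wwnn}  port name {wwpn}")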

Then we connected the Fibre Channel SAN switch, Xserve and OS X management server to an Ethernet switch using the SAN VLAN. All our 6-foot Fibre Channel cables were still in plastic bags, and much to our relief, we saw many happy status lights.

Loading Apple's Java-based RAID-management utility onto our OS X Xserve was a breeze. The app autodiscovered the Xserve, which had booted up and received an initial IP address via DHCP. We assigned the Xserve's static IP and updated admin passwords. We also found that the management app runs just fine on Windows 2003 and Linux. The cross-platform practicality was impressive, though my Mac sys admin got the heebie-jeebies seeing Apple software running on Wintel.

We got into the FC switch via a serial port. With IP address and admin passwords set, we verified the default-configuration settings for autodetecting port speeds. Depending on your level of institutional paranoia, switch settings can be used to limit which ports can talk to each other. We left all ports accessible and didn't put any restrictions in place.

After configuring the FC switch, we returned to the Xserve admin utility, pulled out our logical storage plan and fired up the array-management tool.

The Xserve ships with all drives in a RAID 1, but our design called for RAID 5. We used Apple's graphical utility to quickly break up the existing array and implement our setup. Creating the array was straightforward, but it went very slowly. Apple says it can take several hours, which ended up being a gross understatement.

Twenty-three hours after initializing, our shiny new RAID 5 array was finally ready. Using the admin tool, we sliced the single array into five 120-GB LUNs (logical unit numbers). We then used the LUN-masking tool to map each slice to a specific server, which required the WWID of each server's HBA. With all systems go, we broke out the Fibre Channel cables and rebooted our servers one last time.

Thanks to LUN masking, each server came up and found its respective slice of the array as if it were a local 120-GB drive. Without masking, HBAs can discover all available SAN LUNs, and two or more servers could write to the same data--not a good thing for our setup. LUN masking lets us run our Fibre Channel switch with no restrictions.
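To illustrate what masking buys us, here is a toy Python model of the idea (not Apple's tool or its API): the array presents a LUN only to HBAs whose WWIDs appear in that LUN's mask, so each server discovers exactly one "drive." The WWIDs below are made up.

    # Toy model of LUN masking. A real array enforces this in firmware;
    # this only shows the visibility rule. WWIDs and LUN names are invented.
    lun_masks = {
        "lun1": {"50:06:0b:00:00:c2:62:00"},   # user-storage server's HBA
        "lun2": {"50:06:0b:00:00:c2:62:02"},   # mail server's HBA
        "lun3": {"50:06:0b:00:00:c2:62:04"},   # database server's HBA
        "lun4": {"50:06:0b:00:00:c2:62:06"},   # intranet server's HBA
        "lun5": {"50:06:0b:00:00:c2:62:08"},   # fifth server's HBA
    }

    def visible_luns(hba_wwid, masks=lun_masks):
        """Return the LUNs the array would present to this HBA."""
        return [lun for lun, allowed in masks.items() if hba_wwid in allowed]

    print(visible_luns("50:06:0b:00:00:c2:62:02"))   # ['lun2']
    print(visible_luns("50:06:0b:00:00:c2:62:ff"))   # unknown HBA: []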

The native disk-management tool in each OS let us format and configure the drives as if they were DAS. We brought each box offline and moved data from DAS to SAN, and then mapped the apps to the new locations. We copied a 12-GB database to its new home on the SAN, for example, ran a thorough integrity check and then pointed the database app to the new data location. After verifying all our data sets, we had a working SAN that was transparent to users, servers and applications.

The trade-off of going with low-end management tools rather than expensive, high-end SAN management software, however, was that we didn't get enhanced capabilities like dynamic provisioning, automatic migration of aged data, user or group quotas, or shared access to single data sets. But our needs aren't that complex, and the ROI on a pricey SAN package is a tough sell for us. (Your mileage may vary, however, if you have a dynamic environment or spend a passel of hours maintaining your disk farm.)
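For the plain file shares, the verification step mentioned above boils down to comparing the copy against the original. Below is a minimal Python sketch of that kind of check; the paths are hypothetical stand-ins for the old DAS and new SAN mount points, and the 12-GB database got its own, more thorough check.

    # Hash every file under two directory trees and compare the results,
    # to confirm a DAS-to-SAN copy arrived intact. Paths are placeholders.
    import hashlib
    from pathlib import Path

    def file_sha1(path, bufsize=1 << 20):
        h = hashlib.sha1()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(bufsize), b""):
                h.update(chunk)
        return h.digest()

    def tree_digest(root):
        """Combine relative path and contents of every file into one digest."""
        root = Path(root)
        total = hashlib.sha1()
        for path in sorted(p for p in root.rglob("*") if p.is_file()):
            total.update(str(path.relative_to(root)).encode())
            total.update(file_sha1(path))
        return total.hexdigest()

    old = tree_digest("/mnt/das/userdata")   # hypothetical DAS location
    new = tree_digest("/mnt/san/userdata")   # hypothetical SAN location
    print("copies match" if old == new else "MISMATCH -- do not cut over")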

Reality Check

To make room for growth, we used some advanced feature sets in the OS. In Windows 2003, we set up our logical SAN drives as dynamic disks, which let the OS merge additional LUNs with existing drives to expand storage without rebooting. While testing, we added two 170-GB drives and set them up as a 170-GB RAID 1 array divided into three LUNs. Windows 2003's storage management let us format the first LUN as a dynamic NTFS drive and then add the second LUN "live," increasing capacity in real time without restarting the server. We can continue adding physical drives to the Xserve and grow our storage pools using dynamic disks.
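As a rough illustration of that growth path (not the Windows API, just a model), the Python sketch below treats a dynamic volume as a list of LUNs that can be extended while the volume stays online; the sizes are approximations of our 170-GB, three-LUN test.

    # Toy model of growing a dynamic volume by absorbing LUNs while online.
    # This only illustrates the bookkeeping; Windows does the real work.
    class DynamicVolume:
        def __init__(self, name, first_lun_gb):
            self.name = name
            self.lun_sizes_gb = [first_lun_gb]

        def extend(self, lun_gb):
            """Add another LUN 'live'; capacity grows with no reboot."""
            self.lun_sizes_gb.append(lun_gb)

        @property
        def capacity_gb(self):
            return sum(self.lun_sizes_gb)

    vol = DynamicVolume("SAN-test", first_lun_gb=57)   # ~170 GB split three ways
    vol.extend(57)                                     # second LUN added live
    print(vol.name, "->", vol.capacity_gb, "GB across", len(vol.lun_sizes_gb), "LUNs")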

So far, we're seeing overall improvements in data-access speeds and app performance. But we'll see how real-world times fare when the students return in the fall and beat the heck out of their storage and mail accounts hosted on the SAN.

We've ordered a fifth drive to mount as a hot spare--RAID 5 provides a solid platform for our data, and the hot spare gives us extra security.

In the future, we can consolidate our servers even more by adding a second Xserve to host diskless servers and provide backup-to-disk functionality. We used a spare diskless Xeon box to test booting from a LUN on the SAN and installed Windows 2003 from CD to the SAN in eight minutes, versus the 35-minute install it takes on a single DAS drive. Disk-to-disk backup will give us a live copy of all our high-priority data, and tying the second Xserve back to a different building on campus provides additional insurance.

Of course, the SAN you build--big name or homegrown, iSCSI or Fibre Channel--will depend on your requirements, in-house skills and budget, so keep an open mind and review the technologies carefully. You might be pleasantly surprised by your options.

Joe Hernick, PMP, MS, is director of IT at the Loomis Chaffee School. Formerly a director at a Fortune 100 firm, he has 12 years of consulting and project-management experience in data and telecom. Write to him at [email protected].
