Upgrading Critical Storage

When it's time to replace your critical storage system, it's not just a matter of plug-and-play. We give you an insider's view of NWC Inc.'s storage project and share some lessons learned along the way.

January 28, 2005


Choose Your New Toy

First, select your new storage system. Luckily, there are plenty of options that vary in price and quality, so you can pick the system that works best for you. We chose Adaptec's Snap Server 18000 storage server for its price ($15,000 from Dell, slightly less elsewhere), design and iSCSI support.

We were looking for an affordable solution that was not rate-limited by a PC-based design. The Snap box's backplane provides more bandwidth than a regular PC appliance, though like most systems, it still uses some PC components. The backplane uses Fibre Channel, so we got high throughput for a good price. There are cheaper, PC-based NAS products out there, but we needed more speed. And while we don't need iSCSI support today, we wanted to be prepared to migrate to it as our e-mail and databases start to consume the available disk space on our servers. (For more details on the Snap Server 18000, see ID# 1523sp2.)

After you choose a product, you need a plan of attack. That means knowing how much storage space your new system will provide, so you can partition your disks to replicate the system you're replacing and, ideally, offer more space on each volume.

Beware that modifying the configuration--and even your applications' source code--to use a different disk or CIFS (Common Internet File System) share can cost you plenty in productivity. Try to mirror the configuration of your original storage system. We created one partition on the new Snap server and named and laid it out exactly like our original NSS NAS, but with more space. We also configured a second volume as an iSCSI target so we could later move the portions of NWC Inc. that require block storage.
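If you want to sanity-check that mapping before you touch a single application, a quick script can confirm the new box answers to the same share names as the old one. Here's a minimal sketch in Python; the server and share names are hypothetical placeholders, not our actual configuration:

```python
# Minimal sketch: confirm the new NAS exposes the same share names the
# applications already use, so nothing has to be repointed after cutover.
# Server and share names below are hypothetical examples.
import os

OLD_SERVER = "nss"          # hypothetical host name of the NAS being retired
NEW_SERVER = "snap18000"    # hypothetical host name of the replacement box
SHARES = ["shared", "builds", "mail"]   # hypothetical CIFS share names in use

def missing_shares(server, shares):
    """Return the UNC paths that are NOT reachable on the given server."""
    missing = []
    for share in shares:
        unc = r"\\{}\{}".format(server, share)
        if not os.path.isdir(unc):      # UNC paths work directly on Windows
            missing.append(unc)
    return missing

if __name__ == "__main__":
    for server in (OLD_SERVER, NEW_SERVER):
        gone = missing_shares(server, SHARES)
        print(server, "OK" if not gone else "missing: {}".format(gone))
```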

Dude, Where's My Space?

One thing we forgot to consider: To keep snapshots of the volumes on the new storage box, you'll need 10 percent to 25 percent of the available storage space. Although losing 15 percent of our available space wasn't a problem, it would have gotten ugly if we had sized the drives exactly the same as the disks on the NSS, because we'd have come up short on snapshot space. To remedy that, we could reinitialize the entire Snap and redo the configuration, or turn off snapshots on the Snap volumes. Fortunately, we had massively oversized each partition to make room for future growth, so it'll be a while before we have to worry about that missing space.
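A little arithmetic up front would have caught this. Here's the kind of back-of-the-envelope check we should have run; the capacities and the 15 percent reserve are illustrative numbers, not Snap-specific figures:

```python
# Back-of-the-envelope sizing: how much usable space is left once a snapshot
# reserve is carved out of the raw volume. All figures are illustrative only.
RAW_CAPACITY_GB = 1000          # hypothetical raw space on the new volume
SNAPSHOT_RESERVE = 0.15         # 10-25 percent is typical; 15 percent here
OLD_VOLUME_GB = 800             # hypothetical size of the volume being replaced

usable_gb = RAW_CAPACITY_GB * (1 - SNAPSHOT_RESERVE)
print("usable after snapshot reserve: %.0f GB" % usable_gb)

# If the new volume were sized to exactly match the old one, the reserve
# would leave it short -- the trap described above.
if usable_gb < OLD_VOLUME_GB:
    print("short by %.0f GB -- resize or disable snapshots" % (OLD_VOLUME_GB - usable_gb))
else:
    print("headroom: %.0f GB" % (usable_gb - OLD_VOLUME_GB))
```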

Our plan of attack was simple, but it will work for most storage server installations: We configured everything in advance so that, when it was time to flip the switch, we could drop the NSS and reassign its IP address and host name to the new Snap server.

Synch Up

The last step in configuring your new storage system is replication and synchronization. Copying gigabytes of data between the two NAS systems by hand wasn't something we wanted to do, particularly since some of the data files change daily. We had hoped to use Computer Associates' BrightStor storage management software (which we had in the lab) to handle the replication and synchronization, but we realized we owned only a trial license for BrightStor. So, under a tight time line of just over a week, we went looking for a new replication and synchronization solution.

We first considered Snap's replication software for the Snap 18000, which the company was showcasing at Storage Networking World. The tool was a fit, but Snap couldn't deliver it within our time frame, so we went with XOsoft's WANSyncHA, which handles replication and synchronization in a straightforward manner. Using the WANSyncHA interface, it took us about two minutes to set up replication of all volumes from the NSS to the Snap Server.

Be sure you have a tool for copying data over to your new storage system and keeping it up to date throughout testing. That's easier than setting up bulk copies that must be run on a regular schedule.
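For contrast, here's a rough sketch of the fallback you're stuck with otherwise: a one-way bulk copy that has to be rerun on a schedule to stay current. The paths are hypothetical, and a real replication product keeps the copies in sync continuously rather than in passes:

```python
# Rough sketch of a scheduled one-way mirror: copy any file that is new or has
# changed since the last pass. Paths are hypothetical placeholders.
import os
import shutil

SOURCE = r"\\nss\shared"        # hypothetical old NAS share
DEST = r"\\snap18000\shared"    # hypothetical new NAS share

def mirror(src, dst):
    copied = 0
    for root, _dirs, files in os.walk(src):
        rel = os.path.relpath(root, src)
        target_dir = os.path.join(dst, rel)
        os.makedirs(target_dir, exist_ok=True)
        for name in files:
            s = os.path.join(root, name)
            d = os.path.join(target_dir, name)
            # Copy only files that are missing or newer on the source side.
            if not os.path.exists(d) or os.path.getmtime(s) > os.path.getmtime(d):
                shutil.copy2(s, d)   # copy2 preserves timestamps
                copied += 1
    return copied

if __name__ == "__main__":
    print("files copied this pass:", mirror(SOURCE, DEST))
```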

Some days it seems you're just not meant to touch a critical project. It was one of those days when we installed WANSyncHA: The software began returning permission errors on the Snap 18000 drive as soon as we got it running.

The user interface was running as the logged-in user, so to rule out a basic permissions problem we created folders on the drive through Windows Explorer (we had tested this when we configured the Snap initially, but better safe than sorry). Next, we looked up the error message in the documentation. Although XOsoft documents about 20 error messages that WANSyncHA returns, ours wasn't among them.

We knew the product performs 24/7 synchronization of files, and that such applications nearly always rely on a Windows service or Linux daemon to keep running through system reboots and downtime. But we didn't bother to check whether any services were running to support WANSyncHA. Turns out there were, and they were logged in as "localSystem."

XOsoft's technical support gently reminded us that services (by default) run as the "localSystem" user. Our Snap 18000 was configured to get authentication information from Active Directory, so once we stopped the replication and synchronization service, gave it a valid Active Directory Services user ID and restarted it, our error messages subsided and replication was under way.
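If you'd rather catch this yourself than hear it from tech support, a quick query of the service's configuration shows which account it runs as. The sketch below wraps the standard Windows "sc qc" command; the service name is a hypothetical placeholder, not the product's actual service name:

```python
# Check which account a Windows service runs as, using the standard
# "sc qc" command. The service name below is a hypothetical placeholder.
import subprocess

SERVICE_NAME = "ReplicationService"   # hypothetical -- substitute the real name

def service_account(name):
    """Return the SERVICE_START_NAME reported by 'sc qc <name>'."""
    out = subprocess.run(["sc", "qc", name], capture_output=True, text=True).stdout
    for line in out.splitlines():
        if "SERVICE_START_NAME" in line:
            return line.split(":", 1)[1].strip()
    return None

if __name__ == "__main__":
    account = service_account(SERVICE_NAME)
    print(SERVICE_NAME, "runs as", account)
    if account and account.lower() in ("localsystem", "nt authority\\localsystem"):
        print("Warning: LocalSystem has no domain credentials for an AD-backed NAS share.")
```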

The lesson here is to consider the overall system, not just one piece. We were so focused on the storage upgrade that we missed a basic Windows management issue. Don't fall into that trap.

Still to go: The tape backup system needed some tweaks so it would work with the new storage server. And as long as we were on an upgrade rampage, we decided we would fix that, too.

While WANSyncHA was replicating the NSS to the Snap, we made our backup changes and double-checked everything. With backups configured to handle the new Snap and tapes set up for the increased disk capacity, we were set. You'll likely need more or larger tapes, and your backup schedule may change because of the increased storage size, so don't forget to check backups.

The final step was to coordinate the actual switchover with NWC Inc.'s CIO Lori MacVittie. Since downtime means you, the reader, cannot get to the NWC Inc. Web site (inc.gb.nwc.com/nwc/index.jsp?name=home.jsp), we had to be sure MacVittie had time to inform interested parties within NWC. She chose 10 a.m. Central time on Friday to fire up the new storage system, since our readers are either on break or too busy during that period to visit NWC Inc. Make certain you coordinate the changeover with everyone in your organization who'll be affected by the downtime--from businesspeople to systems administrators--even if you expect it to be short, like ours was.

When You're Hot, You're Hot

We flipped the switch with a bit of trepidation: What if everything came down hard and our NSS, which is rarely shut down, didn't come back up? But all went well. Renaming the Snap and updating its IP address took five minutes, most of it reboot time for the Snap. When the server came back up, all connections were restored.
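A short post-cutover check can confirm the takeover worked before your users do it for you: the old host name should now resolve to the new box, and every share should answer. The host name, address and share names below are hypothetical:

```python
# Post-cutover sanity check: does the carried-over host name resolve to the
# new box, and are the shares reachable? All names below are hypothetical.
import os
import socket

HOST = "nss"                       # host name carried over to the new Snap server
EXPECTED_IP = "10.0.0.50"          # hypothetical address reassigned to the Snap
SHARES = ["shared", "builds"]      # hypothetical share names clients depend on

resolved = socket.gethostbyname(HOST)
print(HOST, "->", resolved, "(expected {})".format(EXPECTED_IP))

for share in SHARES:
    unc = r"\\{}\{}".format(HOST, share)
    print(unc, "reachable" if os.path.isdir(unc) else "NOT reachable")
```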

Because the Snap server is smaller, we knew it would use less power, but we were pleasantly surprised with just how little it consumes. The American Power Conversion (APC) rack that contains our storage and most of NWC Inc. has a built-in UPS that had been running at the edge of full for nearly a year--92 percent sustained. But after we shut down the NSS NAS and turned on Snap, our power consumption dropped to 74 percent. That may help with temperature control, too.

The old NSS drive arrays generated a lot of heat, and with the number of machines we run in our lab nearly overpowering our HVAC, on extremely hot days we have to shut down some noncritical machines to cool things off. The new storage server takes up 5U of rack space versus the 27U consumed by the NSS, so it makes sense that the newer system would run cooler.

If you plan to replace your critical storage system, map out your strategy, find the right tools for the job, and if you can, use replication to move your data. There's always a chance you'll miss something: Our new machine, for example, uses ADS, but the old one did not. So a couple of machines that were not in the domain failed when reconnecting and needed their drives remapped--not a huge deal, but something we didn't plan for. You may run into roadblocks like this, but let's hope it's nothing that ruins your day or drags out your implementation.

Don MacVittie is a technology editor at Network Computing. Previously he worked at WPS Resources as an application engineer. Write to him at dmacvittie@nwc.com.

How to Swap Out Your Storage

1. Shop around for storage that meets your needs. Add at least 20 percent to your disk space requirements to allow for growth.

2. Determine the changes needed in applications that access the storage, or decide on an in-place replacement. Look at each application that uses the space, from business applications to backup.

3. Set up the new storage. Make certain to configure extra space on volumes that were running out of free space in the old system.

4. Replicate or copy to the new storage. Use a replication and synchronization system if you have more than a few gigabytes or the data changes regularly.

5. Ensure that backups are configured to handle your increased disk capacity. At a minimum, make sure you have enough tapes (see the sizing sketch after this list) and that any added volumes are configured in the backup system.

6. Test the new storage after hours, possibly even failing over to it. The last thing you want to do is flip the switch without previous testing and end up with no storage.

7. Plan your final upgrade date with users in mind. Check with as many people as possible. Don't risk having a single business manager or system administrator say, "I didn't know, and it cost me money and productivity."

8. Flip the switch, then be prepared to fix all the little problems that crop up.
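For step 5, a quick calculation tells you whether your existing tape set still covers a full backup after the capacity increase. All the numbers below are illustrative placeholders; plug in your own:

```python
# Quick check for step 5: will the current tape set still cover a full backup
# after the capacity increase? All numbers are illustrative placeholders.
import math

NEW_USED_GB = 900          # hypothetical data actually stored after the upgrade
TAPE_CAPACITY_GB = 200     # hypothetical native capacity per tape
COMPRESSION_RATIO = 1.5    # hypothetical average hardware compression
TAPES_ON_HAND = 4

tapes_needed = math.ceil(NEW_USED_GB / (TAPE_CAPACITY_GB * COMPRESSION_RATIO))
print("tapes needed per full backup:", tapes_needed)
if tapes_needed > TAPES_ON_HAND:
    print("buy {} more tapes before the switchover".format(tapes_needed - TAPES_ON_HAND))
```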
