Convergent Roads Diverge

Three IT professionals share methods of data center and storage consolidation

September 14, 2006

BURLINGAME, Calif. -- StoragePlus -- Successful data center and storage consolidation requires companies to take a hard look at their environment and employ a flexible approach, according to users on a CTO panel here at StoragePlus.

For Michael Schaffer, CTO of online book and music seller Alibris, the challenge was to start with a shoestring budget and then upgrade to a full SAN as the business grew.

Working with partners such as Amazon, Barnes & Noble, and Borders, Alibris offers more than 60 million titles and sells more than 10,000 books a day. With that many titles and thousands of vendors, Schaffer says changes in price and inventory could bring up to 10 million records that need updating daily.

"Our business is transaction intensive and data heavy," he says. "We chew up our storage and CPU." Schaffer says Emeryville, Calif.-based Alibris couldn't afford a SAN at first so it spread its databases over multiple servers.

"That worked well for cost control, but it didn't work well for keeping our storage unified," he says. His large data sets and intensive applications strained the I/O of his SQL Servers, with never enough spindles or disk space to handle it all in the Sacramento data center.Alibris was acquired by private equity firm Oak Hill in May, and Oak Hill has aggressive growth plans. That enabled Schaffer to put out an RFP for a SAN that he ultimately purchased from 3PAR: a dual-controller InServe 800 with 144 Fibre Channel drives and 15 FATA drives. He uses the Fibre Channel drives for the transactional data, and the nearline FATA for development and testing.

"We have one storage system to manage now," he says. "We can update everything at once instead of asking, 'Which of all our spattered bits of storage that I have need upgrading? And when do I upgrade it?' "

When he bought the 3PAR system, Schaffer was projecting that inventory would double by July. As it turned out, it tripled by August. "And if we acquire more companies, it's very likely demand on our servers will go through the roof," he says.

To keep up with demand, he pushed six SQL Server clusters into production over the course of three weeks to consolidate all his database storage into a high availability unit. "Every weekend we were pushing two or three servers over," he says. "Just like that, we're no longer I/O bound."

John Greiner, CTO of Legal Services of New York (LSNY), needed a more efficient way of managing data across 16 offices throughout New York. LSNY is a federation of non-profit law firms with a $36 million operating budget. His mission was to consolidate and standardize email, databases, backups, and security applications for all the offices. His budget was negligible.

"Every dollar we spend in IT is a dollar we don't spend on our services," he says.

LSNY's biggest storage problem was that it had a wide variety of technologies, a lot of duplication, and data tied to specific offices. After the 9/11 attacks forced two of its offices to close, Greiner realized he needed disaster recovery as well.

Greiner's answer was to set up dual data centers to make everything redundant. LSNY uses Double-Take software to replicate critical servers between the data centers over T1 lines, and has Dell AX150i IP SANs in its data centers.


Office documents are stored on local servers at the branches, and those files are replicated to a central SAN and incrementally archived to tape at the other data center. Email, databases, and Internet access are centralized. "That's made it easier to access data remotely and more securely," he says.

Now his staff can access data at any office and find it easier to collaborate across offices and organizations. Systems crash a lot less frequently, and it's easier to add storage capacity.

Greiner estimates he saves $300,000 per year on equipment, and service quality has improved since he deployed replication. With legal work costing an average of $110 per hour, he also estimates a $350,000 annual saving from reduced downtime. LSNY has also reduced costs for tech staff at local offices by $70,000 per year.

But he says the real benefit is his team can do more work now. "A Merrill Lynch server goes down, it costs them millions of dollars an hour in downtime," Greiner says. "We don't lose that much, but it affects the lives of our clients."

Alan Stellpflug, principal systems engineer at biotech firm Genentech, was asked last year to design networked storage for one of the company's research groups. The previous environment had multiple file systems, but they weren't shared, and there were many copies of the same data throughout the groups. Restoring backed-up data often took weeks, tying up production systems and cutting into the work day.

Stellpflug says his goal was to set up a storage system where data could be accessed from anywhere under a common namespace. He wanted a redundant system with snapshots, disk backup, and tiered storage.

He installed two BlueArc Titan NAS systems -- a dual-head Titan 2200 with 84 Tbytes of primary storage on Fibre Channel drives and a single-head Titan with 168 Tbytes of SATA disk at a replication site.

"We used the replica for quick restores and what I call out-of-band backups," Stellpflug says. "We snap a chunk of data to SATA and back up that data while researchers can continue to work on the live file system."

Stellpflug says the biggest challenge came from researchers who use a lot of different applications and may need to access the same data. "If you have a single server doing one application, it doesn't matter what kind of storage you have," he says. "When you start increasing the number of servers and applications that access the same data over and over, then you really need to increase IOPS."

Genentech does hourly, daily, and weekly snaps, keeping 12 hourly snaps online "because our research group is OK with 12 hours of lost data. We still do tape backups for compliance. If the FDA says, 'We want to see data from two years ago,' we have it on tape."
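For readers who want to see what such a retention policy looks like in practice, here is a minimal Python sketch. Only the 12-hourly figure comes from Stellpflug; the daily and weekly counts are assumptions for illustration, and the function simply decides which snapshot timestamps a pruning job would keep.

```python
# A minimal retention sketch, not Genentech's actual configuration.
# KEEP_HOURLY = 12 is the figure from the article; the other counts are assumed.
from datetime import datetime, timedelta

KEEP_HOURLY = 12   # stated in the article
KEEP_DAILY = 7     # assumption
KEEP_WEEKLY = 4    # assumption

def snapshots_to_keep(snaps: list[datetime]) -> set[datetime]:
    """Keep the newest snapshot in each of the last N hours, days, and weeks."""
    buckets = (
        (KEEP_HOURLY, lambda t: t.replace(minute=0, second=0, microsecond=0)),
        (KEEP_DAILY,  lambda t: t.date()),
        (KEEP_WEEKLY, lambda t: t.isocalendar()[:2]),   # (ISO year, ISO week)
    )
    keep: set[datetime] = set()
    for count, bucket in buckets:
        seen = {}
        for t in sorted(snaps, reverse=True):           # newest first
            key = bucket(t)
            if key not in seen and len(seen) < count:   # newest per bucket,
                seen[key] = t                           # at most `count` buckets
        keep.update(seen.values())
    return keep

if __name__ == "__main__":
    now = datetime(2006, 9, 14, 12)
    hourly = [now - timedelta(hours=h) for h in range(48)]
    print(sorted(snapshots_to_keep(hourly)))
```

Everything the pruning job drops from disk remains recoverable from the compliance tapes, which is what lets the online window stay as short as 12 hours.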

His next project is to expand his SATA capacity and migrate data between tiers. "We want to be able to move data back and forth between Fibre Channel and SATA, and migrate from the data center to a remote site," he says.

-- Dave Raffo, News Editor, Byte and Switch

  • 3PAR Inc.

  • BlueArc Corp.

  • Dell Inc. (Nasdaq: DELL)

  • Double-Take Software Inc.
