City of Pforzheim

German Stadt took the plunge into virtualization to stop wasting storage resources

July 10, 2002

The municipal authority in the German city of Pforzheim -- founded in about 90 A.D., with roughly 100,000 citizens today -- saw that its data storage volumes were exploding. It needed either to crack open the storage unit yet again and insert more disks, or to find another way around the problem.

It's a familiar headache to hundreds of system administrators, but the IT team at this local authority decided that enough was enough. [Ed. note: Or, in German, Genug!] Here's why: Pforzheim was managing not only the authority's own storage systems, but also the computer center in the new Town Hall that handles consumer settlement accounting, the regional information system for the city and the Department of Works, book purchasing and lending at the City Library, online theater ticket sales, the PC network in the schools, and the Municipal Authority's Internet portal. All in all, 15 employees look after about 1,700 workstations.

"The management of the data generated between all these systems was getting out of hand," says Bernhard Enderes, manager of data processing at the human resources department of the City of Pforzheim.

The city's 70 Gbytes of storage had become insufficient; it needed 110 Gbytes. That meant the backup tape library, with a total capacity of roughly 1 terabyte, would also become too small. Furthermore, running data backups over the network (LAN) was dragging down backup performance. "We needed to integrate a larger tape library in such a way that the storage and backup procedures could be carried out separately from the LAN traffic," says Enderes.

They decided to build a SAN to decouple the storage from each server and make it available as a central pool of capacity, giving them more freedom to divvy up the resources as needed.

But as sensible as the move to a SAN might appear from an economic standpoint, it did not by itself solve the problem of capacity bottlenecks. "The redundant design of the booting procedures for our 30 servers alone would take up the space of 60 disks, each with 36 gigabytes, although only a fraction of that is required for operation," explains Andreas Hurst, head of the data processing department at the City of Pforzheim. "The rest of the storage lies unused and cannot be readily allocated to other servers whose pool quota in the SAN is running low." Requests for more capacity come in thick and fast, he says, particularly from the specialist technical departments -- for plans and aerial photos in digital form, for example. "It's not unusual to receive inquiries for 50 gigabytes more storage," he says.
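To put that in perspective, here is a rough back-of-the-envelope sketch in Python (purely illustrative; the 4-Gbyte boot footprint per disk is an assumption, not a figure supplied by Pforzheim):

```python
# Rough illustration of the boot-disk waste Hurst describes: 30 servers,
# each booting from a mirrored pair of 36-Gbyte disks, while only a
# fraction of each disk is actually used.

servers = 30
disks_per_server = 2             # mirrored boot pair
disk_size_gb = 36
used_per_disk_gb = 4             # assumed boot footprint per disk (illustrative)

allocated_gb = servers * disks_per_server * disk_size_gb
used_gb = servers * disks_per_server * used_per_disk_gb
stranded_gb = allocated_gb - used_gb

print(f"Allocated: {allocated_gb} GB, used: {used_gb} GB, "
      f"stranded: {stranded_gb} GB ({100 * stranded_gb / allocated_gb:.0f}%)")
```

Under those assumptions, more than 2 terabytes of disk would be tied up behind individual servers, with close to 90 percent of it stranded -- exactly the trapped capacity a shared pool is supposed to free.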

Unused SAN resources are another everyday concern for storage system administrators. The Storage Networking Industry Association (SNIA) estimates that the average utilization of storage systems ranges from just 35 percent to a maximum of 50 percent. The storage space is available -- it's just in the wrong place.

Enter virtualization: a hot topic in the industry that aims to solve this problem of underutilization by distributing the disk space that is actually available to match actual requirements (see Virtual Reality?).

In this scheme, a number of physical storage devices are presented as one or just a few logical storage resources, or "logical volumes." The host server sees these logical (virtual) units as ordinary SCSI disks. The virtual drives can be adapted to the user's needs in terms of size, speed, and fault tolerance -- without any downtime, so the virtualization vendors claim.
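One way to picture the virtualization layer is as a lookup table that maps each logical volume onto regions, or extents, of the physical disks behind it. The Python sketch below is a conceptual model only -- it is not how IPStor or any particular product is implemented, and the disk names are invented:

```python
# Conceptual model of storage virtualization: a logical volume is a
# table mapping its address space onto regions (extents) of physical
# disks. Not FalconStor's design; purely illustrative.

from dataclasses import dataclass

@dataclass
class Extent:
    disk: str        # physical disk identifier (invented names below)
    offset_gb: int   # where the region starts on that disk
    length_gb: int   # size of the region

class LogicalVolume:
    def __init__(self, name):
        self.name = name
        self.extents = []          # ordered physical regions backing the volume

    def grow(self, extent):
        """Extend the volume online by appending another physical region."""
        self.extents.append(extent)

    def size_gb(self):
        return sum(e.length_gb for e in self.extents)

# The host sees one 50-Gbyte "disk"; underneath, it spans two arrays.
vol = LogicalVolume("planning-dept")
vol.grow(Extent(disk="array1-disk07", offset_gb=0, length_gb=30))
vol.grow(Extent(disk="array2-disk03", offset_gb=10, length_gb=20))
print(vol.name, vol.size_gb(), "Gbytes across", len(vol.extents), "extents")
```

Because the host only ever addresses the logical volume, extents can be added or relocated behind the scenes, which is what makes the resize-without-downtime claim plausible in principle.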

All the configuration changes are carried out online, without halting the server or applications; the user continues to work with his or her own logical volume and the data it contains. Sounds like a dream, right? And it was: Pforzheim discovered that the companies selling this technology seriously lacked practical experience in implementing virtualization solutions.

Eventually, the city discovered FalconStor Software Inc. (Nasdaq: FALC) and its in-band virtualization offering, IPStor. Tests began in December 2001. Using Intel Corp.'s (Nasdaq: INTC) Iometer tool, which measures server I/O performance, Pforzheim's IT personnel generated a data load well beyond anything the client/server environment would normally produce. With the system operating at full capacity, redundant storage disks and RAID controllers were temporarily taken out of service to simulate failure scenarios and test how the setup responded.
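Iometer is driven through its own interface and configuration files, so there is nothing to reproduce verbatim here; the Python sketch below is only a schematic stand-in for the kind of sustained random read/write load the team generated, with the file name, block size, and duration chosen arbitrarily:

```python
# Schematic stand-in for an Iometer-style load test: hammer a test file
# with fixed-size random reads and writes for a while, then report
# throughput. Illustrative only; not what Pforzheim actually ran.

import os
import random
import time

PATH = "loadtest.bin"             # hypothetical test file on the volume under test
BLOCK = 64 * 1024                 # 64-Kbyte transfers
FILE_SIZE = 256 * 1024 * 1024     # 256-Mbyte working set
DURATION = 10                     # seconds

with open(PATH, "wb") as f:       # pre-allocate the working set
    f.truncate(FILE_SIZE)

bytes_moved = 0
start = time.time()
with open(PATH, "r+b") as f:
    while time.time() - start < DURATION:
        f.seek(random.randrange(0, FILE_SIZE - BLOCK))
        if random.random() < 0.5:
            f.write(os.urandom(BLOCK))   # random write
        else:
            f.read(BLOCK)                # random read
        bytes_moved += BLOCK

elapsed = time.time() - start
print(f"{bytes_moved / elapsed / 1024 / 1024:.1f} Mbytes/s over {elapsed:.1f}s")
os.remove(PATH)
```

Pulling a mirrored disk or a RAID controller while a load like this is running is the sort of drill the team used to see whether the storage stayed reachable.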

"The interplay between virtualization components and FC storage is the crucial criterion," says Hurst. "Insufficient performance and scalability would have repercussions for the whole system." The virtualized storage was quickly rendered available; the IT team did not report any drops in performance with IPStor, Hurst says. It was only in the Fibre Channel environment that problems emerged, city IT officials noted, as the interoperability between numerous individual components -- cards, switches, and so on -- had to be examined in the individual case by the manufacturer despite the certification insurances.

Pforzheim took IPStor into daily use at the beginning of 2002, together with the components for the SAN. So far it has not shown any weaknesses, and it was put through a significant test recently when the city replaced a database server holding 1.5 terabytes of integrated storage and moved that data onto the SAN. Allocating the virtualized drives via the IPStor console took about 10 minutes, according to Hurst.

Pforzheim's IT infrastructure includes QLogic Corp. (Nasdaq: QLGC) switches and HBAs, IBM Corp. (NYSE: IBM) arrays, Sun Microsystems Inc. (Nasdaq: SUNW) and Microsoft Corp. (Nasdaq: MSFT) servers, Veritas Software Corp. (Nasdaq: VRTS) software, and a host of other pieces that make up the jigsaw. For a complete list, check the attached table.
