NWC's Syracuse Labs Get a Makeover

Our Syracuse University Real-World Labs recently moved into new digs. Join Lab Director Ron Anderson as he takes you on a tour of the state-of-the-art facility.

September 23, 2005


Time to get serious about our goals for the new facility. First, we wanted to maximize the potential for collaboration among Network Computing's and sister publication Secure Enterprise's six full-time Syracuse staff members and cadre of freelancers, all of whom had been split between two buildings at the university. The buildings were only a couple of hundred yards apart but, as anyone who's studied the psychology of community knows, that distance may as well have been a couple of hundred miles, especially when you consider Syracuse's annual eight-month glacial period. Second, we wanted a rock-solid infrastructure that included power, cooling and structured wiring that would carry our new data center into the next decade, and a work area designed to help us achieve the first goal.

APC's InfraStruXure Manager

Power and Cooling

With the countdown under way, we had less than seven months to plan and execute the infrastructure for a new data center. We had worked on power and cooling nuts and bolts in 2003 but recognized that plenty had changed in the intervening years. We started looking for a ready-made setup that could be designed and installed in short order. Enter American Power Conversion.

New Setup at Syracuse University Real-World Labs

We've followed APC closely over the years and had heard its claim that it had data-center offerings for both power and cooling that were scalable yet could be up and running quickly. We've been using APC racks and UPSs in our labs for a number of years on a small scale and have always been impressed with the quality of the equipment. Conversations with APC about our new data center yielded some promising plans.

Our design included a 40-kW UPS with a PDU (power distribution unit) capable of delivering up to 5 kW of power to each of nine equipment racks, with a separate power feed to freestanding equipment on some bread racks that we planned to move into the new facility (see the diagram on page 56). Construction engineers at Syracuse University requested that we install equipment that could be fed via a three-phase 480V service. This was no problem because APC supports 208V, 480V and 600V three-phase feeds through its Type B InfraStruXure line.
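Those numbers invite a quick sanity check. The sketch below is our own back-of-the-envelope arithmetic, not APC's sizing method: nine racks capped at 5 kW each oversubscribe a 40-kW UPS on paper, which is typical, since racks rarely all draw their maximum at once, but it means the sustained average per rack has to stay closer to 4.4 kW (less whatever the bread racks pull).

```python
# Back-of-the-envelope power budget for the layout described above:
# nine racks, each capped at 5 kW by the PDU, fed from a 40-kW UPS.
RACKS = 9
PER_RACK_CAP_KW = 5.0
UPS_CAPACITY_KW = 40.0

sum_of_caps_kw = RACKS * PER_RACK_CAP_KW       # 45 kW if every rack hit its cap
average_per_rack_kw = UPS_CAPACITY_KW / RACKS  # roughly 4.4 kW per rack on average

print(f"Sum of per-rack caps: {sum_of_caps_kw:.0f} kW")
print(f"Average available per rack: {average_per_rack_kw:.1f} kW")
```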

Because we were putting 40 kW of power into the data center, we needed 40 kW (12 tons) of cooling to remove the heat generated by the equipment. The building uses chilled water for cooling, and APC had a cooling technology that fit our needs to a tee, the NetworkAIR IR. "IR" stands for "In Row," which means that the cooling system integrates directly into the row of racks. The NetworkAIR IR takes up three racks' worth of footprint on the data center floor and is designed to pull air out of a "hot aisle," or rear of the equipment racks, and distribute cool air to the front of the racks where the equipment needs it. We chose the model that runs off of a 480V utility feed but is backed up by a 208V feed from the UPS in the event of a utility power failure.
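If you want to check the kilowatts-to-tons conversion, the arithmetic is plain: a ton of refrigeration is defined as 12,000 BTU per hour, or about 3.5 kW of heat removal. The short sketch below is generic arithmetic, nothing APC-specific.

```python
# Rough check of the kW-to-tons figure quoted above. One ton of refrigeration
# is 12,000 BTU/hr; one kW is about 3,412 BTU/hr, so a ton removes ~3.5 kW.
KW_PER_TON = 12_000 / 3_412              # about 3.52 kW of heat per ton of cooling

heat_load_kw = 40.0
tons_needed = heat_load_kw / KW_PER_TON  # about 11.4 tons, rounded up to 12

print(f"{heat_load_kw:.0f} kW of heat needs about {tons_needed:.1f} tons of cooling")
```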

APC supplied utility rough-in specs to the contractors, and we scheduled the equipment to arrive about three weeks before our move. We estimated that this would give the contractors time to get the data center flooring in and some paint on the walls, while giving APC time to install the equipment before the move. We also wanted a week between the equipment installation and the move to make sure the UPS/PDU and cooling equipment were operating to specifications.

APC did a site survey to ensure everything would fit on the freight elevator and through doorways. The contractor assured us and APC that a tractor-trailer could get to the loading dock, so it seemed we were all set.

Turns out that a small tractor-trailer could get to the loading dock, but the very big rig carrying our three tons of equipment could only get within 50 feet of the dock because of tight turns and small spaces. Gordy Carlson, the on-site construction manager for the university, saved the day by firing up a large diesel-powered telescopic material handler and moving the pallets from the truck to the loading dock. APC's movers took the equipment from the loading dock to the data center and placed the UPS, PDU and NetworkAIR in their final positions so the building contractor's electrician and plumber could hook up the building utilities in preparation for start-up by APC.

APC scheduled its start-up using two different groups of specialists, one for power and one for cooling. It didn't take long after the technicians arrived to realize that, in typical geek fashion, the building plumber and electrician were averse to opening the installation manual--they tried to wing it and missed the mark. The chilled-water piping and the electrical conduit into the cooling unit needed to be redone so the rest of the unit could be assembled properly. That would take time to schedule, so the cooling guy left.

The APC power guys spent a full day unpacking a seemingly endless supply of boxes and assembling all of the bits and pieces. By the end of the day, they flipped the switch, supplying 40 kW of UPS-protected power to the two rows of equipment racks in the data center.

Meanwhile, we got the plumber and electrician in to rip out the piping and conduit so we could assemble the various pieces of the cooling system. Within a few days, and with some additional gray hair, we had the utilities rerun, and a return visit from APC led to an uneventful NetworkAIR start-up.

Our APC power and cooling system includes intelligent devices tied into a purpose-built management system. The last item on the agenda after the power flowed to the equipment was some end-user training on the ISX Manager software. ISX Manager auto-discovered our UPS, our PDU, each of the rack PDUs and the three environmental monitors we had installed. It had problems discovering the NetworkAIR IR, but a call to APC technical support got that resolved. The management interface lets us drill down into each device and sends alerts when readings exceed global thresholds. We also benefit from APC's remote monitoring: when an alarm is triggered within any of the devices in ISX Manager, the remote-monitoring service notifies our designated contacts by phone and e-mail.
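For a feel of what that threshold-and-alert behavior amounts to, here's a minimal sketch of the same idea. It isn't ISX Manager's code or APC's API: read_intake_temp() is a hypothetical stand-in for however you query your environmental monitors, and the SMTP host and addresses are placeholders.

```python
"""Minimal sketch of the kind of threshold alerting ISX Manager handles for us.
Assumptions: read_intake_temp() stands in for a real query to an environmental
monitor; the SMTP host and e-mail addresses are placeholders."""
import smtplib
import time
from email.message import EmailMessage

TEMP_THRESHOLD_C = 27.0          # a "global threshold," ISX Manager-style
POLL_INTERVAL_SECONDS = 60

def read_intake_temp() -> float:
    # Placeholder: replace with a real query to your environmental monitor.
    return 25.0

def send_alert(reading: float) -> None:
    msg = EmailMessage()
    msg["Subject"] = f"Rack intake temperature {reading:.1f} C exceeds threshold"
    msg["From"] = "monitor@example.com"            # placeholder addresses
    msg["To"] = "oncall@example.com"
    msg.set_content(f"Reading {reading:.1f} C is above {TEMP_THRESHOLD_C} C.")
    with smtplib.SMTP("mail.example.com") as smtp:  # placeholder SMTP host
        smtp.send_message(msg)

while True:
    temperature = read_intake_temp()
    if temperature > TEMP_THRESHOLD_C:
        send_alert(temperature)
    time.sleep(POLL_INTERVAL_SECONDS)
```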

Structured Cabling

We needed a flexible wiring plan that would let us connect any device in the data center or in our work areas to any network we maintain. We also needed to tie each device in the equipment racks into our digital KVM environment. To solve this problem we adopted a "data closet" model, implementing a home-run wiring scheme to racks 4 and 5, which house all of our network switches and routers as well as our digital KVM switches.

We ran 20,000 feet of Berk-Tek LANmark-1000 Category 6 Ethernet cable to connect each equipment rack to our "data closet" racks. The data center bread-rack patch panel and each of the work-area patch panels were cabled back to the data closet racks as well. Racks 4 and 5 share a total of 624 network connections.

Our structured cabling installers, Matrix Communications Group, needed additional room to run 312 data cables into the top of each data closet rack, so they cut 8"x12" holes in the top of each rack and lined the sharp edges of the cutouts with split plastic cable loom to protect the Cat 6 cable. As we checked on the progress of the structured cabling during the installation, we were impressed with the care Matrix took while routing the cable bundles between and within each rack enclosure. We're fussy about doing a job the right way, but Matrix was even fussier. Before we put the structured cabling job out to bid, we had already solicited feedback from previous Matrix customers, so we weren't surprised by the quality of the work. Finally, before turning the finished job over to us, Matrix tested each of the 624 cable runs to ensure they met the Telecommunications Industry Association Cat 6 standard.

All cable runs were terminated at Ortronics Clarity6 Cat 6-rated patch panels (www.ortronics.com). We used high-density, 96-port panels for most of our connections in the two data closet racks, freeing up space in the racks for active electronics. Three 96-port patch panels were installed in the front of each of the data closet racks, and a 96-port plus a 48-port panel were installed in the rear of each of these racks. We needed most of our connections in the front because our network equipment has front-mounted RJ-45 jacks. Our digital KVM switches, on the other hand, have their connectors in the rear, so the patch panels in the rear of racks 4 and 5 are used for connecting equipment to the KVM switches.

Each equipment rack (1, 2, 3, 6, 7, 8 and 9) contains one 48-port and one 24-port Ortronics Cat 6 patch panel. Forty-four of these ports terminate in the front of our data closet racks, and 28 terminate in the rear.
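The cable counts are easier to follow with a quick tally. The sketch below is just our arithmetic from the figures above; the article doesn't break down how the remaining runs split between the bread-rack panel and the work-area panels, so only the totals come from the text.

```python
# Quick tally of the cable runs described above.
EQUIPMENT_RACKS = 7                 # racks 1, 2, 3, 6, 7, 8 and 9
FRONT_PER_RACK, REAR_PER_RACK = 44, 28

front_runs = EQUIPMENT_RACKS * FRONT_PER_RACK   # 308 runs to the front panels
rear_runs = EQUIPMENT_RACKS * REAR_PER_RACK     # 196 runs to the rear panels
equipment_total = front_runs + rear_runs        # 504 runs from equipment racks

TOTAL_RUNS = 624                                # total stated for racks 4 and 5
other_runs = TOTAL_RUNS - equipment_total       # 120 from bread rack and work areas

print(front_runs, rear_runs, equipment_total, other_runs)
```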

Because our data center is not on a raised floor, we have cable trays and ladders mounted above the racks to provide paths for the cabling. This is a new experience for us--our old labs were located in machine rooms with raised flooring. In a lab environment that changes daily, it's tough to keep patch cables organized. We're still looking for a foolproof method.

Collaboration

It's only been a couple of months, so it's still too early to gauge the full ramifications of the new facility. But all of the early signs indicate we're well on our way to achieving our goals of a state-of-the-art lab and better communication among our staff members and freelancers. Certainly, our new facility is an order of magnitude better than our old digs. We've got a robust, scalable infrastructure with plenty of room to grow and all the bells and whistles of a modern-day data center. We have windows, cell phone coverage, comfortable furniture and, even though we're still living out of boxes, a better work area than we've had in years. Thanks, Syracuse University!

Even more important than all these creature comforts is the fact that we're all together. We see each other, we talk around the water cooler, and we bounce ideas off one another during casual conversations that, in our previous, separate existence, had to be planned.

Ron Anderson is Network Computing's lab director. Before joining the staff, he managed IT in various capacities at Syracuse University and the Veterans Administration. Write to him at [email protected].

A New WAN

While designing our new data center during the first half of 2005, we also planned and implemented a new WAN infrastructure that would offer us better security and manageability. Our design goals for the new WAN were VPN connectivity between our Real-World Labs® in Green Bay, Wis., and Syracuse; high availability for our production equipment in Syracuse; a hub-and-spoke model for VPN connectivity between our technology editors and the labs; and the ability to manage all the devices from a single console. After evaluating a number of options, we chose to implement a system from SonicWall that included the Pro 3060, Pro 2040 and TZ 170 Wireless VPN/Firewall appliances; SonicWall's Global VPN Client for mobile users; and SonicWall's Global Management System.

SonicWall's system made sense for us from a number of perspectives. First, many of our technology editors use SonicWall's gateways in their home offices, and those products have provided years of mostly trouble-free service. When there were problems, technical support was quick to provide solutions. And because the management interfaces for devices in SonicWall's product line are nearly identical, we needed little retraining. Finally, we wanted a single vendor for the appliances in the two labs and in our home offices, and SonicWall had the devices we needed for both, including integrated wireless access points for our home users. As an added bonus, the lab network and home appliances all run SonicWall's SonicOS Enhanced, which provides the foundation for some optional add-ons that we wanted to layer into our overall security infrastructure, including gateway antivirus, intrusion prevention and anti-spyware services.

We started implementing the new system in January with a small pilot that included putting one of our Syracuse lab networks behind a Pro 3060 and two technology editors' home networks behind TZ 170 Wireless firewalls. We also implemented the management system to get experience managing the WAN infrastructure prior to a full rollout. The pilot was complicated by the fact that SonicOS Enhanced 3 was still in beta on the TZ 170 Wireless. Being the brave souls (read: idiots) that we are, we decided to move forward with the beta software for the pilot because we were planning to use 3.x when we went to production. Hey, gold code was to be released in February, so it couldn't be too bad, right?

There's a reason they call it beta software. The biggest problem we faced was that the VPN tunnels would suddenly disappear: each side of the tunnel would think everything was fine, but no traffic would pass through. A cold restart on the TZ 170 side would fix the problem, but it drove us crazy. We made it through the pilot by the end of April with a few more gray hairs but otherwise relatively unscathed, and we rolled out our production implementation over the next couple of months after version 3 shipped. We've been cruising along smoothly ever since.
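That failure mode, both ends convinced the tunnel is up while nothing passes through it, is exactly what a dumb end-to-end reachability check catches. Here's a minimal sketch of the kind of watchdog that would have saved us some gray hair during the pilot; the far-side address is a placeholder, the ping flags are Linux syntax, and none of this is SonicWall code.

```python
"""Minimal end-to-end tunnel check: ping a host that only answers across the
VPN and complain when it stops. Assumptions: 10.1.1.10 is a placeholder
address on the far side of the tunnel; -c/-W are the Linux ping flags."""
import subprocess
import time

FAR_SIDE_HOST = "10.1.1.10"     # placeholder: reachable only via the tunnel
CHECK_INTERVAL_SECONDS = 300

def tunnel_is_up() -> bool:
    result = subprocess.run(
        ["ping", "-c", "3", "-W", "2", FAR_SIDE_HOST],
        capture_output=True,
    )
    return result.returncode == 0

while True:
    if not tunnel_is_up():
        print(f"Tunnel to {FAR_SIDE_HOST} looks dead; time for a cold restart.")
    time.sleep(CHECK_INTERVAL_SECONDS)
```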

For our production environment we run the Pro 3060 in Syracuse in a high-availability configuration because a number of production services, including our editorial e-mail and nwc.com's primary DNS server, live behind that firewall. Two identical 3060s are tied together to eliminate a single point of failure at the firewall. If the active unit fails, the other takes over in a few heartbeats.

The lab in Green Bay was set up with a Pro 2040. We configured each of the home-office TZ 170 Wireless VPN/Firewalls in the Syracuse lab before sending the units out to the technology editors, so it was a plug-and-play operation on their part. Each unit shipped with a VPN tunnel preconfigured between the TZ 170 and the lab SonicWall, either the Syracuse Pro 3060 or the Green Bay Pro 2040. The tunnels use the MAC address of the unit as an identifier, with keep-alive running on the home-office units, so their IP addresses can change and the tunnels will still reconnect. Because most of our technology editors have cable-modem service with DHCP addressing, this was an important consideration. Previously, we had to change firewall ACLs manually every time a home-office IP address changed. Now the home-office firewall itself is the determining factor, not the IP address du jour.

On the software side, the GMS lets us create additional VPN tunnels among devices with a few clicks of a mouse. SonicWall refers to these as "Interconnected" tunnels, and we can set them up and tear them down on the fly depending on which editor is collaborating and which lab they happen to be using. The biggest bonus, though, is en masse management: whether it's upgrading firmware or adding an address object, we only need to do it once at the management console, and the changes are propagated to all the devices at whatever time we choose.
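That "change once, push everywhere" model is the part of GMS we lean on most. As a rough illustration of the pattern (not GMS's API; the device names and the apply_change() hook below are hypothetical), the whole idea boils down to a fan-out loop that keeps going even when one box is unreachable:

```python
from typing import Callable, List

# Hypothetical device list; in practice GMS tracks the appliances itself.
DEVICES: List[str] = [
    "syracuse-pro3060-primary",
    "syracuse-pro3060-backup",
    "greenbay-pro2040",
    "home-office-tz170w-01",
    "home-office-tz170w-02",
]

def push_to_all(devices: List[str], apply_change: Callable[[str], None]) -> None:
    """Apply one change to every managed device, reporting per-device results."""
    for device in devices:
        try:
            apply_change(device)      # e.g., add an address object, stage firmware
        except Exception as exc:      # keep going even if one box is unreachable
            print(f"{device}: failed ({exc})")
        else:
            print(f"{device}: updated")

# Example: a stand-in change that just logs what it would do.
push_to_all(DEVICES, lambda device: print(f"(would update {device})"))
```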

Finally, traveling users are well served by SonicWall's Global VPN Client. No matter where we go, we can easily establish a VPN connection to a Network Computing network and do our work.

Virtual Tour

Technology Editor Mike DeMaria invites you to take a truly virtual tour of the new Syracuse University Real-World Labs. Explore the panoramic view of the facilities, then watch a slideshow of the lab's progress and construction (you'll need QuickTime to view the movies).

The following is a QuickTime VR panorama of the new lab. Click on the image below to start the tour. Each panorama is about 2MB, so it may take your browser several seconds to load the QuickTime plug-in and render the movie. If you're unfamiliar with the QTVR interface, here are the basic controls:

  • Click and drag inside the panorama to move around; you must hold down the mouse button to move the camera. You can move left or right and, if you zoom in, up and down. The arrow keys also work.

  • Hold down the shift key to zoom in.

  • Hold down the control key to zoom out.

  • In certain locations in the movie, usually around doors or corners, the cursor will change into a hand pointing at a globe. These are called hotspots. Click one, and you will jump to a different location in the lab. There are six panoramas in all: three in the office area and three in the machine room.

  • A single click, outside of a hotspot, will return you to the starting position.

There is a controller bar below the panorama. The "-" button zooms out, and the "+" button zooms in. The "arrow with a question mark" button displays the hotspot locations; if you're having trouble finding the hotspots, click that icon.

Lab Construction Slideshow
This slideshow is approximately 18MB. You can start playing the video before the download completes.
