Building Blocks

Read about the hardware and facilities decisions that brought NWC Inc. to life.

December 2, 2002


NWC Inc. will sell products on the Web. To do that we needed a new two-ton air conditioner, a building security system, a UPS and power-distribution system and a new 100-amp circuit, new equipment racks, servers, switches, storage, routers, operating systems, a firewall and, most of all, talented people with varied skill sets. In the first part of this cover package, "A Start-Up Is Born," lab director Ron Anderson describes our business plan. In "Software Focus," technology editor Lori MacVittie talks about the applications. Here, we give you the scoop on the gear that makes your friends envious--the high-tech hardware and its support infrastructure.

Where We Live

Choosing the facility proved interesting. We picked the space because the location (conveniently in the same building as our existing Green Bay lab), the size of the suite (800 square feet) and the price were right. We're located in what is known quaintly as the "lower level," otherwise known as the basement. The building and property have been owned since the 1930s by the local Kos family, which also runs the management company, Kos Management. The site was, at one time, a chicken farm, complete with hatchery, henhouse and slaughterhouse. The main building is nearly 100 years old, and two additional structures are 30 and 50 years old, respectively.

Today, the building comprises mostly office space with a few walk-in businesses on the first floor. We've dubbed the decor, virtually unchanged from the 1970s, "little Swiss village." It's complete with mini cedar-shake roofs over the doors, fieldstone accents and mullioned windows.

The site is 50 feet from the cleverly named East River, and we were concerned about water getting into our lab. We checked with Kos Management and found that the building, despite its age, has a prodigious water-management system with redundant pumps. A closet in our suite contains one of the redundant sump pumps along with an impressive array of natural gas meters. The presence of not one but several gas mains has kept us from smoking the occasional surreptitious cigarette.

When we moved in a year and a half ago, NWC Inc. was barely a twinkle in Lori MacVittie's eye. At the time, we had in-line booster fans added to the air conditioning and installed two extra air vents as well as air dump-outs into the hallway to prevent overpressure. For power, we had 14 additional 20-amp circuits added to the back half of the room.

This setup served us fine for about nine months, until the amount of equipment in the lab outgrew what the air conditioning could handle. In-line booster fans were added to the dump-outs, and that kept things comfortable, more or less. The hot air we pumped into the hallway warmed the rest of the basement all winter without help from the furnace.
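
To put rough numbers on the problem: every watt the gear draws ends up as about 3.412 BTU per hour of heat that the air conditioning must remove. The sketch below uses a purely hypothetical equipment load--we never metered the room this precisely--but it's the kind of back-of-the-envelope math that shows how quickly a roomful of servers outruns ordinary office cooling.

    # Back-of-the-envelope heat-load math; the equipment draw here is hypothetical.
    BTU_PER_HOUR_PER_WATT = 3.412      # each watt of load becomes ~3.412 BTU/hr of heat
    TONS_PER_BTU_HR = 1 / 12000        # one "ton" of cooling removes 12,000 BTU/hr

    assumed_load_watts = 5000          # a modest few racks of servers and storage
    heat_btu_hr = assumed_load_watts * BTU_PER_HOUR_PER_WATT
    print(round(heat_btu_hr))                        # ~17,060 BTU/hr of heat to remove
    print(round(heat_btu_hr * TONS_PER_BTU_HR, 2))   # ~1.42 tons of cooling needed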

When planning for NWC Inc. we knew that the existing air conditioning would not be able to keep up. We consulted building management and American Power Conversion for a solution. APC suggested an excellent data-center cooling system, but it was designed for precise temperature and humidity control. In our space, with its sump pump and interior walls that go only as high as the drop ceiling, APC's precision equipment would have been a mismatch because it would have been constantly fighting outside factors not in our control. Instead, we installed a two-ton air conditioning unit, with the compressor outside and the heat exchanger/air handler mounted above our drop ceiling. (One of the saving graces of this location is the huge amount of space above the ceiling.)

The next issue was power. Until NWC Inc., we simply did not need redundant power anywhere except our communications rack. A rackmount APC 1400 was more than adequate to keep continuous power to our Quantum Snap 4100, DSL router, SonicWall firewall and Alteon WebSystems load-balancer. We consulted with APC again, and this time found an ideal solution: APC provided a PowerStruXure 12-kVA (kilovolt-ampere) UPS and four NetShelter VX racks. Sure, 12 kVA and four racks are indulgences now, but they ensure that we'll have room to expand.
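
The UPS rating translates directly into circuit requirements: apparent power divided by line voltage gives the full-load current. At a nominal 120 volts, 12 kVA works out to 100 amps. The quick illustration below assumes that 120-volt figure for simplicity; the actual feed details were worked out by APC and our electrician.

    # Why a 12-kVA UPS needs a hefty dedicated circuit (120 V assumed for illustration).
    ups_rating_va = 12000
    nominal_volts = 120

    full_load_amps = ups_rating_va / nominal_volts
    print(full_load_amps)    # 100.0 amps at full load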

The PowerStruXure required a 100-amp service unto itself, so we called in an electrician to wire us up a separate 100-amp service with a disconnect and about 10 feet of flex cable to let us move the rack housing the PowerStruXure around a bit. When the PowerStruXure and NetShelter VX racks came in, to our dismay, they didn't fit in the elevator. We ended up having the Otis Elevator repair guy remove the "headknocker" from the top of the elevator to get the racks in--they were simply too heavy to carry down the stairs. With the PowerStruXure safely in the lab, the electrician hooked up the UPS, the inspector inspected it and the facility was ready for NWC Inc.

Laying Network Pipes

We encountered some problems on the connectivity front. To ensure sufficient bandwidth for our new systems, we needed an Internet connection for NWC Inc. separate from the connection that supplies the Green Bay Real-World Labs®. We occasionally do ugly, disruptive things to the lab network and its Internet connection while testing, and that simply would not do for NWC Inc. We also wanted a second Internet provider online in the event of a catastrophic failure--or in case the ISP providing access went belly up. We learned this lesson the hard way last year with @link's precipitous demise, which left us with no Internet access except dial-up for nearly a month and a half.

First, we looked to our local Baby Bell, Ameritech/SBC. We discovered that a T1 line was too expensive on an ongoing basis and that SBC's local offerings for DSL were inadequate. We eventually settled on a 1.5-Mbps symmetrical DSL line from Choice One Communications. The only problem was that--as with DSL installs in Green Bay--secondary providers (read: anyone who isn't SBC) have to wait for SBC to perform a preinstall inspection and then allocate space in the switch. The long and the short of this is that we are using our McLeodUSA Green Bay lab connection for NWC Inc. until our permanent connection is completed by SBC and Choice One. On the plus side, we have our own Class C block of IP addresses that Choice One is thrilled to route for us at no charge because it doesn't have to assign us any of its own IP addresses.

On the server front, the first and easiest decision was to use rackmount servers. From there things got hairy.

First, we decided to use external storage. This meant that the servers could be relatively small. However, we needed to ensure that we'd have enough bandwidth and that the machines were reliable enough to handle our business environment. It was our intent to make sure each server, with the exception of the machines donated by Hewlett-Packard (hey, beggars can't be choosers), was powerful enough to handle the load of NWC Inc.'s applications for the next three years. Luckily, the HP machines were excellent, if slightly older, models and will be more than adequate for us now and for the foreseeable future.

All the machines we purchased or received as donations have dual hot-swappable power supplies and dual power-input sources (dual power input was a secondary consideration). We also made sure the main servers would feature dual Gigabit Ethernet NICs in a failover/load-balance mode to guarantee not only sufficient bandwidth but also connectivity in the event of a NIC failure. We purchased three 2U Dell 2650 servers in a dual-processor configuration with Gigabit copper NICs.
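
The payoff from doubling up on NICs and power supplies is easy to quantify, at least on paper: if the two components fail independently, the chance of losing both at once is the square of the chance of losing one. The figures below are hypothetical, not vendor MTBF numbers.

    # Illustrative redundancy math; the 2% annual failure rate is an assumption.
    p_single = 0.02              # assumed chance one NIC (or power supply) fails in a year
    p_both = p_single ** 2       # both must fail at once, assuming independent failures
    print(round(p_both, 6))      # 0.0004, i.e. a 0.04% chance of losing the pair

    # Side benefit of dual Gigabit NICs in load-balance mode:
    nics = 2
    print(nics, "Gbps available with both links up; 1 Gbps if one fails")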

Inventory

We know what you're thinking: This all sounds great, but what did it cost? Although we did have some gear donated, we know that's not an option for you, so we've included a detailed cost breakdown. Salary and physical plant expenditures are based on the going rates in Green Bay.

We also purchased, as a Web server, a 1U Dell server with Red Hat Linux preinstalled. In addition, we received several server flavors from HP: a DL580 with four Intel Pentium III Xeon processors at 900 MHz; an ML570 with four Pentium III Xeon processors at 900 MHz; and, most impressive, a truly massive DL760 with eight 900-MHz Pentium III Xeon processors. Each HP machine has four 36-GB SCSI hard disks in a RAID 5 configuration. On every machine, the OS and application software reside on the local RAID, with data on the external NAS (network-attached storage) system.

We received two NAS1000 NAS heads from Network Storage Solutions. These NAS heads feature 2.2-GHz Intel Xeon processors and 1 GB of RAM in a 1U form factor. We also received approximately 800 GB of Fibre Channel RAID 5 storage from Xyratex, which also supplied us with the StorageAppliance 1502 near-line storage box to be used as a backup device. The StorageAppliance 1502 has eight 160-GB IDE drives configured in a RAID 1 array with an included 3Ware Escalade ATA RAID controller. Once the current NWC Inc. environment is stable, we'll send this device to our Syracuse facility to act as off-site backup.
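
Raw disk counts overstate what we can actually store, so here's the usable-capacity arithmetic for the arrays described above. It's illustrative only: RAID 5 spends one disk's worth of space on parity, and mirroring keeps a full second copy of everything.

    # Usable space after RAID overhead (illustrative arithmetic, marketing gigabytes).
    def raid5_usable_gb(disks, size_gb):
        # RAID 5 loses one disk's worth of capacity to parity.
        return (disks - 1) * size_gb

    def mirrored_usable_gb(disks, size_gb):
        # Mirroring (RAID 1) keeps a second copy, so half the raw space is usable.
        return disks * size_gb // 2

    print(raid5_usable_gb(4, 36))        # 108 GB of local RAID 5 on each HP server
    print(mirrored_usable_gb(8, 160))    # 640 GB in the StorageAppliance 1502 backup box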

In the initial configuration, we decided not to use a tape solution, but instead to test the viability of disk-to-disk backup. If we find we need to add a tape solution, the StorageAppliance 1502 will act as an intermediate storage device for a disk-to-disk-to-tape backup system, reducing our backup window. The NAS devices have dual copper Gigabit Ethernet connections that plug into our NAS1000 switch. Using a pure NAS solution is a grand experiment. If it doesn't cut the mustard, we'll have to implement a SAN solution, but that's fodder for another article.

When we considered network requirements, the vendor choice was pretty clear. Cisco Systems, despite Wall Street instability, still holds a dominant market share lead in the network infrastructure space. There are plenty of strong, viable options, but we wanted to mirror what most companies have done--install Cisco gear in the network core.

So the question became, what would it take to support NWC Inc.'s server, storage and Internet requirements? Because we were starting with seven servers, a few of them monsters, and knew that number would grow over time, we decided nothing less than Gigabit Ethernet connections would do.

Next, we looked at potential single points of failure in the core of the network and decided two switches were also a Day 1 requirement. No business can make money while waiting for a fried power supply to be replaced. This led us to use two Cisco 4500-series switches with Layer 3 supervisor engines. We filled those chassis with 24-port 10/100/1000Base-T RJ-45 autosensing modules to accommodate anything that needed a connection. The two switches were then trunked together to form the network core running at 4 Gbps. We're keeping things running at Layer 2 to start but have the horses to apply QoS and other Layer 3 functions, such as OSPF routing, if necessary.
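
One number we did sanity-check was the inter-switch trunk. With seven dual-homed Gigabit servers attached, the bounding case--every packet crossing between the two switches--oversubscribes a 4-Gbps trunk about 3.5 to 1. In practice most traffic stays local to a switch, so treat the sketch below as the worst case under assumed traffic patterns, not a measurement.

    # Worst-case oversubscription of the 4-Gbps trunk between the two core switches.
    servers = 7
    gbps_per_server = 2 * 1          # dual Gigabit NICs in load-balance mode
    attached_capacity_gbps = servers * gbps_per_server    # 14 Gbps of server ports

    trunk_gbps = 4
    print(attached_capacity_gbps / trunk_gbps)   # 3.5 : 1 if every flow crossed the trunk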

We'll use some VLAN (virtual LAN) functionality to keep production and development traffic separate, but our initial requirements were pretty basic. Sure, we could have run down to Circuit City and bought a low-end switch, but with a network foundation like that, you're building a house of cards. The trick is to choose an affordable network architecture that serves you from Day 1 to Day 100 and beyond. We tried not to overengineer a solution that would be costly and never fully used.

On the network edge, we placed a pair of Cisco 7400-series routers just in front of the firewall. These will initially let us get on and off the Internet in a graceful fashion but could, in the future, support multiple ISP connections and secure links to business partners.
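
As for addressing, the Class C block Choice One routes for us is a /24--256 addresses--which is plenty to give each segment its own subnet. The sketch below shows one way such a block could be carved up; the prefix is a documentation placeholder rather than our real block, and the four-way split is illustrative, not our final design.

    # Carving a Class C (/24) into four /26s -- documentation prefix, illustrative split.
    import ipaddress

    block = ipaddress.ip_network("192.0.2.0/24")
    print(block.num_addresses)                 # 256 addresses in the block

    names = ["DMZ", "production", "development", "spare"]
    for name, subnet in zip(names, block.subnets(new_prefix=26)):
        print(name, subnet)                    # four /26 subnets, 62 usable hosts each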

Things We're Keeping to Ourselves

Security is an increasingly high-profile concern for IT. With the majority of NWC Inc.'s revenue coming from online transactions, it's one of our highest priorities as well. Not only must we secure purchases, we need to safeguard our customers' privacy. That's good business from a customer-relationship point of view, and it's becoming increasingly apparent that companies that don't make a best-effort attempt to secure customer data will be held financially liable.

There was no discussion on whether to deploy a firewall--it was a given. But selecting the firewall was a challenge. While we initially favored Check Point Software Technologies' offerings, the additional hardware costs were prohibitive. Ultimately, we decided on a SonicWall solution, based on a lower TCO and staff familiarity with the product line.



Business Applications Labs Network (diagram)

We also designed our network with security in mind, leaving only the Web server in the DMZ and all other services routed to and managed by the firewall. But a firewall does not generally inspect packets at Layer 7, where most Web-based attacks are initiated. We wanted to avoid the Nimdas and Code Reds of the future, and while we can't stop them from attacking, we can stop them from propagating by employing an Apache Web server running on a Red Hat Linux server. We've locked down the server by removing nonessential services, allowing secure access only from specific servers for management purposes and applying security patches.

Associate technology editor Steven J. Schuchart Jr. covers storage and servers for Network Computing. Previously he worked as a network architect for a general retail firm, a PC and electronics technician, a computer retail store manager, and a freelance disc jockey. Technology editor Lori MacVittie has been a software developer, a network administrator and a member of the technical architecture team for a global transportation and logistics organization. James Hutchinson is Network Computing's director of editorial content. Write to them at [email protected], [email protected] and [email protected], respectively.

• Avoid vendor up-sell. Settle on a network architecture that meets your business needs for the next two years and try to forgo the extra bells and whistles until you really need them.

• Servers with preconfigured OSs save no time. You'll reinstall anyway.

• Default cable management isn't acceptable.

• Measure elevators and doorways before ordering equipment. You might still need to remove elevator roofs, but at least you won't be surprised.

• Don't forget maintenance. Things break, so make sure to figure out what your business exposure is. If you're a 9-to-5 business, maybe 24x7x365 coverage is overkill.

• If it appears complex, it will be easy. If it appears simple, it will take forever.

• To install or not to install? Installation by the vendor costs $$$; decide up front if you have the chops to do it yourself--it could save you major moola.

• Detailed and exhaustive planning from the outset is a pain in the butt, but you'll benefit from it down the road. The corollary is also true: Failure to plan now will cost you later.

• Write it down; if you don't, you'll forget. There are just too many details to track during such an extensive project.
