
Building Blocks

We received two NAS1000 NAS heads from Network Storage Solutions. These NAS heads feature 2.2-GHz Intel Xeon processors and 1 GB of RAM in a 1U form factor. We also received approximately 800 GB of Fibre Channel RAID 5 storage from Xyratex, which also supplied us with the StorageAppliance 1502 near-line storage box to be used as a backup device. The StorageAppliance 1502 has eight 160-GB IDE drives configured in a RAID 1 array with an included 3Ware Escalade ATA RAID controller. Once the current NWC Inc. environment is stable, we'll send this device to our Syracuse facility to act as off-site backup.

In the initial configuration, we decided not to use a tape solution, but instead to test the viability of disk-to-disk backup. If we find we need to add a tape solution, the StorageAppliance 1502 will act as an intermediate storage device in a disk-to-disk-to-tape backup system, reducing our backup window. The NAS devices have dual copper Gigabit Ethernet connections that plug into our core switches. Using a pure NAS solution is a grand experiment. If it doesn't cut the mustard, we'll have to implement a SAN solution, but that's fodder for another article.
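The article doesn't name the backup software, so as a sketch only: one common way to run disk-to-disk backup is a periodic rsync mirror from the NAS share to the near-line box. The mount points and the choice of rsync below are our assumptions, not something the NWC Inc. build specifies.

```shell
#!/bin/sh
# Minimal disk-to-disk backup sketch (hypothetical paths and tooling).
# Mirrors a source directory to a destination, deleting files that no
# longer exist on the source, so the near-line box tracks the primary NAS.
backup_share() {
    src="$1"   # e.g. the NAS1000 share mount point (illustrative path)
    dst="$2"   # e.g. the StorageAppliance 1502 mount point (illustrative)
    rsync -a --delete "$src"/ "$dst"/
}

# Example invocation (paths are illustrative):
# backup_share /mnt/nas1000/data /mnt/sa1502/backup
```

Scheduled from cron, a mirror like this keeps the backup window short; adding a tape stage later just means dumping the destination directory to tape off-hours.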

When we considered network requirements, the vendor choice was pretty clear. Cisco Systems, despite Wall Street instability, still holds a dominant market share lead in the network infrastructure space. There are plenty of strong, viable options, but we wanted to mirror what most companies have done--install Cisco gear in the network core.

So the question became, what would it take to support NWC Inc.'s server, storage and Internet requirements? Because we were starting with seven servers, a few of them monsters, and knew that number would grow over time, we decided nothing less than Gigabit Ethernet connections would do.

Next, we looked at potential single points of failure in the core of the network and decided two switches were also a Day 1 requirement. No business can make money while waiting for a fried power supply to be replaced. This led us to use two Cisco 4500-series switches with Layer 3 supervisor engines. We filled those chassis with 24-port 10/100/1000Base-T RJ-45 autosensing modules to accommodate anything that needed a connection. The two switches were then trunked together to form the network core running at 4 Gbps. We're keeping things running at Layer 2 to start but have the horses to apply QoS and other Layer 3 functions, such as OSPF routing, if necessary.
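The article doesn't show the switch configuration, but the 4-Gbps trunk described above would typically be built by bundling four Gigabit ports into an EtherChannel. A hedged Cisco IOS sketch of one side of the link (interface numbers and channel-group ID are illustrative, not from the article):

```
! Hypothetical config for one side of the inter-switch trunk.
! Four Gigabit ports are bundled into a single 4-Gbps logical link.
interface range GigabitEthernet1/1 - 4
 channel-group 1 mode on
!
interface Port-channel1
 switchport
 switchport mode trunk
```

The mirror-image configuration goes on the second 4500, and traffic is load-balanced across the four members; losing one link degrades the trunk rather than severing the core.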

We'll use some VLAN (virtual LAN) functionality to keep production and development traffic separate, but our initial requirements were pretty basic. Sure, we could have run down to Circuit City and bought a low-end switch, but with a network foundation like that, you're building a house of cards. The trick is to choose an affordable network architecture that serves you from Day 1 to Day 100 and beyond. We tried not to overengineer a solution that would be costly and never fully used.
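For the production/development split, the usual approach is to define two VLANs and assign server ports to one or the other. A sketch in Cisco IOS, with VLAN IDs and the interface number chosen purely for illustration:

```
! Hypothetical VLAN split: production on VLAN 10, development on VLAN 20.
vlan 10
 name PRODUCTION
vlan 20
 name DEVELOPMENT
!
! Assign a server-facing port to the production VLAN.
interface GigabitEthernet2/1
 switchport mode access
 switchport access vlan 10
```

Because the inter-switch trunk carries all VLANs, the split holds across both core switches; if we later turn on the Layer 3 features of the supervisor engines, routing between the two VLANs becomes a policy decision rather than a recabling job.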