
Playbook: Staying One Step Ahead of Performance

There's no easy solution to the UDP-TCP performance dilemma. You can force UDP application endpoints to use a lower frame rate or less bandwidth, which gives your TCP applications a fighting chance. The long-term solution is to make sure you have enough bandwidth to accommodate all your traffic.
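As a rough illustration of throttling at the endpoint, the sketch below paces a UDP sender to a configurable bit rate instead of letting it flood the link. The target rate, packet size, and destination address are hypothetical values chosen for the example, not settings from any particular application.

```python
import socket
import time

TARGET_BITRATE = 2_000_000   # hypothetical cap: 2 Mbit/s
PACKET_SIZE = 1200           # payload bytes per datagram
DEST = ("192.0.2.10", 5004)  # example destination (TEST-NET address)

def paced_udp_send(payload_source, duration=10.0):
    """Send UDP datagrams no faster than TARGET_BITRATE for `duration` seconds."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    interval = (PACKET_SIZE * 8) / TARGET_BITRATE  # seconds between packets
    deadline = time.monotonic() + duration
    next_send = time.monotonic()
    while time.monotonic() < deadline:
        sock.sendto(payload_source(PACKET_SIZE), DEST)
        next_send += interval
        sleep_for = next_send - time.monotonic()
        if sleep_for > 0:
            time.sleep(sleep_for)  # wait rather than contend with TCP flows
    sock.close()

if __name__ == "__main__":
    paced_udp_send(lambda n: b"\x00" * n)
```

In practice you'd set the cap from whatever quality level the application can tolerate, leaving the remaining bandwidth for TCP traffic.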

Determining the appropriate server architecture, meanwhile, is a major piece of the performance-planning puzzle. Servers can be designed as multiprocessor systems, multiserver clusters or distributed hosts in a mesh, and each configuration has distinct benefits in different environments. Although you can't do much about the architecture a specific vendor endorses, understanding the benefits of its approach will help you decide whether to go with that vendor. Microsoft, for instance, doesn't provide a mechanism for multiple Windows DHCP servers across the network to communicate with one another, so you have to bundle everything into a multi-CPU host or create a Windows server cluster. Bottom line: Your application may drive your server processing architecture.

Multiprocessor systems generally are best for multithreaded applications, because passing threads to a local processor provides the fastest turnaround. However, if a system generates an extremely large number of threads, the overhead can kill the performance benefits. In that case, a cluster of distinct server hosts with locally contained processes scales better.
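To make the thread-overhead point concrete, the sketch below compares spawning one thread per work item against reusing a small, fixed pool. The task and counts are made up for illustration; on most systems the per-thread creation and scheduling cost becomes visible as the thread count grows.

```python
import time
import threading
from concurrent.futures import ThreadPoolExecutor

TASKS = 2000          # hypothetical number of work items

def work():
    sum(range(200))   # trivial stand-in for real request handling

def thread_per_task():
    """Create a new thread for every work item."""
    threads = [threading.Thread(target=work) for _ in range(TASKS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

def fixed_pool(workers=8):
    """Reuse a small pool of threads for the same work."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(lambda _: work(), range(TASKS)))

for label, fn in (("thread per task", thread_per_task), ("fixed pool", fixed_pool)):
    start = time.perf_counter()
    fn()
    print(f"{label}: {time.perf_counter() - start:.2f}s")
```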

At the other end of the spectrum, distributing the workload into manageable server domains, such as departmental Web and e-mail servers, is efficient. The trade-off is higher management cost and more systems to maintain. Adding hosts can also increase the load on back-end systems, such as a database management system, if many hosts contend for it.
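One common way to keep a growing number of front-end hosts from overwhelming a shared back end is to cap each host's concurrent connections. The sketch below is a generic illustration using a simple queue-based pool, with SQLite standing in for the real database; the pool size and query are arbitrary.

```python
import sqlite3
import queue
import threading

POOL_SIZE = 4  # hypothetical per-host cap on back-end connections

class ConnectionPool:
    """Hand out a fixed number of connections; callers block when all are busy."""
    def __init__(self, size=POOL_SIZE):
        self._pool = queue.Queue()
        for _ in range(size):
            self._pool.put(sqlite3.connect(":memory:", check_same_thread=False))

    def query(self, sql, params=()):
        conn = self._pool.get()  # blocks if the per-host cap is reached
        try:
            return conn.execute(sql, params).fetchall()
        finally:
            self._pool.put(conn)

pool = ConnectionPool()
threads = [threading.Thread(target=lambda: pool.query("SELECT 1"))
           for _ in range(20)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The same idea applies whatever the back end is: each additional front-end host adds at most POOL_SIZE connections, so total contention grows predictably.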

Off-loading some of the server processing to an add-on card can help. Network adapters with dedicated TCP processors or SSL accelerator cards, for example, can significantly reduce the processing demands on a host or cluster by taking over tasks the main processor would otherwise have to handle.

Another performance issue with servers is disk capacity and configuration. Generally, disk choice is driven by a need for high throughput on very large data sets or fast seek times for random data. This decision isn't always obvious. For example, servers that do nothing but serve a few very large files over relatively slow networks may be better served by a very large RAM disk and a single physical disk, with no need for a RAID setup beyond a simple mirror. Applications that benefit most from a throughput focus are those that host databases and multimedia, where uninterrupted reads and writes are commonplace and crucial to success.
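If you want to see which access pattern your workload leans toward, a crude test like the one below can help: it writes a scratch file, then compares sequential reads against random seeks. The file size, block size, and seek count are arbitrary, and real results depend heavily on OS caching, so treat the numbers as a rough indicator only.

```python
import os
import random
import time

PATH = "scratch.bin"   # hypothetical test file
FILE_MB = 256
BLOCK = 64 * 1024      # 64 KB per read

# Create the scratch file with incompressible data.
with open(PATH, "wb") as f:
    for _ in range(FILE_MB):
        f.write(os.urandom(1024 * 1024))

def sequential_read():
    """Read the whole file front to back."""
    with open(PATH, "rb") as f:
        while f.read(BLOCK):
            pass

def random_read(seeks=2000):
    """Jump to random offsets and read a block at each."""
    size = os.path.getsize(PATH)
    with open(PATH, "rb") as f:
        for _ in range(seeks):
            f.seek(random.randrange(0, size - BLOCK))
            f.read(BLOCK)

for label, fn in (("sequential", sequential_read), ("random", random_read)):
    start = time.perf_counter()
    fn()
    print(f"{label}: {time.perf_counter() - start:.2f}s")

os.remove(PATH)
```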