Playbook: Staying One Step Ahead of Performance

A longer-term way to smooth out the network, however, is to deploy a traffic shaper that imposes limits on individual traffic flows, or to buy a higher-capacity WAN connection. You can also distribute servers closer to your users so traffic bypasses the WAN link, though this is obviously not an option for every application.

Another trade-off with TCP is that its sessions remain in a dormant state (TCP's TIME_WAIT) for about eight and a half minutes after they close, depending on the software vendor, to prevent laggard packets from being mistaken for part of a separate session. That's a long time for a request that might take less than one second to process. And each of these "dead" connections consumes memory and processing capacity, so the overhead can burden systems that handle extremely high volumes of short-lived sessions.
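To see the cost in miniature, here is a minimal sketch (hypothetical port, trivial payload, using Python's standard socket module) of a server that answers and closes a connection per request. Because the server initiates each close, every finished connection lingers in the dormant state, and setting SO_REUSEADDR is the usual workaround so a restarted server can rebind its port while those old sockets expire.

```python
import socket

# A one-shot TCP server: every connection is opened, answered and closed.
# Because this side initiates the close, each finished connection lingers
# in the dormant TIME_WAIT state; without SO_REUSEADDR, a quick restart
# of the server can fail with "Address already in use" until they expire.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("0.0.0.0", 8080))      # hypothetical port
srv.listen(128)

while True:
    conn, _addr = srv.accept()
    conn.sendall(b"hello\n")     # a request that takes well under a second...
    conn.close()                 # ...but leaves a dormant socket behind
```

On Linux, a command such as ss -tan state time-wait will show the dormant sockets piling up while a server like this runs under load.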

Sometimes you can alleviate this by tweaking the protocol. Enabling HTTP 1.1 persistent connections (with or without pipelining), for example, lets the client reuse a single session for all of its requests or fetches. Rather than opening 20 sessions for all the embedded objects on a page, it opens just one, so it consumes fewer resources on the server. HTTP 1.1 also lets the client (rather than the server, as with HTTP 1.0) close the session. Or you can instead institute a proxy agent that reuses sessions on behalf of the clients. This is a common approach with Web interfaces to an IMAP server, for instance: the Web interface establishes a new IMAP session for each message-access operation, such as deleting a message. A popular solution is to put an IMAP proxy in between that multiplexes those sessions on behalf of the Web interface, which limits the load on the real e-mail server.
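To make the client-side reuse concrete, here is a minimal sketch using Python's standard http.client module; the host name and paths are placeholders, and a real page would also fetch its images and style sheets this way. All three fetches travel over one HTTP 1.1 keep-alive session instead of three separate TCP sessions.

```python
import http.client

# Several fetches over a single HTTP 1.1 keep-alive connection.
# Host name and paths are placeholders for illustration.
conn = http.client.HTTPConnection("www.example.com", 80, timeout=10)

for path in ("/index.html", "/style.css", "/logo.png"):
    conn.request("GET", path)
    resp = conn.getresponse()
    body = resp.read()           # drain the response before reusing the socket
    print(path, resp.status, len(body), "bytes")

conn.close()                     # the client decides when to hang up
```

The IMAP proxy applies the same idea one hop back: one long-lived session to the e-mail server absorbs what would otherwise be a flood of short-lived ones.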

Although most networked applications use TCP, a handful of high-profile applications, including VoIP, use UDP (User Datagram Protocol). UDP doesn't provide the reliability and flow control that TCP does, so it's a popular protocol for streaming media signals, such as video and voice, where a small amount of packet loss is better than forcing the datastream to stop and resynchronize every time there's a minor hiccup in the network.
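To illustrate how bare-bones UDP is, here is a hedged sketch of a sender (the address, port and payload size are placeholders). The datagrams are simply handed to the network; nothing is acknowledged, retransmitted or flow-controlled, which is precisely what makes the protocol attractive for voice and video.

```python
import socket

# Fire-and-forget datagrams: no handshake, no acknowledgements, no
# retransmission. A dropped packet is simply gone; that is tolerable
# for a voice sample, not for a file transfer.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
dest = ("192.0.2.10", 5004)      # placeholder address and port

for seq in range(100):
    payload = seq.to_bytes(2, "big") + b"\x00" * 160   # roughly one voice frame
    sock.sendto(payload, dest)   # returns as soon as the datagram is queued

sock.close()
```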

But UDP can cause other problems. Because UDP doesn't adjust its traffic rate to the loss and delay characteristics of the underlying network, UDP traffic won't slow down in the face of congestion. Meanwhile, TCP sessions sharing the same pipe will back off whenever they detect packet loss, shrinking to accommodate the UDP traffic. That's no big deal if all you care about is streaming media, but if you're trying to run mission-critical applications over the same network as your multimedia flows, knocking your TCP sessions off the network is not acceptable.
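Because the protocol won't back off on its own, any limit on a UDP stream has to be imposed by the sender or by the network gear in between, which is one more argument for the traffic shaping discussed earlier. The sketch below shows one illustrative, application-level approach: pace the datagrams so the stream never exceeds a fixed bit rate. The rate, destination and packet size are placeholders, not recommendations.

```python
import socket
import time

# Crude application-level pacing for a UDP sender: hold the stream to a
# fixed bit rate so it cannot starve TCP flows sharing the same link.
RATE_BPS = 64_000                        # cap the stream at 64 Kbit/s
PAYLOAD = b"\x00" * 160                  # 160-byte datagram payload
INTERVAL = len(PAYLOAD) * 8 / RATE_BPS   # seconds between datagrams at that cap

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
dest = ("192.0.2.10", 5004)              # placeholder address and port

next_send = time.monotonic()
for _ in range(500):
    now = time.monotonic()
    if now < next_send:
        time.sleep(next_send - now)      # wait so we stay under the cap
    sock.sendto(PAYLOAD, dest)
    next_send += INTERVAL
```

A traffic shaper at the WAN edge accomplishes the same thing without touching the application.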