How to Handle Request Failures

There are no easy answers when it comes to handling HTTP request failures

May 6, 2004


This column is in response to a question from an NDCF reader. He wanted to know: "Is there any way to handle 'in-flight' HTTP request failures more transparently?" One example would be a Web server crashing before the response to an HTTP request has been fully sent. The difficulty is that the request can fail at any point – both before and after it reaches its destination.

This is a very interesting question with a lot of nuances and shades of gray! The answer is neither a simple "yes" nor a "no."

The challenge in this kind of situation is that the surrounding components don't know how much of the request was digested and acted upon prior to the failure. As a result, it is possible that half a request was processed and some action started before the failure occurred. If a load balancer fronting the Web server were to send that request to another server after the failure, it is possible that an action gets duplicated. This could mean, for example, that a credit card gets billed twice.
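One common defense against the double-billing scenario is to make the sensitive action idempotent. The sketch below is a minimal illustration of that idea, assuming a hypothetical `charge_card` billing function and an in-memory store of idempotency keys (a real deployment would use a shared, durable store so every server sees the same keys):

```python
import uuid

# Hypothetical in-memory record of idempotency keys already processed.
# In production this would live in a shared, durable store.
_processed = {}

def charge_card(amount, idempotency_key):
    """Bill a card at most once per idempotency key.

    If a load balancer (or the client) replays the request after a
    server failure, the duplicate is detected and the original result
    is returned instead of billing the card a second time.
    """
    if idempotency_key in _processed:
        return _processed[idempotency_key]  # replay: no second charge
    result = {"charged": amount, "key": idempotency_key}  # pretend billing call
    _processed[idempotency_key] = result
    return result

key = str(uuid.uuid4())
first = charge_card(100, key)
second = charge_card(100, key)  # simulated retry after a failure
assert first is second          # same result, card billed exactly once
```

With this pattern in place, replaying a half-finished request becomes safe, because the second attempt simply returns the recorded outcome of the first.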

In most Web applications, there are a few things that require a user's request to stay put:

  • The most common reason is that the application may not share user state across all of the servers.

  • The application may need to be written to handle the situation whereby a request that is half processed on one server can be continued on another. Things like database locks and interaction with third-party systems (for example, billing) would need to be accounted for.

  • The backup server may not know what, if any, traffic has already been returned to the end user. For all the backup server might know, the application had already sent all of its data and was about to close the connection.

  • Clients may either become impatient and press "reload," or the browsers might time out altogether.
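The reload problem in the last point can be softened on the client side by retrying only requests that are safe to repeat. The following is a minimal sketch of such a policy, assuming a hypothetical `should_retry` helper: it replays only the HTTP methods defined as idempotent, and treats everything else (such as a POST that might bill a card) as unsafe to resend:

```python
# Methods that HTTP defines as idempotent and therefore safe to replay.
IDEMPOTENT_METHODS = {"GET", "HEAD", "PUT", "DELETE", "OPTIONS"}

def should_retry(method, attempts, max_attempts=3):
    """Return True only when a replay cannot duplicate a side effect
    and the retry budget has not been exhausted."""
    return method.upper() in IDEMPOTENT_METHODS and attempts < max_attempts

assert should_retry("GET", 1)
assert not should_retry("POST", 1)  # a POST might bill the card twice
assert not should_retry("GET", 3)   # retry budget exhausted
```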

From a security standpoint, if an attacker figured out the magic sauce to make a Web application stall or break, the last thing you'd want is for the load balancer to detect the server going down and retransmit the attacker's payload to every other Web server!

It is worth noting that load balancers configured to health-check servers will automatically reroute new requests to the remaining servers. This can still cause some grief for application users who were set to persist with the failed server, as they will need to log in again to re-establish their state.
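The health-check behavior can be sketched in a few lines. This is a minimal illustration, not any vendor's implementation: it assumes a hypothetical `/health` endpoint on each back-end server and routes new requests only to servers that answer it with a 200:

```python
import urllib.request

def probe(url, timeout=2.0):
    """Health-check one back-end server (hypothetical /health endpoint)."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False  # connection refused, reset, or timed out

def live_servers(servers):
    """Return only the servers that pass the probe, as a load balancer
    would before routing *new* requests. Note that existing sessions
    pinned to a failed server still break, as described above."""
    return [s for s in servers if probe(s + "/health")]
```

A server that refuses the connection simply drops out of the rotation; for example, `live_servers(["http://127.0.0.1:1"])` returns an empty list.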

The bottom line is that sometimes it is better to see an error message or broken page so that the fact that something went horribly wrong becomes obvious. Failures that half succeed, continue, and only later break down completely are much tougher to track down and fix, because it isn't clear where the problem started.

That said, there are some instances where load balancers can and do provide limited recovery from some kinds of application infrastructure failures:

  • Some load balancers are able to separate the client connections from the server connections with a queue for requests and buffers for responses. These load balancers can replay a request that fits within a single TCP packet to a different server if a failure occurs before the request message has been acknowledged by the server.

  • Load balancers with buffering can also buffer the response from the server and transmit the full response to the client even if the server fails after returning the response. This is possible only if the load balancer detects that the transaction has completed, and then decouples the client connection from the server connection, thereby insulating the client from the server failure.

  • Some load balancers in the past have provided a way to recover from back-end application failures by detecting when a response could not be generated and, hence, an error is returned. These load balancers provided a way to check the response for such errors and retry the request to another server. Such support may still be available from some load balancers.
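The first technique above — replaying a buffered request only when the failure happens before any response has come back — can be sketched as follows. This is an illustrative outline, not a real proxy; `send` is a hypothetical callable that delivers the buffered request bytes to one server and either returns the response or raises `ConnectionError`:

```python
def forward_with_failover(request_bytes, servers, send):
    """Try servers in order, replaying the buffered request to the next
    server only when the previous attempt failed outright — i.e. the
    failure occurred before anything reached the client, so a retry
    cannot duplicate a delivered response."""
    if not servers:
        raise ValueError("no servers configured")
    last_error = None
    for server in servers:
        try:
            return send(server, request_bytes)
        except ConnectionError as err:
            last_error = err  # nothing reached the client: safe to retry
    raise last_error          # all servers failed; surface the error
```

For example, with a `send` that raises for the first server and succeeds on the second, the client receives the second server's response and never sees the failure — which is exactly the transparency the reader asked about, within the narrow window where it is safe.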

It is architecturally possible to provide more advanced functionality than that addressed above. As such needs become more evident in the future, the load balancers with flexible request/response processing architectures (as described earlier) may evolve to provide more advanced recovery from server-side failures – while mitigating adverse side effects from any application layer attacks.

— Prabakar Sundarrajan, Chief Technical Officer and Executive Vice President, NetScaler Inc.
