HTTP/2’s Role in Solving Implementation Gaps and Improving UX


In the beginning of web browsing as we recognize it today, complete with embedded graphics and blinky tags, there was NCSA Mosaic. The web was a simpler place. It was indexed mostly by hand for search, traffic was low, and pages were straightforward. Most importantly, the number of objects on a page (meaning the number of components that must be fetched in order to display it) was low. At the same time, the 14.4 Kbps modem was in common use, and connections would be dropped when a parent or roommate picked up the phone at the wrong time. That is to say, the web ran at a slower pace, Mosaic had an easy time fetching and displaying the needed objects, and user expectations were low.

Following Mosaic, HTTP/1.x was in use for many years and did not make the browser’s job any easier. Most notably, it could send only one request at a time on a connection, then had to wait for the response before sending another. Browsers worked around this limitation by opening many connections to a server so they could do some of the work in parallel, but each individual connection still suffered from the same head-of-line blocking. The criticality of an object helped the browser decide whether it should be requested early or late.
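
To make the head-of-line blocking concrete, here is a minimal sketch using Python’s standard library (the host and paths are placeholders, not from any real page). On an HTTP/1.x connection, each response must be read in full before the next request can be sent:

import http.client

# One HTTP/1.1 connection: requests and responses strictly alternate.
conn = http.client.HTTPConnection("example.com")

for path in ("/styles.css", "/app.js", "/logo.png"):
    conn.request("GET", path)      # send one request...
    resp = conn.getresponse()      # ...then block until its response arrives
    body = resp.read()             # the connection is tied up the whole time
    print(path, resp.status, len(body))

conn.close()

A slow response for the first object delays everything queued behind it, which is exactly why browsers resorted to opening several parallel connections per host.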

Times (and expectations) have changed

Today, according to the HTTP Archive, mobile websites (not desktop) have a median of 70 requests per page for a total of 1.7 MB. On top of that, more than 75 percent of those requests are over HTTPS (i.e., encrypted and authenticated), which means even more (very worthwhile) work for the little phone that could. Mobile browsers today have a tough job providing users with the instantaneous experiences they expect.

A significant part of browser development goes into the decision process that determines which objects on a page are critical. Critical objects, in short, are the ones needed to start rendering the page. Poor prioritization of these critical objects can lead to a jarring experience: stylesheets, for example, can change how the page flows, or even leave users staring at a blank screen wondering what is going on in the background. To complicate things further, it is not simply a case of “get the stylesheets first, then the JavaScript”; the objects can form a tree of complicated dependencies that only emerges as they load.
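
A toy model helps show why the dependency tree only emerges during loading. In this hypothetical page (all resource names invented for illustration), the browser cannot know about the font or the hero image until the stylesheet has been fetched and parsed:

# Hypothetical page: each resource reveals its children only once it loads.
DISCOVERED_BY = {
    "index.html": ["styles.css", "app.js"],
    "styles.css": ["font.woff2", "hero.jpg"],  # visible only after parsing the CSS
    "app.js":     ["data.json"],               # visible only after running the JS
}

def crawl(root):
    """Breadth-first discovery: children are unknown until the parent arrives."""
    queue, order = [root], []
    while queue:
        obj = queue.pop(0)
        order.append(obj)
        queue.extend(DISCOVERED_BY.get(obj, []))
    return order

print(crawl("index.html"))
# ['index.html', 'styles.css', 'app.js', 'font.woff2', 'hero.jpg', 'data.json']

Every level of that tree adds a round of discovery, so the browser is constantly re-deciding which outstanding fetch matters most.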

HTTP/2 addresses gaps with multiplexing and prioritization

HTTP/2 was developed in part to address the bottlenecks that HTTP/1.1 was unable to solve. Its core differentiator is that it provides a number of important tools for improving performance, such as multiplexing, prioritization, header compression, and server push. Multiplexing and prioritization, specifically, are the most critical to addressing the rendering complications discussed above.

Multiplexing allows the browser to send many requests at the same time, and the server to respond in any order, even interleaving the returned data. Among other things, this makes the request side more efficient and frees the browser from the head-of-line blocking that plagued HTTP/1.x. By not having to wait out the back-and-forth of requests and responses, the server can get more data down to the browser faster.
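
As a hedged sketch of what multiplexing looks like from the client side, the third-party httpx library (installed with pip install "httpx[http2]"; the URLs here are placeholders) can fire several requests over a single HTTP/2 connection before any response has arrived:

import asyncio
import httpx

async def main():
    # One client, one TCP connection, many concurrent HTTP/2 streams.
    async with httpx.AsyncClient(http2=True) as client:
        urls = ["https://example.com/styles.css",
                "https://example.com/app.js",
                "https://example.com/logo.png"]
        # gather() sends all three requests up front; the server may
        # interleave the responses on the shared connection.
        responses = await asyncio.gather(*(client.get(u) for u in urls))
        for r in responses:
            print(r.url, r.http_version, r.status_code)

asyncio.run(main())

Under HTTP/1.1 the same code would need a pool of connections to achieve similar concurrency; here a single connection carries all three streams.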

The necessary partner of multiplexing is prioritization. Prioritization gives the browser a way to tell the server which objects are more important than others, and what the dependencies between those objects are. Without it, the information might get from the server to the browser faster overall, but the jumbled mix of critical and non-critical objects could leave the browser taking longer to start painting the page on the screen. Getting this to work relies on solid implementations in both the browser and the server; getting it wrong can degrade the experience for users.
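
On the wire, these hints travel as priority fields attached to each stream. Here is a minimal sketch using the third-party h2 library (pip install h2; host and paths invented for illustration) of a client marking its stylesheet stream as heavier than an image stream that depends on it:

import h2.config
import h2.connection

config = h2.config.H2Configuration(client_side=True)
conn = h2.connection.H2Connection(config=config)
conn.initiate_connection()

# Stream 1: the critical stylesheet, given a very high weight.
conn.send_headers(1, [(":method", "GET"), (":path", "/styles.css"),
                      (":scheme", "https"), (":authority", "example.com")],
                  end_stream=True, priority_weight=255)

# Stream 3: an image, explicitly dependent on the stylesheet stream.
conn.send_headers(3, [(":method", "GET"), (":path", "/hero.jpg"),
                      (":scheme", "https"), (":authority", "example.com")],
                  end_stream=True, priority_weight=32, priority_depends_on=1)

wire_bytes = conn.data_to_send()  # raw frames, ready to write to a TLS socket

A server that honors these fields will spend its bandwidth on the stylesheet before the image; one that ignores them sends whatever is handy, which is precisely the failure mode discussed next.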

A great example of this can be seen in the work of Pat Meenan, creator of WebPageTest, and his HTTP/2 priority test. The test sets up a page that causes the browser to discover high-priority resources after it has already requested lower-priority ones. The proper behavior is for the browser to immediately request those high-priority resources with the correct dependencies indicated; the server, in turn, should make certain the new requests are handled right away. The results of the tests? Not good overall.
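
As a rough sketch of the late-discovery scenario the test exercises (again with the h2 library and invented resource names), an image is already in flight when the parser finds a render-blocking script, so the client opens a new high-priority stream and re-points the image’s dependency at it:

import h2.connection

conn = h2.connection.H2Connection()  # defaults to a client-side connection
conn.initiate_connection()

# Stream 1: a low-priority image requested early in the page load.
conn.send_headers(1, [(":method", "GET"), (":path", "/hero.jpg"),
                      (":scheme", "https"), (":authority", "example.com")],
                  end_stream=True, priority_weight=16)

# The parser then discovers a render-blocking script: open it on a new
# stream with a high weight...
conn.send_headers(3, [(":method", "GET"), (":path", "/critical.js"),
                      (":scheme", "https"), (":authority", "example.com")],
                  end_stream=True, priority_weight=255)

# ...and emit a PRIORITY frame re-parenting the image under the script,
# so a compliant server finishes the script first.
conn.prioritize(1, weight=16, depends_on=3)

The trouble the test surfaces is that many servers accept these frames and then serve the streams in whatever order suits them.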

Andy Davies maintains a GitHub repository that tracks the results from various servers, proxies, and content delivery networks (CDNs). The bottom line: very few of them perform well in this fairly common scenario. What does that mean for users? It means they are getting suboptimal experiences today over HTTP/2. The protocol that was supposed to speed us up is, in some cases, slowing us down because of poor implementations. The good news is that, now that the problem has been identified, it will get fixed over time.

Looking ahead to HTTP/3

Protocols are developed to solve the network problems of today. The best ones are able to peer into a crystal ball and make solid predictions about the trends to come. Eventually, the infrastructure and our usage patterns change enough that a protocol cannot keep up, and revising and re-inventing becomes necessary. Even the venerable TCP, without which none of us would be where we are today, is about to be supplanted by QUIC, the transport beneath HTTP-over-QUIC (now renamed HTTP/3), to better address the variability of the Internet. HTTP/2 is a great step forward on this path of progress; looking closely at it can provide great insight into just how far we have come, and some thoughts on where we still need to go.