You are currently viewing a snapshot of www.mozilla.org taken on April 21, 2008. Most of this content is highly out of date (some pages haven't been updated since the project began in 1998) and exists for historical purposes only. If there are any pages on this archive site that you think should be added back to www.mozilla.org, please file a bug.



HTTP/1.1 Pipelining FAQ

What is HTTP pipelining?

Normally, HTTP requests are issued sequentially, with the next request being issued only after the response to the current request has been completely received. Depending on network latencies and bandwidth limitations, this can result in a significant delay before the next request is seen by the server.
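The cost of that delay is easy to see with back-of-the-envelope arithmetic. The round-trip time and request count below are illustrative assumptions, not measurements:

```python
# Hypothetical numbers: comparing sequential vs. pipelined request
# timing over a high-latency link.
RTT = 0.200        # assumed round-trip time, in seconds
N_REQUESTS = 10    # assumed number of requests to load the page

# Sequential: each request waits for the previous response, costing
# one full round trip per request.
sequential_time = N_REQUESTS * RTT   # 2.0 seconds

# Pipelined: all requests go out together, so roughly one round trip
# covers them all (ignoring transfer time, which both cases share).
pipelined_time = RTT                 # 0.2 seconds
```

On a 200 ms link the sequential page load spends two full seconds just waiting on round trips, while the pipelined load spends one round trip regardless of the number of requests.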

HTTP/1.1 allows multiple HTTP requests to be written out to a socket together without waiting for the corresponding responses. The requestor then waits for the responses to arrive in the order in which they were requested. The act of pipelining the requests can result in a dramatic improvement in page loading times, especially over high latency connections.
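A minimal sketch of the idea (not Mozilla's actual implementation; the host, paths, and function name are illustrative): several GET requests are serialized into one buffer that can be handed to the socket in a single write, and the responses are then read back in the same order.

```python
def build_pipeline(host, paths):
    """Serialize several HTTP/1.1 GET requests into one byte string."""
    requests = []
    for path in paths:
        requests.append(
            f"GET {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            f"Connection: keep-alive\r\n"
            "\r\n"
        )
    # A single socket.sendall() of this buffer issues every request
    # without waiting for any response; responses then arrive in
    # request order on the same connection.
    return "".join(requests).encode("ascii")

buf = build_pipeline("example.com", ["/", "/style.css", "/logo.png"])
```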

Pipelining can also dramatically reduce the number of TCP/IP packets. With a typical MSS (maximum segment size) in the range of 536 to 1460 bytes, it is possible to pack several HTTP requests into one TCP/IP packet. Reducing the number of packets required to load a page benefits the internet as a whole, as fewer packets naturally reduces the burden on IP routers and networks.
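Rough arithmetic illustrates the packing, using an assumed request of typical size:

```python
# An example GET request of fairly typical size (76 bytes).
request = (
    b"GET /images/logo.png HTTP/1.1\r\n"
    b"Host: example.com\r\n"
    b"Connection: keep-alive\r\n"
    b"\r\n"
)

# How many such requests fit in one TCP segment at common MSS values.
fit_536 = 536 // len(request)    # 7 requests per segment
fit_1460 = 1460 // len(request)  # 19 requests per segment
```

Even at the small end of the MSS range, seven requests fit in a single segment that would otherwise have required seven separate packets (plus their ACKs).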

HTTP/1.1-conforming servers are required to support pipelining. This does not mean that servers must pipeline their responses, only that they must not fail when a client chooses to pipeline requests. This obviously has the potential to introduce a new category of evangelism bugs, since no other popular web browser implements pipelining.

When should we pipeline requests?

Only idempotent requests, such as GET and HEAD, can be pipelined; POST and PUT requests should not be. We also should not pipeline requests on a new connection, since it has not yet been determined whether the origin server (or proxy) supports HTTP/1.1. Hence, pipelining can only be done when reusing an existing keep-alive connection.
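The eligibility test above can be sketched as a small predicate (the function name is hypothetical, not from any browser source):

```python
# Methods that are safe to replay if the pipeline must be restarted.
IDEMPOTENT_METHODS = {"GET", "HEAD"}

def may_pipeline(method, connection_is_reused_keepalive):
    """Return True only for idempotent requests on a connection that has
    already proven itself to be a working HTTP/1.1 keep-alive connection."""
    if method not in IDEMPOTENT_METHODS:
        return False  # POST and PUT must not be pipelined
    # Never pipeline on a fresh socket: the server's HTTP/1.1 support
    # has not yet been confirmed.
    return connection_is_reused_keepalive
```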

How many requests should be pipelined?

Pipelining many requests can be costly if the connection closes prematurely: time has been wasted writing requests to the network, only for them to be repeated on a new connection. Moreover, a longer pipeline can actually cause user-perceived delays if earlier requests take a long time to complete. The HTTP/1.1 spec does not provide any guidelines on the ideal number of requests to pipeline. It does, however, suggest a limit of no more than 2 keep-alive connections per server. Clearly, the right number depends on the application. A web browser probably doesn't want a very long pipeline for the reasons mentioned above; 2 may be an appropriate value, but this remains to be tested.
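One way a client might enforce a small pipeline depth, sketched under the assumptions above (the depth of 2 is the untested value the text suggests, and the helper is hypothetical):

```python
MAX_PIPELINE_DEPTH = 2  # assumed tuning value; untested, per the text

def batch_requests(queue, depth=MAX_PIPELINE_DEPTH):
    """Group a queue of pending requests into pipelines of at most
    `depth` requests each, preserving request order."""
    return [queue[i:i + depth] for i in range(0, len(queue), depth)]

batches = batch_requests(["/a", "/b", "/c", "/d", "/e"])
# batches == [["/a", "/b"], ["/c", "/d"], ["/e"]]
```

Keeping each batch short bounds both the work lost if the connection closes and the delay a slow early response can impose on later ones.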

What happens if a request is canceled?

If a request is canceled, does this mean that the entire pipeline is canceled? Or does it mean that the response for the canceled request should simply be discarded, so that the other requests in the pipeline need not be repeated? The answer depends on several factors, including how much of the response to the canceled request has yet to be received. A naive approach is simply to cancel the entire pipeline and re-issue all outstanding requests. This is safe only because the requests are idempotent. The naive approach may also make good sense, since the requests being pipelined likely belong to the same load group (page) being canceled.
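The naive recovery policy described above can be sketched as follows (a hypothetical helper, safe only because pipelined requests are idempotent): drop the pipeline and re-issue every request that has not yet been answered, minus the canceled one.

```python
def requests_to_reissue(pipeline, responses_received, canceled):
    """After tearing down a pipeline, return the requests still owed a
    response, excluding the canceled request itself."""
    # Requests whose responses were already fully received need no replay.
    outstanding = pipeline[responses_received:]
    return [req for req in outstanding if req != canceled]

pending = requests_to_reissue(
    ["/a", "/b", "/c", "/d"], responses_received=1, canceled="/c")
# pending == ["/b", "/d"]
```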

What happens if a connection fails?

If a connection fails or is dropped by the server partway through downloading a pipelined response, the web browser must be capable of restarting the lost requests. This case can be handled naively in the same way as the cancelation case discussed above.
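Applied to a dropped connection, the same naive recovery looks like this sketch (hypothetical helper; a partially received response counts as not received, so its request is replayed on a new connection):

```python
def restart_after_failure(pipeline, complete_responses):
    """Return the requests that must be re-sent on a fresh connection
    after the old one failed: everything not yet fully answered."""
    return pipeline[complete_responses:]

retry = restart_after_failure(["/a", "/b", "/c"], complete_responses=1)
# retry == ["/b", "/c"]
```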

Darin Fisher, <darin at meer dot net>