HTTP 1 vs HTTP 1.1 vs HTTP 2 vs HTTP 3

Deepak Rai
4 min read · Apr 2, 2022

HTTP 1

The Hypertext Transfer Protocol (HTTP) is an application protocol that has been the de facto standard for communication on the World Wide Web. HTTP/1.0 was formally introduced and recognized in 1996.

HTTP/1.0 was designed to open a new TCP connection for every request, so each request paid the cost of establishing a fresh TCP connection. Because most web transactions are short and rarely get beyond TCP's slow-start phase, they make poor use of the available bandwidth. Although some HTTP/1.0 implementations used a "keep-alive" header to ask that the connection be kept open, this did not work well with intermediary proxies.
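To make the one-connection-per-request model concrete, here is a minimal sketch using Python's standard socket module. The host and paths are placeholders, and real code would need error handling; the point is that every request triggers its own TCP handshake.

```python
import socket

def http10_get(host: str, path: str) -> bytes:
    # Every HTTP/1.0 request pays for a fresh TCP handshake.
    sock = socket.create_connection((host, 80))
    try:
        request = f"GET {path} HTTP/1.0\r\nHost: {host}\r\n\r\n"
        sock.sendall(request.encode("ascii"))
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:      # an HTTP/1.0 server closes the connection
                break         # to signal the end of the response
            chunks.append(data)
        return b"".join(chunks)
    finally:
        sock.close()

# Two requests -> two separate TCP connections, two handshakes.
for path in ("/", "/about"):
    http10_get("example.com", path)
```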

HTTP 1.1

HTTP/1.1 solves this problem by introducing persistent connections and pipelining. With persistent connections, HTTP/1.1 assumes that a TCP connection should be kept open unless it is explicitly told to close. This lets the client send multiple requests over the same connection without waiting for each one to be answered, greatly improving the performance of HTTP/1.1 over HTTP/1.0.
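A minimal sketch of connection reuse with the standard library's http.client, again against a placeholder host. Note that this reuses the connection serially (each response is read before the next request); true pipelining would send requests back-to-back, which http.client does not support.

```python
from http.client import HTTPConnection

conn = HTTPConnection("example.com", 80)  # one TCP connection for everything
for path in ("/", "/about"):
    conn.request("GET", path)             # reuses the already-open connection
    response = conn.getresponse()
    body = response.read()                # drain the response before the next request
    print(path, response.status, len(body))
conn.close()                              # an explicit close ends the session
```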

Unfortunately, this optimization has an inherent bottleneck. Because data packets cannot pass one another in transit, a request at the front of the queue that fails to get its resource blocks every request behind it.

Main Differences Between HTTP 1.0 and HTTP 1.1

HTTP/1.0 handles caching mostly through simple headers such as Expires, while HTTP/1.1 introduces a more sophisticated cache-management approach (Cache-Control, entity tags, and conditional requests).
HTTP/1.0 wastes some bandwidth by always transferring whole resources; HTTP/1.1 wastes less, in part thanks to range requests that let a client fetch only part of a resource.
HTTP/1.1 request and response messages support the Host header field, which lets many websites share one IP address (virtual hosting), whereas HTTP/1.0 assumes that each server is bound to a distinct IP address (see the sketch after this list).
In HTTP/1.0 each TCP connection carries exactly one request and one response, whereas HTTP/1.1 enables connection reuse.
Spriting, concatenating, inlining, and domain sharding are workarounds commonly used with HTTP/1.1 to cut down the number of requests, while HTTP/1.0 relies mainly on caching to serve websites faster.
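Here is a hypothetical sketch of how the Host header enables virtual hosting. The IP address and site names below are placeholders: the client opens a socket to the same IP both times, and only the Host header tells the server which site should answer.

```python
import socket

SERVER_IP = "203.0.113.10"   # placeholder address shared by both sites

def fetch_site(host: str) -> bytes:
    # Same IP and port every time; only the Host header changes, and the
    # server uses it to pick the right virtual host.
    sock = socket.create_connection((SERVER_IP, 80))
    try:
        request = (
            f"GET / HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            f"Connection: close\r\n\r\n"
        )
        sock.sendall(request.encode("ascii"))
        response = b""
        while chunk := sock.recv(4096):
            response += chunk
        return response
    finally:
        sock.close()

# Two different sites, one IP address.
fetch_site("blog.example")
fetch_site("shop.example")
```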

HTTP 2.0

HTTP/2 began as the SPDY protocol, developed primarily at Google with the intention of reducing web page load latency by using techniques such as compression, multiplexing, and prioritization.

Let's try to understand HTTP/1.0, HTTP/1.1, and HTTP/2.0 with a small example.

The first response that a client receives on an HTTP GET request is often not the fully rendered page. Instead, it contains links to additional resources needed by the requested page. The client discovers that the full rendering of the page requires these additional resources from the server only after it downloads the page. Because of this, the client will have to make additional requests to retrieve these resources. In HTTP/1.0, the client had to break and remake the TCP connection with every new request, a costly affair in terms of both time and resources.

As described above, HTTP/1.1 takes care of this with persistent connections: the TCP connection stays open, so the client can send each follow-up request without breaking and remaking the connection.

Unfortunately, there is a natural bottleneck to this optimization strategy. Since multiple data packets cannot pass each other when traveling to the same destination, a request at the head of the queue that cannot retrieve its required resource will block all the requests behind it. This is known as head-of-line (HOL) blocking, and it is a significant obstacle to optimizing connection efficiency in HTTP/1.1. Adding separate, parallel TCP connections can alleviate the issue, but there are limits to the number of concurrent TCP connections between a client and a server, and each new connection requires significant resources.

In HTTP/2, the binary framing layer encodes requests and responses and cuts them into smaller frames, which are interleaved as independent streams over a single connection, greatly increasing the flexibility of data transfer.
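A short sketch of that multiplexing from the client's point of view, using the third-party httpx library (installed with `pip install httpx[http2]`). The URLs are placeholders; the three requests are issued concurrently and share one connection, with their frames interleaved as independent streams.

```python
import asyncio
import httpx

async def main() -> None:
    # One client, one connection; http2=True enables the binary framing layer.
    async with httpx.AsyncClient(http2=True) as client:
        urls = [
            "https://example.com/a",
            "https://example.com/b",
            "https://example.com/c",
        ]
        # gather() fires the requests concurrently; over HTTP/2 they become
        # separate streams multiplexed on a single TCP connection.
        responses = await asyncio.gather(*(client.get(u) for u in urls))
        for r in responses:
            print(r.url, r.http_version, r.status_code)

asyncio.run(main())
```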

Disadvantages of HTTP 2.0

The HTTP/2 server push feature is also known to be tricky for developers to implement and integrate into existing applications.

While HTTP/2 addressed head-of-line blocking at the HTTP layer, blocking at the TCP level still causes issues: a single lost TCP packet stalls every stream on the connection. HTTP/3 improves on this considerably.

Using many concurrent requests increases server load, which can lead to request timeouts.

For clients on a slow network, packet loss degrades the single HTTP/2 connection that all streams share, so the whole transfer slows down and large loads of data are held up together.

HTTP 3.0

While HTTP/1.1 and HTTP/2 are essentially "HTTP over TCP", HTTP/3 runs over QUIC (Quick UDP Internet Connections). HTTP/3 is a newer, faster protocol that should deliver improved security and speed to the web. In HTTP/2, when multiple requests share a single TCP connection and one of them loses a packet, the server has to wait for the client to resend that packet before it can properly assemble the rest of the data that arrived after it. In HTTP/3, the server goes ahead and assembles every request except the one affected by the packet loss.
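A toy model of that difference, not a real protocol implementation: each tuple below is an assumed (stream_id, sequence_number) packet, and packet (2, 1) is "lost" in transit. TCP's single ordered byte stream stalls everything behind the gap, while QUIC's per-stream loss recovery holds back only the affected stream.

```python
# Packets that actually arrive; (2, 1) never does.
received = [(1, 0), (1, 1), (2, 0), (3, 0), (3, 1)]

def tcp_deliverable(packets, lost_after=3):
    # TCP delivers one ordered byte stream: everything sent after the
    # missing packet waits in the receive buffer, regardless of stream.
    return packets[:lost_after]

def quic_deliverable(packets, lost_stream=2):
    # QUIC tracks loss per stream: only stream 2 waits for retransmission;
    # streams 1 and 3 are delivered in full.
    return [p for p in packets if p[0] != lost_stream]

print("TCP delivers: ", tcp_deliverable(received))   # stalls streams 1, 2, 3
print("QUIC delivers:", quic_deliverable(received))  # stalls only stream 2
```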

Advantages of HTTP 3.0

In standard HTTP+TLS+TCP, TCP needs a handshake to establish a session between server and client, and TLS needs its own handshake to secure that session. QUIC needs only a single handshake to establish a secure session. Just like that, connection establishment time is cut in half.
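Some back-of-the-envelope math under simple assumptions: the TCP handshake costs one round trip, TLS 1.3 adds another, and QUIC folds transport and TLS 1.3 setup into a single round trip. The 50 ms round-trip time is an assumed value for illustration.

```python
rtt_ms = 50  # assumed network round-trip time

tcp_tls_setup = 1 * rtt_ms + 1 * rtt_ms  # TCP handshake + TLS 1.3 handshake
quic_setup = 1 * rtt_ms                  # combined QUIC + TLS 1.3 handshake

print(f"TCP+TLS 1.3 setup: {tcp_tls_setup} ms")  # 100 ms
print(f"QUIC setup:        {quic_setup} ms")     # 50 ms, i.e. cut in half
```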

HTTP/2 could not solve the latency problems of lossy, slow connections. To address this, QUIC provides native multiplexing, so lost packets affect only the streams whose data was dropped rather than stalling the entire connection.
