HTTP/2 vs HTTP/1

What is HTTP?

Hypertext Transfer Protocol (HTTP) is an application-layer protocol that currently serves as the foundation of data communication for the World Wide Web.

HTTP is based on the client/server model. In this model, two computers, each with its own IP address, communicate via requests and responses.

The client (the receiver of the service) sends requests, and the server (the provider of the service) answers them with responses.

A simple and abstract example would be a restaurant guest and a waiter. The guest (client) asks (sends a request to) the waiter (server) for a meal, the waiter gets the meal from the restaurant's chef (your application logic) and brings it to the guest.

This is a very simplistic example, but it will help you understand the concept.
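
To make the request/response exchange concrete, here is a minimal sketch of a server in Go. The /meal path and the cookMeal helper are made-up names for illustration; the handler plays the waiter, and cookMeal stands in for the chef (your application logic).

```go
package main

import (
	"fmt"
	"net/http"
)

// cookMeal stands in for the "chef": the application logic that
// actually produces the content the client asked for.
func cookMeal() string {
	return "soup of the day"
}

// mealHandler plays the "waiter": it receives the guest's request,
// asks the chef for the meal, and serves it back as the response.
func mealHandler(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintln(w, cookMeal())
}

func main() {
	http.HandleFunc("/meal", mealHandler)
	// The server waits for client requests and answers with responses.
	http.ListenAndServe(":8080", nil)
}
```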


What is HTTP/2?

In 2015, the Internet Engineering Task Force (IETF) released HTTP/2, the second major version of HTTP, the most widely used internet protocol.

The main goals of developing HTTP/2 were:

  • Protocol negotiation mechanism – lets the client and server agree on which protocol to use, e.g. HTTP/1.1, HTTP/2 or another (see the sketch after this list).
  • High-level compatibility with HTTP/1.1 – methods, status codes, URIs and header fields.
  • Page load speed improvements through:
    • Compression of request headers
    • Binary protocol
    • HTTP/2 Server Push
    • Request multiplexing over a single TCP connection
    • Request pipelining
    • Removal of head-of-line (HOL) blocking at the HTTP level – one delayed response no longer blocks the responses queued behind it
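
As a rough illustration of the protocol negotiation point above, the sketch below uses Go's standard net/http client with a placeholder URL. Over TLS the client and server agree on a protocol via ALPN, and Go's default client will speak HTTP/2 whenever the server offers it; resp.Proto reports what was actually negotiated.

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Over TLS, the protocol is negotiated via ALPN; Go's default
	// client uses HTTP/2 when the server advertises support for it.
	resp, err := http.Get("https://example.com/") // placeholder URL
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// resp.Proto is the protocol that was actually used,
	// e.g. "HTTP/2.0" or "HTTP/1.1".
	fmt.Println("Negotiated protocol:", resp.Proto)
}
```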

Benefits:

  • Low overhead in parsing data – a critical value proposition in HTTP/2 vs HTTP/1.
  • Less prone to errors.
  • Lighter network footprint.
  • Effective network resource utilization.
  • Eliminating security concerns associated with the textual nature of HTTP/1.x, such as response splitting attacks.
  • Enables other HTTP/2 capabilities, including compression, multiplexing, prioritization, flow control and effective handling of TLS.
  • Compact representation of commands for easier processing and implementation.
  • Efficient and robust in terms of processing of data between client and server.
  • Reduced network latency and improved throughput.

HTTP/2 Server Push

This capability allows the server to send additional cacheable information to the client that isn’t requested but is anticipated in future requests. For example, if the client requests resource X and it is known that resource Y is referenced by the requested file, the server can choose to push Y along with X instead of waiting for the corresponding client request.
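
A minimal sketch of what this can look like with Go's standard library is shown below. The file paths and port are placeholders; note that server push only works when the connection is HTTP/2 over TLS, which is why the handler has to check whether the ResponseWriter also implements http.Pusher.

```go
package main

import (
	"log"
	"net/http"
)

func indexHandler(w http.ResponseWriter, r *http.Request) {
	// On an HTTP/2 connection the ResponseWriter also implements
	// http.Pusher, so we can push resources the page is known to need.
	if pusher, ok := w.(http.Pusher); ok {
		// Push resource Y (a stylesheet, placeholder path) together
		// with the requested resource X (the HTML page).
		if err := pusher.Push("/static/style.css", nil); err != nil {
			log.Printf("server push failed: %v", err)
		}
	}
	http.ServeFile(w, r, "index.html") // placeholder file
}

func main() {
	http.HandleFunc("/", indexHandler)
	// HTTP/2 (and therefore server push) requires TLS here;
	// cert.pem and key.pem are placeholder certificate files.
	log.Fatal(http.ListenAndServeTLS(":8443", "cert.pem", "key.pem", nil))
}
```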

If you remember the story about the restaurant guest and the waiter, it also illustrates the difference between HTTP/1.1 and HTTP/2, with a slight twist. Imagine that waiters are TCP connections and you want to order a meal and a bottle of water. With HTTP/1.1, you would ask one waiter for the meal and another for the water, allocating two TCP connections. With HTTP/2, you ask a single waiter for both and he brings them separately. You allocate only one TCP connection, which already lowers the server load, and the server has one extra free connection (waiter) for the next client (guest).
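
To see the "one waiter, many orders" idea in code, the sketch below (Go, placeholder URL) issues two requests with the same client and uses net/http/httptrace to report whether the second request reused the already-open connection. With HTTP/1.1 keep-alive a sequential request can also reuse the connection; the HTTP/2 difference is that many requests can be in flight on that single connection at the same time.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptrace"
)

func fetch(client *http.Client, url string) {
	trace := &httptrace.ClientTrace{
		// GotConn reports whether this request opened a new TCP
		// connection or rode on one that was already open.
		GotConn: func(info httptrace.GotConnInfo) {
			fmt.Printf("%s reused connection: %v\n", url, info.Reused)
		},
	}
	req, err := http.NewRequest("GET", url, nil)
	if err != nil {
		panic(err)
	}
	req = req.WithContext(httptrace.WithClientTrace(req.Context(), trace))

	resp, err := client.Do(req)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	io.Copy(io.Discard, resp.Body) // drain the body so the connection can be reused
}

func main() {
	client := &http.Client{}
	url := "https://example.com/" // placeholder URL
	// The "meal" and the "bottle of water": two orders handled by
	// the same waiter (one TCP connection).
	fetch(client, url)
	fetch(client, url)
}
```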

The real difference between HTTP/1.1 and HTTP/2 becomes clear with the server push example.


Imagine that the guest (client) asks (sends a request to) the waiter (server) for a meal. The waiter gets the meal from the restaurant's chef (your application logic), but he also anticipates that you will need a bottle of water, so he brings it along with the meal. The end result is a single TCP connection and a single request, which significantly lowers the server load.