In this video, we'll compare HTTP/2 and HTTP/3
protocols. We'll use Terraform and Ansible to create the infrastructure in Google Cloud Platform, then configure and compile Nginx from source. For the first test, we'll use a plain HTML page with a bunch of images. For the second, more realistic test, I used my own website, which has lots of images and is heavy on JavaScript. To automate the tests, we'll use the Playwright Node.js framework and a headless Chromium browser. And, of course, we'll push metrics from the tests to the Prometheus
Pushgateway. Now, you can measure lots of stuff, for example with Lighthouse, but to keep this test simple and short, we'll use the PerformanceNavigationTiming API and measure only the page load time, from startTime to loadEventEnd. By using the same API, you can get more detailed metrics for the different events during the page load.
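As a rough sketch, here is what reading that entry looks like in the browser; the wrapping around it is illustrative, only the PerformanceNavigationTiming fields themselves come from the standard API:

```js
// Runs inside the page (for example via Playwright's page.evaluate).
// There is exactly one "navigation" entry per page load.
const [nav] = performance.getEntriesByType('navigation');

// Page load time: from the start of navigation to the end of the load event, in ms.
const pageLoadMs = nav.loadEventEnd - nav.startTime;

// The same entry also exposes finer-grained milestones and the negotiated protocol:
// nav.domainLookupEnd, nav.connectEnd, nav.responseEnd, nav.domContentLoadedEventEnd,
// and nav.nextHopProtocol ("h2" or "h3"/"h3-27").
console.log(`protocol=${nav.nextHopProtocol} load=${pageLoadMs}ms`);
```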
Of course, to visualize the metrics side by side, we'll use Grafana and pull them from Prometheus. The major difference between these two protocols
is that HTTP/3 uses the QUIC transport protocol, which is based on UDP. So when you create firewall rules, don't forget to open port 443 for UDP, not only for TCP. In GCP, in more or less large projects, you always create a shared VPC in a dedicated host project and share it with the other service projects. To configure the firewall rules, you can use CIDR ranges or network tags, but the recommended option is to create a dedicated service account and use it as the source or target.
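A minimal Terraform sketch of such a rule might look like this; the resource names, network reference, and service account email are placeholders for this example:

```hcl
# Allow HTTPS over both TCP and UDP (QUIC) to the nginx VMs.
resource "google_compute_firewall" "allow_https" {
  name    = "allow-https-tcp-udp"                # placeholder name
  network = google_compute_network.shared_vpc.id # placeholder network reference

  allow {
    protocol = "tcp"
    ports    = ["443"]
  }

  # Without this block, HTTP/3 (QUIC) traffic is silently dropped.
  allow {
    protocol = "udp"
    ports    = ["443"]
  }

  source_ranges           = ["0.0.0.0/0"]
  target_service_accounts = ["nginx-sa@my-project.iam.gserviceaccount.com"] # placeholder
}
```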
Now, the easiest way to check the protocol is to click Inspect and open the Network tab. For HTTP/3, you should see h3, and h2 for HTTP/2. Unfortunately, the Nginx HTTP/3 implementation
is not yet in the mainline branch. If you want to try it out, you need to clone this branch or download the corresponding tar archive. As you can see, it's under active development. In the next step, you need to download a few dependencies to compile Nginx, including one of the OpenSSL-compatible TLS libraries; in this case, I use LibreSSL. Then we configure Nginx, run make to compile it, copy a few files, and start Nginx.
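Very roughly, the steps look like the sketch below. The repository URL is the nginx QUIC development repo, but the LibreSSL version, paths, and configure flags are illustrative; check the branch's own README for the exact options it expects:

```sh
# Illustrative build of nginx from the QUIC development branch against LibreSSL.
hg clone -b quic https://hg.nginx.org/nginx-quic
curl -LO https://ftp.openbsd.org/pub/OpenBSD/LibreSSL/libressl-3.4.1.tar.gz
tar xzf libressl-3.4.1.tar.gz

cd nginx-quic
./auto/configure \
  --prefix=/etc/nginx \
  --with-http_ssl_module \
  --with-http_v2_module \
  --with-http_v3_module \
  --with-openssl=../libressl-3.4.1   # build against the LibreSSL sources

make -j"$(nproc)"
sudo make install                    # or copy objs/nginx and the configs by hand
sudo /etc/nginx/sbin/nginx
```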
You can find the source code in my GitHub repository. You can also use an Ansible dynamic inventory to run the playbooks against GCP. In the playbooks, you can then use the VMs' labels as Ansible host groups.
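A minimal sketch of such an inventory with the gcp_compute plugin; the project ID, the label key, and the compose expression are examples, not the exact ones from my repository:

```yaml
# inventory.gcp.yml - GCP dynamic inventory (gcp_compute plugin)
plugin: gcp_compute
projects:
  - my-project                # placeholder project ID
auth_kind: application        # use Application Default Credentials

# Turn each VM's "role" label into a host group,
# e.g. a VM labeled role=nginx ends up in the group "nginx".
keyed_groups:
  - key: labels.role
    separator: ""

hostnames:
  - name
compose:
  ansible_host: networkInterfaces[0].accessConfigs[0].natIP
```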
For HTTP/2, we need to enable it explicitly and, just for this test, disable caching on the browser side. In both the HTTP/2 and HTTP/3 configs, we use TLSv1.3, since HTTP/3 depends on it. For HTTP/3, we use the http3 directive along with reuseport, which lets the kernel distribute connections across a separate listening socket for each worker process. And the main mechanism to upgrade a client to HTTP/3 is the Alt-Svc header, shown in the config sketch below. Its exact value also depends on the implementation; for example, h3-27 refers to draft 27 of the HTTP/3 protocol.
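Putting those pieces together, a server block for this setup might look roughly like the following; the certificate paths and the h3-27 draft value are examples, and newer snapshots (and now mainline nginx) spell the listen parameter quic instead of http3:

```nginx
server {
    # HTTP/3 over QUIC (UDP 443); reuseport gives each worker its own listening socket.
    listen 443 http3 reuseport;

    # HTTP/2 fallback over TCP 443.
    listen 443 ssl http2;

    server_name example.com;                       # placeholder
    ssl_certificate     /etc/nginx/ssl/cert.pem;   # placeholder paths
    ssl_certificate_key /etc/nginx/ssl/key.pem;

    # HTTP/3 requires TLS 1.3.
    ssl_protocols TLSv1.3;

    # Advertise HTTP/3 so the browser upgrades on the next request.
    add_header Alt-Svc 'h3-27=":443"; ma=86400';

    # Only for the benchmark: keep the browser from caching anything.
    add_header Cache-Control "no-store";

    root /var/www/html;                            # placeholder
}
```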
As I mentioned, to run the tests, we'll use the Playwright framework with the Prometheus client to push metrics to the Pushgateway. The test is not very pretty, but it does the job: spin up a new instance of the headless Chromium browser, load the page, send the metrics to the Pushgateway, and quit. Then repeat this as many times as you want.
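Stripped down, the test looks something like this; the target URL, metric name, job name, and Pushgateway address are examples, while the Playwright and prom-client calls themselves are standard:

```js
// load-test.js - measure page load time with Playwright and push it to the Pushgateway.
const { chromium } = require('playwright');
const client = require('prom-client');

const registry = new client.Registry();
const loadTime = new client.Gauge({
  name: 'page_load_time_ms',                       // example metric name
  help: 'Time from startTime to loadEventEnd, in milliseconds',
  labelNames: ['protocol'],
  registers: [registry],
});
const gateway = new client.Pushgateway('http://127.0.0.1:9091', {}, registry);

(async () => {
  const browser = await chromium.launch();         // fresh headless Chromium instance
  const page = await browser.newPage();
  await page.goto('https://example.com/', { waitUntil: 'load' });

  // Read the navigation timing entry inside the page.
  const { protocol, ms } = await page.evaluate(() => {
    const [nav] = performance.getEntriesByType('navigation');
    return { protocol: nav.nextHopProtocol, ms: nav.loadEventEnd - nav.startTime };
  });

  loadTime.set({ protocol }, ms);
  await gateway.pushAdd({ jobName: 'page-load-test' }); // prom-client v13+ returns a promise
  await browser.close();
})();
```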
In this project, I use the us-central1 GCP region, and I am physically located in California, just to give you a perspective on the latency. To record the metrics, you also need to run Prometheus, the Pushgateway, and Grafana. To bring them up, use the docker-compose up command.
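A minimal docker-compose.yml for that monitoring stack could look like this; the image tags and the mounted Prometheus config are placeholders:

```yaml
version: "3.8"
services:
  prometheus:
    image: prom/prometheus
    ports:
      - "9090:9090"
    volumes:
      # prometheus.yml must have a scrape job pointing at pushgateway:9091
      - ./prometheus.yml:/etc/prometheus/prometheus.yml

  pushgateway:
    image: prom/pushgateway
    ports:
      - "9091:9091"

  grafana:
    image: grafana/grafana
    ports:
      - "3000:3000"
```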
Before we run the test, let me show you what the first page looks like. It's a simple page with lots of images. Alright, let's go ahead and run the test. You can immediately notice that the HTTP/3 version takes a little bit more time to load this simple page than the HTTP/2 one. I'm not the only one who has noticed the performance degradation in the Nginx HTTP/3 implementation; you can find other benchmarks out there. Now, as I said, it's still very early days for HTTP/3 support in Nginx. I'm confident they will improve the performance and move it to the mainline branch, and then I'll test it again. To confirm that we actually use the HTTP/3 protocol, you can check the Nginx access logs, where you should find the HTTP/3 version. For the second test, I use my own personal website, with lots of JavaScript and images. Let's go ahead and run it. Unfortunately, the page load time difference is even larger. Keep in mind that since this is under active development, each new commit can affect the performance. You should test it yourself if you want to go to production with it, or wait until it graduates and moves to the mainline and stable branches of Nginx. I have a playlist with other benchmarks that
you may find interesting. Thank you for watching, and I'll see you in
the next video.