I. Research Objectives
HTTP2 has been around for quite a while, and there is no shortage of articles about it online. Judging from a search of what already exists, however, most articles lean toward introducing http2, and few actually analyze it with concrete data. This article aims to fill that gap: through a series of experiments and data analysis, it takes an in-depth look at http2's performance. Of course, given my limited skills, the experimental method is bound to have shortcomings. If you spot any problems, please point them out, and I will do my best to revise and improve the experiments!
II. Basic Knowledge
For the basic knowledge of HTTP2, please refer to the following articles, which will not be repeated here.
Through studying relevant materials, we have a general understanding of HTTP2. Next, we will design a model to test the performance of HTTP2 experimentally.
III. Experimental Design
Set up the experimental group: build an HTTP2 (SPDY) server that responds to requests over HTTP2, with both the size of the response body and the response delay configurable.
Set up the control group: build an HTTP1.x server that responds to requests over HTTP1.x, with the same configurable options as the experimental group. In addition, to reduce error, the HTTP1.x server also uses the HTTPS protocol, so that TLS overhead is present in both groups and the protocol version remains the only variable.
Test process: the client sends requests to the experimental server and the control server with configurable parameters such as response body size, number of requested resources, delay time, and uplink/downlink bandwidth, and records the time taken for all responses to complete.
Switching nginx to http2 requires upgrading the nginx version and obtaining an HTTPS certificate, and the various custom settings needed on the server side make the operation fairly involved, so nginx was ruled out as the experimental server; we adopted a NodeJS solution instead. In the initial stage of the experiment, the server was built with native NodeJS plus the node-http2 module, and it was later changed to the express framework plus the node-spdy module. The reasons are as follows: handling complex requests with native NodeJS is cumbersome, while express applies a series of optimizations to requests and responses that effectively reduce human error; in addition, the node-http2 module is not compatible with the express framework, its performance is lower than that of node-spdy (General performance, node-spdy vs node-http2 #98), and the node-spdy module offers essentially the same functionality as node-http2.
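To make the chosen setup concrete, here is a minimal sketch (not the full experiment code, which is attached at the end of the article) of how the same express app can be served over https for the control group and over node-spdy for the experimental group; the certificate paths and ports follow the full server code below:
// Sketch: one express app served as both the control (https) and experimental (spdy) server.
const fs = require('fs')
const https = require('https')    // control group: http1.x over TLS
const spdy = require('spdy')      // experimental group: http2 (SPDY)
const express = require('express')
const app = express()
const options = {
  key: fs.readFileSync(`${__dirname}/server.key`),
  cert: fs.readFileSync(`${__dirname}/server.crt`)
}
// Ports match the attached source: 1001 for http1.x, 1002 for http2.
https.createServer(options, app).listen(1001)
spdy.createServer(options, app).listen(1002)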
1. Server Construction
The server logic of the experimental group and the control group is identical; the key code is as follows:
app.get('/option/?', (req, res) => {
  allow(res)                          // set CORS headers (see the full server code below)
  let size = req.query['size']        // response body size, in MB
  let delay = req.query['delay']      // response delay, in ms
  let buf = new Buffer(size * 1024 * 1024)
  setTimeout(() => {
    res.send(buf.toString('utf8'))    // send a payload of the requested size after the delay
  }, delay)
})
The logic is to dynamically set the size and delay time of response resources according to the parameters passed in from the client.
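For example, a request such as /option?size=0.1&delay=10 makes the server allocate a buffer of 0.1 × 1024 × 1024 bytes (about 0.1 MB) and send it back after a 10 ms timeout.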
2. Client Building
The client can dynamically set the number of requests, the number of resources, the size of resources, and the server delay time. At the same time, with Chrome developer tools, different network environments can be artificially simulated. After the response to the resource request is completed, the total elapsed time is automatically calculated. The key codes are as follows:
for (let i = 0; i < reqNum; i++) {
  $.get(url, function (data) {
    imageLoadTime(output, pageStart)  // update the elapsed-time display on every response
  })
}
The client requests the resource repeatedly in a loop, and the number of requests is configurable. Each response callback calls imageLoadTime to update the elapsed time, which implements the time statistics.
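Because the requests are issued in parallel and every callback refreshes the timer, the value displayed once the last of the reqNum responses arrives is the total time needed to complete all of them.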
3. Experimental Project
a. HTTP2 Performance Study
From the background material in Section II, the factors affecting http2 performance can be attributed to "delay" and "number of requests". This experiment adds "resource size" and "network environment" as further factors and tests all four in detail. Each experiment is repeated 10 times and the average value is recorded.
b. Server Push Study
HTTP2 also has a distinctive feature: server push, which allows the server to proactively push resources to the client. This experiment also examines this feature, looking mainly at how server push is used and how it affects performance.
IV. HTTP2 Performance Data Statistics
1. Influence of Delay Factors on Performance
Condition / experiment no. | 1 | 2 | 3 | 4 | 5 |
---|---|---|---|---|---|
Delay time (ms) | 0 | 10 | 20 | 30 | 40 |
Number of resources (count) | 100 | 100 | 100 | 100 | 100 |
Resource size (MB) | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 |
Measured time (s): http1.x | 0.38 | 0.51 | 0.62 | 0.78 | 0.94 |
Measured time (s): http2 | 0.48 | 0.51 | 0.49 | 0.48 | 0.50 |
2. Impact of Number of Requests on Performance
From the previous experiment we know that at a 10 ms delay the measured times of http1.x and http2 are similar, so the delay in this experiment is set to 10 ms.
Condition / experiment no. | 1 | 2 | 3 | 4 | 5 |
---|---|---|---|---|---|
Delay time (ms) | 10 | 10 | 10 | 10 | 10 |
Number of resources (count) | 6 | 30 | 150 | 750 | 3750 |
Resource size (MB) | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 |
Measured time (s): http1.x | 0.04 | 0.16 | 0.63 | 3.03 | 20.72 |
Measured time (s): http2 | 0.04 | 0.16 | 0.71 | 3.28 | 19.34 |
Increase the delay time and repeat the experiment:
Condition / experiment no. | 6 | 7 | 8 | 9 | 10 |
---|---|---|---|---|---|
Delay time (ms) | 30 | 30 | 30 | 30 | 30 |
Number of resources (count) | 6 | 30 | 150 | 750 | 3750 |
Resource size (MB) | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 |
Measured time (s): http1.x | 0.07 | 0.24 | 1.32 | 5.63 | 28.82 |
Measured time (s): http2 | 0.07 | 0.17 | 0.78 | 3.81 | 18.78 |
3. Influence of Resource Volume on Performance
The two experiments above show that with a 10 ms delay and 30 resources the measured times of http1.x and http2 are similar, so this experiment fixes the delay at 10 ms and the number of resources at 30.
Condition / experiment no. | 1 | 2 | 3 | 4 | 5 |
---|---|---|---|---|---|
Delay time (ms) | 10 | 10 | 10 | 10 | 10 |
Number of resources (count) | 30 | 30 | 30 | 30 | 30 |
Resource size (MB) | 0.2 | 0.4 | 0.6 | 0.8 | 1.0 |
Measured time (s): http1.x | 0.21 | 0.37 | 0.59 | 0.68 | 0.68 |
Measured time (s): http2 | 0.25 | 0.45 | 0.61 | 0.83 | 0.73 |
Condition / experiment no. | 6 | 7 | 8 | 9 | 10 |
---|---|---|---|---|---|
Delay time (ms) | 10 | 10 | 10 | 10 | 10 |
Number of resources (count) | 30 | 30 | 30 | 30 | 30 |
Resource size (MB) | 1.2 | 1.4 | 1.6 | 1.8 | 2.0 |
Measured time (s): http1.x | 0.78 | 0.94 | 1.02 | 1.07 | 1.13 |
Measured time (s): http2 | 0.92 | 0.86 | 1.08 | 1.26 | 1.33 |
4. Influence of Network Environment on Performance
As before, with a 10 ms delay and 30 resources the measured times of http1.x and http2 are similar, so this experiment again fixes the delay at 10 ms and the number of resources at 30.
Condition / network profile | Regular 2G | Good 2G | Regular 3G | Good 3G | Regular 4G | Wifi |
---|---|---|---|---|---|---|
Delay time (ms) | 10 | 10 | 10 | 10 | 10 | 10 |
Number of resources (count) | 30 | 30 | 30 | 30 | 30 | 30 |
Resource size (MB) | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 |
Measured time (s): http1.x | 222.66 | 116.64 | 67.37 | 32.82 | 11.89 | 0.87 |
Measured time (s): http2 | 138.06 | 71.02 | 40.77 | 20.82 | 7.70 | 0.94 |
V. HTTP2 Server Push Experiment
This experiment focuses on how the network environment affects the speed of server push. The requested/pushed resource is a 290 KB JS file. The experiment is repeated ten times in each network environment and the average value is recorded in the tables below.
Condition / network profile | Regular 2G | Good 2G | Regular 3G | Good 3G | Regular 4G | Wifi |
---|---|---|---|---|---|---|
Total time, client request (s) | 9.59 | 5.30 | 3.21 | 1.57 | 0.63 | 0.12 |
Total time, server push (s) | 18.83 | 10.46 | 6.31 | 3.09 | 1.19 | 0.20 |
Resource load time, client request (s) | 9.24 | 5.13 | 3.08 | 1.50 | 0.56 | 0.08 |
Resource load time, server push (s) | 9.28 | 5.16 | 3.09 | 1.51 | 0.57 | 0.08 |
Condition / network profile | No Throttling |
---|---|
Total time, client request (ms) | 56 |
Total time, server push (ms) | 18 |
Resource load time, client request (ms) | 15.03 |
Resource load time, server push (ms) | 2.80 |
The tables above reveal a very strange phenomenon: with network throttling enabled (including the Wifi option), server push is far slower than an ordinary client request, yet once throttling is turned off the speed advantage of server push is obvious. The Wifi throttling option simulates a download speed of 30 Mb/s and an upload speed of 15 Mb/s, while the actual network used in the test only reached about 542 KB/s down and 142 KB/s up, nowhere near the Wifi profile. To analyze the cause, we need to understand how "server push" works and where the pushed resources are stored.
The common client request process is as follows:
The process of server push is as follows:
As the diagrams show, server push lets the resources the client will need be sent along with index.html, eliminating the client's follow-up requests. Because no extra requests have to be made and no extra connections established, pushing static resources from the server can greatly improve speed. But this raises another question: where are these pushed resources stored? After reading Issue 5: HTTP/2 Push, I finally found the answer. Look again at the diagram of the server push process:
The resources pushed by the server are placed in a layer that sits between the network and the HTTP cache, which can be understood here as "local" storage (comparable to, though not the same as, localStorage). Once the client has parsed index.html, it requests these resources from this local store; since they are already local, the requests are extremely fast, which is one of the performance advantages of server push. Such a locally served resource still returns a 200 status code rather than a 304 or 200 (from cache) status code. Chrome's network throttling tool inserts throttling in front of any "network request", and because pushed static resources also return a 200 status code, Chrome treats them as network requests and throttles them. This is what caused the anomaly observed in the experiments above.
VI. Research Conclusions
The series of experiments above shows that the performance advantages of http2 lie mainly in multiplexing and server push. When the number of requests is small (roughly fewer than 30), the difference between http1.x and http2 is negligible; http2's advantage only becomes apparent when the number of requests is large and the delay exceeds about 30 ms. Under poor network conditions, http2 also clearly outperforms http1.x. Moreover, if static resources are all delivered via server push, loading speed can be improved dramatically.
In practical terms, thanks to http2 multiplexing, front-end teams no longer need techniques such as concatenating files or building sprite sheets merely to reduce the number of network requests. Beyond that, http2 has little impact on front-end development itself.
Upgrading the server to http2 is simple with the NodeJS approach: swap the https module for the node-spdy module and add a certificate. For the nginx approach, refer to this article: Open Source NGINX 1.9.5 Released with HTTP/2 Support
If you want to use server push, you need to extend the response logic on the server side; how to do so depends on the specific application, but the rough shape is sketched below.
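As a rough illustration only, the following sketch shows what such push logic could look like with the express + node-spdy setup used in this article, based on the res.push API described in node-spdy's documentation; the pushed path /static/app.js and its content are placeholder assumptions, not part of the experiment code:
// Sketch of server push with express + node-spdy (paths and content are illustrative).
const fs = require('fs')
const spdy = require('spdy')
const express = require('express')
const app = express()

app.get('/', (req, res) => {
  // Push app.js along with the HTML so the client never has to request it separately.
  const stream = res.push('/static/app.js', {
    request: { accept: '*/*' },
    response: { 'content-type': 'application/javascript' }
  })
  stream.on('error', (err) => console.error('push failed:', err))
  stream.end(fs.readFileSync(`${__dirname}/static/app.js`))

  // The HTML references the pushed resource as usual.
  res.send('<script src="/static/app.js"></script>')
})

spdy.createServer({
  key: fs.readFileSync(`${__dirname}/server.key`),
  cert: fs.readFileSync(`${__dirname}/server.crt`)
}, app).listen(1002)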
VII. Postscript
Nothing beats doing it yourself: had I not actually designed and run these experiments, I probably would never have discovered that http2 has its own pitfalls, nor the points that need attention when debugging with Chrome.
I hope this article is of some help to those studying http2. As mentioned at the beginning, if you find any problems in the experimental design, or can think of a better experimental method, please let me know; I will study your suggestions carefully!
The source code required for the experiment is attached below:
1. Client Page
<!-- http1_vs_http2.html -->
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>http1 vs http2</title>
<script src="//cdn.bootcss.com/jquery/1.9.1/jquery.min.js"></script>
<style>
.box {
float: left;
width: 200px;
margin-right: 100px;
margin-bottom: 50px;
padding: 20px;
border: 4px solid pink;
font-family: Microsoft Yahei;
}
.box h2 {
margin: 5px 0;
}
.box .done {
color: pink;
font-weight: bold;
font-size: 18px;
}
.box button {
padding: 10px;
display: block;
margin: 10px 0;
}
</style>
</head>
<body>
<div class="box">
<h2>Http1.x</h2>
<p>Time: <span id="output-http1"></span></p>
<p class="done done-1">× Unfinished...</p>
<button class="btn-1">Get Response</button>
</div>
<div class="box">
<h2>Http2</h2>
<p>Time: <span id="output-http2"></span></p>
<p class="done done-1">× Unfinished...</p>
<button class="btn-2">Get Response</button>
</div>
<div class="box">
<h2>Options</h2>
<p>Request Num: <input type="text" id="req-num"></p>
<p>Request Size (Mb): <input type="text" id="req-size"></p>
<p>Request Delay (ms): <input type="text" id="req-delay"></p>
</div>
<script>
function imageLoadTime(id, pageStart) {
  let lapsed = Date.now() - pageStart;
  document.getElementById(id).innerHTML = ((lapsed) / 1000).toFixed(2) + 's'
}
let boxes = document.querySelectorAll('.box')
let doneTip = document.querySelectorAll('.done')
let reqNumInput = document.querySelector('#req-num')
let reqSizeInput = document.querySelector('#req-size')
let reqDelayInput = document.querySelector('#req-delay')
let reqNum = 100
let reqSize = 0.1
let reqDelay = 300
reqNumInput.value = reqNum
reqSizeInput.value = reqSize
reqDelayInput.value = reqDelay
reqNumInput.onblur = function () {
  reqNum = reqNumInput.value
}
reqSizeInput.onblur = function () {
  reqSize = reqSizeInput.value
}
reqDelayInput.onblur = function () {
  reqDelay = reqDelayInput.value
}
function clickEvents(index, url, output, server) {
  doneTip[index].innerHTML = '× Unfinished...'
  doneTip[index].style.color = 'pink'
  boxes[index].style.borderColor = 'pink'
  let pageStart = Date.now()
  for (let i = 0; i < reqNum; i++) {
    $.get(url, function (data) {
      console.log(server + ' data')
      imageLoadTime(output, pageStart)
      if (i === reqNum - 1) {
        doneTip[index].innerHTML = '√ Finished!'
        doneTip[index].style.color = 'lightgreen'
        boxes[index].style.borderColor = 'lightgreen'
      }
    })
  }
}
document.querySelector('.btn-1').onclick = function () {
  clickEvents(0, 'https://localhost:1001/option?size=' + reqSize + '&delay=' + reqDelay, 'output-http1', 'http1.x')
}
document.querySelector('.btn-2').onclick = function () {
  clickEvents(1, 'https://localhost:1002/option?size=' + reqSize + '&delay=' + reqDelay, 'output-http2', 'http2')
}
</script>
</body>
</html>
2. Server Code (the http1.x and http2 versions differ only where noted in the comments)
const http = require('https') // for the http2 server, replace the 'https' module with the 'spdy' module
const url = require('url')
const fs = require('fs')
const express = require('express')
const path = require('path')
const app = express()
const options = {
  key: fs.readFileSync(`${__dirname}/server.key`),
  cert: fs.readFileSync(`${__dirname}/server.crt`)
}
const allow = (res) => {
  res.header("Access-Control-Allow-Origin", "*")
  res.header("Access-Control-Allow-Headers", "X-Requested-With")
  res.header("Access-Control-Allow-Methods", "PUT,POST,GET,DELETE,OPTIONS")
}
app.set('views', path.join(__dirname, 'views'))
app.set('view engine', 'ejs')
app.use(express.static(path.join(__dirname, 'static')))
app.get('/option/?', (req, res) => {
  allow(res)
  let size = req.query['size']
  let delay = req.query['delay']
  let buf = new Buffer(size * 1024 * 1024)
  setTimeout(() => {
    res.send(buf.toString('utf8'))
  }, delay)
})
http.createServer(options, app).listen(1001, (err) => { // the http2 server listens on port 1002
  if (err) throw new Error(err)
  console.log('Http1.x server listening on port 1001.')
})