Recently, the company's Node project was about to go live, so we needed to run a stress test.
The stack is express, and the page under test, 127.0.0.1/test, returns a JSON response.
Benchmarking locally on my Mac with ab (-n 10000/20000/50000, -c 100), RPS consistently comes out around 2000. I don't know whether that's a respectable number, but it felt fairly fast to me.
I've also read plenty of articles (found via Baidu) about benchmarking Node, and other people's results are always in the tens of thousands. I really don't understand what's going on here and would love to find out!
Also, I noticed during testing that it's occasionally unstable: timeouts occur in the last couple of rounds of the benchmark.
This is my first question.
1. When the timeouts happen, is it a problem with Node itself or with ab? How should Node be tuned? If it's ab's fault, which benchmarking tool is stable and closer to real-world behavior?
I then deployed the app to Alibaba Cloud. That server has at least 4 cores and 8 GB of RAM, yet benchmarking it locally on the server gives worse results than my own machine. Using the same test method, RPS comes out around 1100 — nearly half. Later I installed pm2 on the server and started two processes with it.
But the results didn't change at all, and the CPU and memory usage of the two processes barely moved during the benchmark. I'm completely lost...
2. pm2 claims it can increase capacity through load balancing, but the benchmark numbers haven't improved. Could anyone offer some advice?
Finally, in my local environment I used another machine to hit my service over the LAN, and the benchmark result was quite unexpected.
With the same test method, RPS came out at just over 120. I was completely floored...
Could anyone take a look and point me in the right direction?
I think your test results are quite realistic.
For example: you measured 2000 RPS on your Mac. Assuming the Mac has 4 cores, that's 2000/4 = 500 RPS per core, i.e. each core handles one request in 1/500 s = 0.002 s = 2 ms. Handling a request every 2 ms is already very fast. If you don't believe it, open your browser on websites you consider fast and look at the response times of their main requests — you'll see that 2 ms per request is not slow. For comparison, a web app written in plain PHP (Ubuntu, i5-3230M, opcache enabled) can serve read requests within about 5 ms.
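The arithmetic above, as a runnable back-of-envelope check (the 4-core count is an assumption, as noted):

```javascript
// Per-core latency implied by a benchmark result.
const rps = 2000;                       // measured requests per second
const cores = 4;                        // assumed core count of the Mac
const perCoreRps = rps / cores;         // requests per second per core
const msPerRequest = 1000 / perCoreRps; // milliseconds spent per request
console.log(perCoreRps, msPerRequest);  // 500 2
```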
You can also compare against the numbers of a large site like Stack Overflow:
As you can see, even their requests per second (RPS) comes to only about 2424.
http://www.infoq.com/cn/news/2016/03/Stack-Overflow-architecture-insi — figures for February 9, 2016:
- HTTP requests: 209,420,973 (RPS = 209,420,973 ÷ (24 × 3600) ≈ 2424)
- Page loads: 66,294,789 (page loads per second = 66,294,789 ÷ (24 × 3600) ≈ 767)
- HTTP traffic sent: 1,240,266,346,053 bytes (1.24 TB)
- Total data received: 569,449,470,023 bytes (569 GB)
- Total data sent: 3,084,303,599,266 bytes (3.08 TB)
- SQL queries (from HTTP requests): 504,816,843
- Redis hits: 5,831,683,114
- Elasticsearch queries: 17,158,874
- Tag Engine requests: 3,661,134
- Time spent on SQL queries: 607,073,066 ms (168 hours)
- Time spent on Redis hits: 10,396,073 ms (2.8 hours)
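The per-second figures quoted above follow directly from the daily totals:

```javascript
// Average RPS and page loads per second from Stack Overflow's daily totals.
const secondsPerDay = 24 * 3600;  // 86,400
const httpRequests = 209420973;   // HTTP requests on 2016-02-09
const pageLoads = 66294789;       // page loads the same day
console.log(Math.round(httpRequests / secondsPerDay)); // 2424
console.log(Math.round(pageLoads / secondsPerDay));    // 767
```

Note these are daily averages; peak-hour load would be noticeably higher.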
There's really no need to bring up C10K at every turn.