On Performance Testing

Part 1: performance testing

Performance testing uses automated testing tools to simulate normal, peak, and abnormal load conditions and to measure the system's performance indicators under each of them.

A. categories

Performance tests include load tests, stress tests, benchmark tests, etc.

I. load testing

By testing the system's performance under conditions up to and including resource overload, design errors can be uncovered or the system's load capacity verified.

Ii. stress testing

Also called strength testing. Stress testing simulates the actual software and hardware environment and the system load produced by real users, then runs the system under test for an extended time or under excessive load to evaluate its performance, reliability, and stability.

Iii. benchmark testing

Benchmark testing measures the system's performance under a fixed, repeatable baseline workload, so that results after tuning or new releases can be compared against the baseline.

Part 2: Purpose of Performance Test

To verify whether the software system can meet the performance targets put forward by the user, discover performance bottlenecks in the system, guide code optimization, and ultimately arrive at an optimized system.

I. system tuning

Ii. Identify weaknesses in the system

Iii. Evaluation of system capabilities

Iv. Verify the stability and reliability of the system

Part 3: Performance Test Process

Set performance test targets, select performance test tools, design the performance tests, execute the performance test scripts, monitor and analyze the system, and tune performance.

A. objectives

For example: 3,000 users online, 240 users accessing simultaneously, response time under 2 seconds, and system resource utilization under 30%.
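
As a rough back-of-the-envelope check of what such a target implies (ignoring user think time, which is an assumption rather than part of the stated target): 240 concurrent users each expecting a response within 2 seconds means the system must sustain at least 240 / 2 = 120 requests per second; with realistic think times the required throughput is correspondingly lower.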

B. tools

Mainstream testing tools such as LoadRunner (LR), JMeter, and Locust can be selected. This article mainly introduces LR and JMeter.

C. design

Test script development, load generation rules, scenario design, monitoring methods, and test environment setup.

D. implementation

Carry out benchmark tests, load tests, stress tests, etc. as required, and collect the results.

E. monitoring

Monitor the operation of each node

F. analysis

Analyzing the data requires the cooperation of several roles to uncover the problems behind the numbers and determine the performance bottleneck.

G. tuning

Once the bottleneck is confirmed, tune the software and hardware, then repeat the previous steps to find the most appropriate optimization scheme.

H. performance indicators

I. response time

For page operations, users perceive a response time under 2 s as satisfying and 2~5 s as acceptable; a response time over 5 s is unacceptable.

Interfaces called internally need to respond faster; the exact requirement depends on the type of interface.

Ii. Throughput

Generally, it depends on the business requirements.

Iii. server resource occupancy

  • CPU utilization
  • Memory usage
  • Cache hit rate

Part 4: LR (LoadRunner)

HP LoadRunner is a load testing tool for predicting system behavior and performance. It identifies and locates problems by simulating thousands of concurrent users to generate load while monitoring performance in real time.

A. easy creation of virtual users

I. Using LoadRunner's Virtual User Generator (VuGen), you can easily create system load. The engine generates virtual users (Vusers) that simulate the business operations of real users.
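
As a rough illustration of what VuGen produces (the URL and step name below are invented, not taken from this article), a recorded Vuser script is plain C built from LoadRunner API calls:

    Action()
    {
        // Hypothetical recorded step: open the home page of the system under test
        web_url("home",
                "URL=http://example.com/",
                "Resource=0",
                "Mode=HTML",
                LAST);

        // Simulated user think time between operations, in seconds
        lr_think_time(3);

        return 0;
    }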

B. creating real loads

I. After the virtual users are created, you need to define the load plan, the mix of business processes, and the number of virtual users. With LoadRunner's Controller, you can quickly organize a multi-user test scenario.

C. recording scripts

I. parameterization

Parameterization makes scripts adapt better to changes in the environment and in the test data.

When the scenario runs, each virtual user substitutes different parameter values, which makes the simulation more realistic.
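
A minimal sketch of parameterization in a Vuser script, assuming a hypothetical login request and a parameter named UserName defined in VuGen's parameter list:

    Action()
    {
        // "{UserName}" is replaced at runtime with the next value from the
        // UserName parameter list (e.g. a data file of test accounts)
        web_submit_data("login",
                "Action=http://example.com/login",
                "Method=POST",
                "Mode=HTML",
                ITEMDATA,
                "Name=username", "Value={UserName}", ENDITEM,
                "Name=password", "Value=secret123",  ENDITEM,
                LAST);

        // lr_eval_string resolves the parameter when the script needs the value itself
        lr_output_message("logged in as %s", lr_eval_string("{UserName}"));

        return 0;
    }

Each Vuser (and each iteration, depending on the parameter's update policy) picks a different value, which is what makes the simulated load more realistic.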

Ii. correlation and sessions

Most of the time, when a script processes data it needs to analyze the data returned by the server. If that data changes with every request, the script must capture it dynamically each time; this is what correlation is for. In short, correlation handles the dynamic data returned by the server.
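
A sketch of correlation, assuming the server returns a dynamic token in the login page response (the boundaries below are invented for illustration):

    // Register the capture BEFORE the request whose response contains the value
    web_reg_save_param("SessionID",
            "LB=token=\"",     // left boundary of the dynamic value
            "RB=\"",           // right boundary
            "Ord=1",           // take the first occurrence
            LAST);

    web_url("login_page",
            "URL=http://example.com/login",
            "Resource=0",
            "Mode=HTML",
            LAST);

    // Reuse the captured value in a later request
    web_url("account",
            "URL=http://example.com/account?token={SessionID}",
            "Resource=0",
            "Mode=HTML",
            LAST);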

Iii. creating rendezvous points

A rendezvous point makes Vusers wait until the required number have arrived and then perform an operation at the same time; you only need to insert a meaningfully named rendezvous call (lr_rendezvous) before the corresponding request.
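
A minimal sketch, assuming a hypothetical checkout request that all Vusers should hit at the same moment:

    // All Vusers pause here until the rendezvous policy is satisfied,
    // then they are released together to load the same request simultaneously
    lr_rendezvous("checkout");

    web_url("checkout",
            "URL=http://example.com/checkout",
            "Resource=0",
            "Mode=HTML",
            LAST);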

D. execution and monitoring

I. After the performance test is started, the tool generates load according to the configured scenario. During execution, you need to observe how the scripts are running as well as the performance indicators of the system under test; LoadRunner's monitors are used to view this information.
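
The response times shown in the Controller's graphs are typically reported per transaction; a sketch of wrapping a business step in a transaction (the names are illustrative):

    // Everything between start and end is timed and reported as "submit_order"
    lr_start_transaction("submit_order");

    web_url("submit_order",
            "URL=http://example.com/order/submit",
            "Resource=0",
            "Mode=HTML",
            LAST);

    // LR_AUTO lets LoadRunner decide automatically whether the transaction passed
    lr_end_transaction("submit_order", LR_AUTO);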

E. analysis reports

I. When a performance test is completed, various performance analysis reports are created, covering CPU-related metrics, throughput, concurrency, and so on.

Part 5: JMeter

A. introduction to JMeter

Apache JMeter is a 100% pure Java desktop application designed for stress testing and performance measurement. It was originally designed for testing Web applications but has since been extended to other testing areas. Apache JMeter can be used to test the performance of both static and dynamic resources (files, Servlets, Perl scripts, Java objects, databases and queries, FTP servers, etc.). It can simulate heavy loads on a server, network, or object to test its strength or to analyze overall performance under different load types.

Advantages: open source, lightweight, no installation required, cross-platform; it also supports secondary development and extension.

Disadvantages: it lacks built-in support for the end-to-end performance testing process, and the report presentation is not friendly enough (both are gradually being remedied by plug-ins).

B. JMeter performance testing principle

JMeter script principle: JMeter's core code encapsulates a variety of page and API request types, provides a GUI for filling in parameters, and generates executable XML script files. JMeter then parses these script files and issues requests over the corresponding protocols. This makes scripts easy to write, easy to use, and undemanding in terms of coding skill: you only need to know the parameters of the relevant request protocol, and debugging is convenient. Pluggable samplers allow the testing capability to be extended without limit, and protocols that are not yet supported can be added through secondary development.

JMeter load generation principle: the principle is similar to LoadRunner's. Multiple threads are used to simulate multiple users, and the complex behavior of real users is reproduced by controlling how threads start and run. A variety of (extensible) timers can be configured: a synchronizing timer releases users at the same moment, i.e. the rendezvous (aggregation) point concept, while a wait-time timer simulates the time users spend between operations. This brings the test closer to real access patterns, so the resulting performance data is more realistic.

JMeter data collection principle: JMeter monitors its threads and can process the run results, for example passing them to the GUI for display and curve plotting. Results can also be written to a log file during a non-GUI run and analyzed afterwards, or sent in real time to a time-series database for monitoring, so that a run can be observed dynamically.

C. JMeter performance test steps (assuming JMeter has been selected as the test tool)

Analyze and determine performance requirements: first determine the objectives and requirements of this performance test, collect the performance requirement parameters, and define the test environment and the pass/fail criteria; then determine the scenarios to be tested.

Analyze, write, and debug scripts: write scripts for each scenario determined earlier, configure the requests, use timers to simulate rendezvous points and user wait times, run the script with a single thread to check that it executes as expected, and debug it until it does.

Set up the test environment: build an environment that meets the requirements and prepare monitoring of the servers' running state (CPU, memory, network, DB, etc.) in advance, to ensure that the system under test is correctly configured and running properly.

Execute the performance test and run the scripts: determine the number of users sending requests and check whether the load generator can drive that level of concurrency (based on its CPU and memory). If a single machine cannot, use distributed load generators, and make sure the load generators and the test environment are on the same network with enough capacity to support the test. Once this is confirmed, set the run parameters according to the requirements and execute the performance scripts in non-GUI mode.
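
As an illustration of a non-GUI run (the file names are placeholders, not from this article), a typical invocation is:

    jmeter -n -t testplan.jmx -l results.jtl

where -n selects non-GUI mode, -t names the test plan, and -l writes the sample results to a file for later analysis; remote load generators in a distributed run can be driven with the -R option.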

Analyze the performance test data: after the run completes, collect the test data and analyze it. The results can be loaded into a JMeter listener for processing, or handled by other means, to verify whether the requirements are met and whether the performance test passes.

D. JMeter-based performance testing platform

Idea: manage the JMeter version centrally, maintain performance requirements, manage performance scripts and result data in one place, add monitoring of server operation, and provide a one-stop path from requirement definition to result analysis on the test platform, simplifying performance testing and making it visual, controllable, and manageable.

Part 6: summary

1. What is performance testing?

Performance testing uses automated testing tools to simulate normal, peak, and abnormal load conditions and measure the system's performance indicators. Both load testing and stress testing are forms of performance testing. Load testing mainly observes how the system performs under various workloads and how its performance changes as the load changes, while stress testing mainly determines the limits the system can bear. Internet finance serves a very large customer base, so system performance is especially important to us.

2. Why do we do performance testing?

Performance testing uses various tools to simulate controllable and uncontrollable requests. By combining production-like operations and usage scenarios into a set of performance test points, it checks whether the system's performance can meet production requirements. Running and verifying the system under specific conditions validates its carrying capacity, lets us judge through various performance indicators whether the system meets production requirements, and allows risks to be assessed in time: find problems, solve them, and give users a high-quality experience.

3. How do we do performance testing?

So far, our systems have been performance tested with off-the-shelf testing tools. This approach can catch some problems, but it is difficult to simulate combined business scenarios, and the variety of tools makes it hard to collect performance results consistently. The performance test work we are currently developing takes JMeter as the unified core test tool, uses InfluxDB for data collection, and uses Grafana as the performance data display platform. The goal is a complete performance test platform: a single, unified performance testing channel and a common platform for collecting performance indicators. The platform defines performance test plans according to our own business requirements and introduces templates for different businesses, so that reasonable performance tests can be run through the platform interface and different performance graphs can be displayed for different businesses, helping testers locate problems in time.

Source: Yixin Institute of Technology