Practice of API Automation Test for Yixin Payment and Settlement System

  API, automated testing

Basic steps for API testing

Generally speaking, API testing consists of three basic steps:

1. Prepare test data;

2. Initiate a request to the API under test through a common or self-developed API testing tool;

3. Verify the returned response.

Common API testing tools include the command-line tool cURL, the graphical tools Postman and SoapUI, and JMeter, which also supports API performance testing.
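The three steps above can be sketched in a single script. Everything here is illustrative: the request payload, the response fields, and the `send_request` stub are stand-ins so the example is self-contained (a real test would issue an HTTP call, for example via urllib or a tool such as cURL or Postman).

```python
# A minimal sketch of the three basic API-testing steps.

def prepare_test_data():
    # Step 1: prepare test data for the request (hypothetical fields).
    return {"order_no": "TEST20240001", "amount": "1.00"}

def send_request(payload):
    # Step 2: initiate the request to the API under test.
    # Stubbed here; a real test would perform an HTTP call.
    return {"code": "0000", "status": "SUCCESS", "order_no": payload["order_no"]}

def verify_response(resp, expected):
    # Step 3: verify the returned response field by field.
    return all(resp.get(k) == v for k, v in expected.items())

data = prepare_test_data()
resp = send_request(data)
assert verify_response(resp, {"code": "0000", "status": "SUCCESS"})
```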

Examples of Complex API Scenarios

Basic testing tools are enough for API testing in simple scenarios. In real projects, however, we design more complex test scenarios to solve practical problems. Here are some typical scenarios from our projects.

Scenario 1: Serial API Calls

Take agreement payment as an example. After third-party payment companies connected to the clearing network, agreement payment replaced direct withholding. During agreement payment, the user must enter the verification code sent by the bank to complete card binding. At the interface level, the order is: first call the protocol-signing API; after it returns success and the SMS verification code is obtained, call the withholding API with the SMS code as an input parameter. The signing and withholding APIs are thus called sequentially, with the SMS code retrieved from the mobile phone between the two calls. To improve efficiency, this whole flow needs to be automated.
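The serial flow above (sign, fetch the SMS code, withhold) can be sketched as follows. All three inner functions are stand-ins for the real payment interfaces; the field names and the "0000" success code are assumptions for illustration.

```python
# A sketch of the serial call flow: signing API -> SMS code -> withholding API.

def call_sign_api(card_no):
    # Synchronous result; "0000" means the signing request succeeded and the
    # bank has sent an SMS verification code to the user's phone.
    return {"code": "0000", "request_no": "REQ001"}

def fetch_sms_code(request_no):
    # In the real platform a phone app uploads the SMS code to the server;
    # here we return a fixed code.
    return "123456"

def call_withhold_api(request_no, sms_code):
    # The SMS code obtained between the two calls is an input parameter here.
    return {"code": "0000", "bind_status": "BOUND"}

def agreement_pay(card_no):
    sign = call_sign_api(card_no)
    if sign["code"] != "0000":
        return "SIGN_FAILED"
    sms = fetch_sms_code(sign["request_no"])
    hold = call_withhold_api(sign["request_no"], sms)
    return "BOUND" if hold["code"] == "0000" else "WITHHOLD_FAILED"
```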

Scenario 2: API Interface Encryption

To keep API interfaces secure, calls between systems, and between modules within a system, must be encrypted. Common methods include DES, AES, RSA, and MD5 digests. Each system may use a different method (agreed between the interface caller and the interface provider), which means API testing needs to support several encryption methods automatically. Some systems also return encrypted response messages, which must be recognized and decrypted.
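As one concrete example of such a scheme, here is a sketch of MD5-digest request signing: sorted key=value pairs plus a shared secret are hashed, and the provider recomputes the digest to verify the request. The parameter names, concatenation rule, and secret are assumptions agreed between caller and provider, not the actual scheme of any particular system.

```python
import hashlib

SECRET = "test-secret"  # illustrative shared secret agreed by both sides

def md5_sign(params: dict, secret: str) -> str:
    # Concatenate sorted key=value pairs, append the secret, hash with MD5.
    plain = "&".join(f"{k}={params[k]}" for k in sorted(params)) + secret
    return hashlib.md5(plain.encode("utf-8")).hexdigest()

def verify_sign(params: dict, secret: str) -> bool:
    # The provider strips the sign field, recomputes, and compares.
    received = params.pop("sign")
    return received == md5_sign(params, secret)

req = {"order_no": "T001", "amount": "1.00"}
req["sign"] = md5_sign(req, SECRET)
assert verify_sign(dict(req), SECRET)
```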

Scenario 3: Asynchronous API Testing

An asynchronous API returns a synchronous response after the request is issued, but that response is not the final processing result; the final result arrives via callback or active query. For such APIs, verifying the synchronous response is only the first step; the values in the DB and MQ, and the success of the asynchronous callback, must also be verified. For callbacks, we can simulate the callback address to verify success. For active queries, we check the status value in the DB, but the moment when the result becomes available is uncertain, anywhere from minutes to hours, so a scheduled DB-query task is needed for verification.
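The active-query side can be sketched as a polling loop: a scheduled task repeatedly queries the order status in the DB until a final state appears or the attempt budget for this run is used up. The state names are illustrative, and `query_db` is injected so the example needs no real database.

```python
# A sketch of scheduled-task polling for an asynchronous result.

FINAL_STATES = {"SUCCESS", "FAILED"}

def poll_async_result(query_db, order_no, max_attempts=5):
    for _ in range(max_attempts):
        status = query_db(order_no)
        if status in FINAL_STATES:
            return status
    return "TIMEOUT"  # no final state yet; retry on the next scheduled run

# Simulate a DB whose status flips to SUCCESS on the third query.
answers = iter(["PROCESSING", "PROCESSING", "SUCCESS"])
assert poll_async_result(lambda n: next(answers), "T001") == "SUCCESS"
```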

Scenario 4: External Dependencies in API Testing

If API A calls API B and B is unavailable, how to test API A must be considered. For example, the payment system depends on third-party payment channels and banks, and not all third parties provide a test environment. The core idea is to build a MockServer and make it as general as possible. We developed a Mock system, aMock: interface information is entered through a page and stored in the database, the configured mock interfaces are accessed through Nginx, request information is processed uniformly in the background, and the specific response is matched by URL and message features.
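The matching step can be sketched as a pure function: rules (in the real system entered via the page and stored in the DB) pair a URL and a message feature with a canned response. The rule format and field names here are invented for illustration, not aMock's actual schema.

```python
# A toy version of the aMock matching idea: pick a response by URL plus a
# feature substring found in the request body.

MOCK_RULES = [
    {"url": "/bank/pay", "feature": '"amount":"1.00"', "response": {"code": "0000"}},
    {"url": "/bank/pay", "feature": '"amount":"2.00"', "response": {"code": "FAIL"}},
]

def match_mock(url, body):
    for rule in MOCK_RULES:
        if rule["url"] == url and rule["feature"] in body:
            return rule["response"]
    return {"code": "NO_RULE"}  # no configured rule matched

assert match_mock("/bank/pay", '{"order":"1","amount":"1.00"}') == {"code": "0000"}
```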

API test platform

Our API test platform is business-scenario-based: it supports the common needs of all business lines while extending its API testing capability to fit the individual characteristics of each business. In addition, use cases should be well organized and results clearly displayed. The platform architecture is as follows:



TestCaseManagement: Test case management covers establishing and maintaining the data relationships from test cases to test case sets to test tasks. A test case is the smallest unit; a test case set is a collection of test cases along some dimension; a test task is a test execution, triggered immediately or on a schedule. Only test case sets can be executed.

Util: Tool-class encapsulation, mainly providing data encryption and decryption, data type conversion, configuration file reading and writing, data dictionary caching, etc.

Validator: Verification encapsulation for interface response fields and database fields.

RiskManagement: Risk-control processing. Because real funds are paid, built-in risk-control rules are required to keep funds safe and risks controllable.

Timer: Scheduled task services, including:

1) for serial API use cases, judging the state of preceding use cases;

2) database verification for asynchronous APIs;

3) marking timed-out API use cases as failed;

4) scheduled task plans.

MockServer: Mock services for the external systems that use cases depend on.

Portal: The API test platform's portal site. Test case entry and maintenance, test task execution, result viewing, export, etc. are all operated through the portal.

DB: Stores test case data, corresponding test tasks, test report data, project configuration, etc.

At present, more than 1,200 cases are maintained across projects on the API test platform, mainly regression cases, and the number keeps growing. As use cases accumulated, the platform went through a series of optimizations. Below are some thoughts from that process.

Test data preparation

Running a large number of API use cases calls for black-box testing, that is, separating and decoupling test data from test code, which makes the test data easier to maintain and keeps it accurate. The use case design format is as follows:




Several key data nodes are provided by a DataProvider. To increase test coverage, the database holds many similar test records, for example many "four-element" records (bank card number, mobile phone number, ID card number, name). When a large number of use cases need to read them, the records can be cached and loaded into cList; through cyclic traversal, each test record is read uniformly. The following code replaces the key data nodes, assigning cList data to the corresponding variables in turn.
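Since the original listing is not shown, here is a sketch of the idea: cycle over the cached four-element records and substitute each record's fields into the use case's placeholders in turn. The placeholder syntax, template, and sample records are all invented for illustration.

```python
import itertools
import json
import string

# Cached four-element test records (the cList of the text), illustrative values.
c_list = [
    {"card_no": "6222020000000001", "phone": "13800000001",
     "id_no": "110101199001010011", "name": "Zhang San"},
    {"card_no": "6222020000000002", "phone": "13800000002",
     "id_no": "110101199001010029", "name": "Li Si"},
]
cursor = itertools.cycle(c_list)  # uniform cyclic traversal of the cache

# A use case body with key data nodes as ${...} placeholders.
template = ('{"cardNo":"${card_no}","mobile":"${phone}",'
            '"idNo":"${id_no}","name":"${name}"}')

def fill_case(tpl):
    # Replace the key data nodes with the next cached record's fields.
    record = next(cursor)
    return string.Template(tpl).substitute(record)

case1 = json.loads(fill_case(template))
case2 = json.loads(fill_case(template))
assert case1["cardNo"] != case2["cardNo"]  # successive cases get fresh data
```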


Logic Control of Test Execution

In many cases the test is a scenario-based API test involving sequential invocation of use cases. As shown in the figure below, “Signing-Success -kftn- Agreement” depends on the execution of “Signing-Success -kftn SMS”; the relationship is configured when the use case is added.

During execution, use cases are divided into two categories according to their attributes and stored in two lists. Use cases without preconditions are executed immediately; use cases with preconditions have TestStatus set to 0 and wait for scheduled-task polling to trigger their execution. The classification execution code is as follows.
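As the original listing is not shown, here is a sketch of that classification step. The field names (`pre_case`, `TestStatus`) are illustrative.

```python
# Split use cases into "run now" and "wait for polling" lists.

cases = [
    {"id": 1, "pre_case": None},   # no precondition
    {"id": 2, "pre_case": 1},      # depends on case 1
    {"id": 3, "pre_case": None},
]

run_now, wait_list = [], []
for case in cases:
    if case["pre_case"] is None:
        run_now.append(case)        # execute immediately
    else:
        case["TestStatus"] = 0      # wait for the scheduled task to poll
        wait_list.append(case)

assert [c["id"] for c in run_now] == [1, 3]
assert [c["id"] for c in wait_list] == [2]
```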

The scheduled task runs once a minute. The following code judges the execution status of the preceding API: only “0000” indicates success, in which case the current API can be executed, with the result data of the preceding use case read and passed in. If the preceding API fails, the task stops. The same logic covers the sequential execution of longer API chains. Even use cases with external dependencies, such as SMS verification codes, can be automated: a mobile phone app we wrote uploads SMS codes to the server, a delayed fetch retrieves the code, and the use case's status and result are recorded in the DB and handed to the next API, completing the sequential execution of multiple use cases. In addition, test task execution is encapsulated as a RESTful interface, which combines flexibly with the CI/CD system the team is currently developing.
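The per-minute status check can be sketched like this, with the actual listing unavailable. The "0000" success code is from the text; the function and field names are illustrative.

```python
# One polling step for a use case that waits on a preceding API.

def step_waiting_case(case, results):
    pre = results.get(case["pre_case"])
    if pre is None:
        return "WAIT"                 # preceding case has not finished yet
    if pre["code"] == "0000":
        case["input"] = pre["data"]   # pass the preceding result downstream
        return "RUN"
    return "STOP"                     # preceding API failed; stop the task

case = {"id": 2, "pre_case": 1}
assert step_waiting_case(case, {}) == "WAIT"
assert step_waiting_case(case, {1: {"code": "0000", "data": {"sms": "123456"}}}) == "RUN"
assert case["input"]["sms"] == "123456"
```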


Verification of Test Results

By analyzing the business, API result checking divides roughly into two types: response checks and database checks. Response verification checks the response message fields, either by exact match or by fuzzy match with regular expressions. Database verification is based on scheduled tasks, with the verification method set in the use case in an agreed format. For example, the SQL verification condition below queries the return fields in the pre-production environment by order number and other conditions and checks whether the status is 7, which determines whether the use case succeeded.


bs_outpay.trans_batch tb LEFT JOIN bs_outpay.es_business_send bs ON tb.business_batch_no = bs.entity_uuid AND bs.entity_status <> 2 WHERE tb.outer_batch_no IN (?) ORDER BY tb.CREATED_TIME DESC |,| {"status":"7"}

Use case states fall into four kinds: success, failure, in-process, and timeout, mapped by configuring corresponding SQL query conditions. Success and failure are final states; in-process cases keep being queried periodically. Timeout is an internal state: currently, a case that has reached no final state within one hour is marked as timed out, which fails the use case and raises an alarm requiring manual intervention. All these rules are added when the use case is created and edited. As shown in the figure below, one use case includes response checks (value check, key check) and a database check. This relatively flexible design basically meets the requirements of complex API test scenarios. Note that many applications do not allow an external test platform to access their database directly. Our solution is a data query service deployed inside the system environment, which provides only query functionality and uses encrypted verification so that both communicating parties (the test platform and the data query service) can trust each other.
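The state rules can be sketched as a small resolver: final states pass through, an over-budget case becomes timeout, and everything else stays in process for the next scheduled run. The one-hour limit is from the text; the state names are illustrative.

```python
import datetime

TIMEOUT = datetime.timedelta(hours=1)  # current platform limit per the text

def resolve_state(db_status, started_at, now):
    if db_status in ("SUCCESS", "FAILURE"):
        return db_status              # final states, nothing more to do
    if now - started_at > TIMEOUT:
        return "TIMEOUT"              # fails the use case and alarms
    return "IN_PROCESS"               # re-query on the next scheduled run

t0 = datetime.datetime(2024, 1, 1, 12, 0)
assert resolve_state("PROCESSING", t0, t0 + datetime.timedelta(minutes=5)) == "IN_PROCESS"
assert resolve_state("PROCESSING", t0, t0 + datetime.timedelta(hours=2)) == "TIMEOUT"
```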

Generally speaking, a test platform or framework can be understood, at some level, as a tool chain. In developing this platform we used a Spring MVC + Hibernate stack for logic control, amazingUI for use case management, ECharts for result display, and Jenkins for task scheduling. The users are testers from all business lines; they do not need to know the code, but they do need a good understanding of the system structure and the use case rules in order to design use cases that fit the test scenario.

The design of any test platform should be business-driven. Our follow-up strategy for the API platform is to keep adding scenario-based functions to support more business types, such as the end-of-day batch tasks in the clearing and settlement system and data verification of reconciliation files, and to add high-concurrency support in combination with performance testing tools.


Author: Sun Ying

Yixin Institute of Technology