1. One ajax request reads everything from the database, the full result is returned to the front end and cached there, and paging happens client-side.
2. The server reads the database once and caches the result; the front end then makes an ajax request per page, fetching only the rows that page needs.
3. A fresh ajax request per page; the server queries the database again each time and returns only that page's data.
I used the second of these three. The first is clearly bad: everything is fetched at once, and most of it may never be used. But is the second actually good, and why? Is there any other way? Is the third method okay?
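For clarity, the second approach can be sketched as a server-side cache that is filled by a single database read and then serves every page request. All names here are hypothetical and the "database read" is simulated with an array:

```javascript
// Sketch of approach 2: read the database once, cache the full result,
// and serve each page from the cache. Names and shapes are illustrative.
const cache = new Map(); // queryKey -> full result set

function loadAll(queryKey, readDb) {
  if (!cache.has(queryKey)) cache.set(queryKey, readDb()); // only one DB read
  return cache.get(queryKey);
}

function getPage(queryKey, readDb, page, pageSize) {
  const all = loadAll(queryKey, readDb);
  return all.slice((page - 1) * pageSize, page * pageSize);
}

// Usage: repeated page requests hit the cache, not the database.
const fakeDb = () => Array.from({ length: 10 }, (_, i) => "user" + i);
console.log(getPage("users", fakeDb, 2, 3)); // → [ 'user3', 'user4', 'user5' ]
```

The trade-off the question is getting at: this saves database reads, but the cache can go stale and holds the whole result set in memory.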
In real projects this is common: a user management screen, for example, needs not just paging but also fuzzy queries on fields such as user name and age.
Clicking a page therefore usually fires another ajax request that carries the filter conditions (page index, fuzzy-query terms, and so on) rather than relying on a cache such as Redis.
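A minimal sketch of that request: the page click builds the URL from the current paging and filter state. The endpoint and parameter names below are assumptions, not a real API:

```javascript
// Hypothetical helper: turn the current paging/filter state into a query
// string. The endpoint (/api/users) and parameter names are made up.
function buildUserQuery({ page = 1, pageSize = 20, name = "", age = "" } = {}) {
  const params = new URLSearchParams();
  params.set("page", String(page));
  params.set("pageSize", String(pageSize));
  if (name) params.set("name", name); // fuzzy-match term for user name
  if (age) params.set("age", age);    // fuzzy-match term for age
  return "/api/users?" + params.toString();
}

// Each page click (or filter change) fires a fresh request:
// fetch(buildUserQuery({ page: 3, name: "li" })).then(r => r.json()).then(render);
console.log(buildUserQuery({ page: 3, name: "li" }));
// → /api/users?page=3&pageSize=20&name=li
```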
A Redis cache generally only pays off for large result sets, say tens of thousands of rows per query; with only about 1,000 rows, real projects usually just query the database directly, e.g. MongoDB, where limit is sufficient.
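On the server side this is the per-page query of approach 3. A minimal sketch, with the collection simulated by a plain array so the paging logic is visible; a real handler would run something like `collection.find(filter).skip(skip).limit(pageSize)` in MongoDB:

```javascript
// Simulated server-side paging: filter, then skip/limit, as MongoDB would.
// The user data is made up for illustration.
function pageUsers(users, { page = 1, pageSize = 2, name = "" } = {}) {
  // Fuzzy filter on name, analogous to a MongoDB $regex condition.
  const filtered = name ? users.filter(u => u.name.includes(name)) : users;
  const skip = (page - 1) * pageSize; // same value you would pass to .skip()
  return {
    total: filtered.length,                      // for rendering page numbers
    rows: filtered.slice(skip, skip + pageSize), // equivalent of .limit(pageSize)
  };
}

const users = [
  { name: "alice", age: 30 },
  { name: "bob", age: 25 },
  { name: "alina", age: 28 },
  { name: "carol", age: 22 },
];

console.log(pageUsers(users, { page: 2, pageSize: 2 }).rows.map(u => u.name));
// → [ 'alina', 'carol' ]
```

Because every request re-runs the query, the result always reflects the current data and filter, which is why this pattern is the usual default.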
Don't agonize over performance up front; when a real project actually hits a large data volume, deal with it then.