If there is a large amount of data in MongoDB, skip and limit are not recommended. What other methods can be used instead?

  mongodb, question
  1. At a MongoDB user group meetup I attended two days ago, I heard the speaker say not to use skip and limit when there is a large amount of data.
    This is because skip has to walk past pageSize × pageNumber documents one by one before it can return a page, so the cost grows with page depth. The speaker did suggest another method, but only mentioned it in passing.

    I’m asking here to find out exactly how to do it. Looking for ideas.

If you only need “next page” or “previous page”, you can query for documents whose _id is greater (or less) than the last _id on the current page, combined with sort + limit.
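A minimal sketch of this range-based pagination in Python, using an in-memory list of documents sorted by _id to stand in for an indexed collection (with pymongo the equivalent query would be `find({"_id": {"$gt": last_id}}).sort("_id", 1).limit(page_size)`):

```python
def next_page(docs, last_id, page_size):
    """Return the next page after last_id.

    docs: list of dicts sorted ascending by "_id", simulating an
    indexed MongoDB collection. Equivalent MongoDB query:
        db.coll.find({"_id": {"$gt": last_id}})
               .sort("_id", 1).limit(page_size)
    Pass last_id=None for the first page.
    """
    page = [d for d in docs if last_id is None or d["_id"] > last_id]
    return page[:page_size]

docs = [{"_id": i} for i in range(1, 11)]
page1 = next_page(docs, None, 3)               # _ids 1, 2, 3
page2 = next_page(docs, page1[-1]["_id"], 3)   # _ids 4, 5, 6
```

Because the filter is a range scan on the _id index, each page costs roughly O(page_size) regardless of how deep you are, whereas skip must discard every preceding document.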
If you want to jump to an arbitrary “page xxx” with complete accuracy, there is actually no good way. Paging is inherently “counting” logic: with or without an index, that counting cost cannot be avoided.
When the page count is very large, few people care about results past the tens of millions anyway. You can cache the list of page-boundary _ids in Redis and refresh it periodically.
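A sketch of that caching idea, using a plain Python list as a stand-in for the Redis cache (in practice you would store the boundary list in Redis, e.g. with a sorted set or a plain list key, and rebuild it on a timer):

```python
def build_page_index(sorted_ids, page_size):
    """Precompute the first _id of every page.

    sorted_ids simulates the collection's _ids in index order.
    Cache the returned list (e.g. in Redis) and rebuild it
    periodically, instead of counting documents on every request.
    """
    return [sorted_ids[i] for i in range(0, len(sorted_ids), page_size)]

def jump_to_page(sorted_ids, page_index, page_size, page_no):
    """Serve page page_no (1-based) from the cached boundaries.

    Look up the cached boundary _id, then do a cheap range scan,
    equivalent to: find({"_id": {"$gte": boundary}})
                   .sort("_id", 1).limit(page_size)
    """
    boundary = page_index[page_no - 1]
    return [i for i in sorted_ids if i >= boundary][:page_size]

ids = list(range(100, 200))
index = build_page_index(ids, 10)   # boundaries: 100, 110, 120, ...
page3 = jump_to_page(ids, index, 10, 3)  # _ids 120..129
```

The trade-off is staleness: pages drift slightly between rebuilds as documents are inserted or deleted, which is usually acceptable for deep pages that few users visit.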
The root of the problem is that the B+ tree structure an index relies on cannot compute a document’s rank (its ordinal position) directly, so “the Nth document” always requires counting.