How can MongoDB collections with tens of millions of documents improve query efficiency?

  mongodb, question

For example, I have a message collection that stores messages. About 1 million documents are inserted every month, so tens of millions of messages accumulate in a year.
The message collection also needs to be queried frequently.
Now, the more data there is, the slower the queries become, and the server's CPU usage is too high.
How should I optimize this? Is it necessary to split the collection?

You should know that the time complexity of an index lookup is O(log2(n)), and this curve grows only very slowly as the data volume increases. This means that if your query efficiency deteriorates as the data volume grows, it is most likely because the query fails to hit an index, or hits one that is inefficient for that query. If that is the case, checking the indexes is what you need to do; slow queries are written to the log file, so take a look there.
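The O(log2(n)) claim is easy to check with quick arithmetic: the worst-case depth of a balanced index lookup barely grows even when the collection grows 10×. A minimal sketch in plain JavaScript (illustrative arithmetic only, not MongoDB code; the 1M-per-month figures are taken from the question):

```javascript
// Worst-case comparisons for a lookup in a balanced index: ceil(log2(n)).
function lookupDepth(n) {
  return Math.ceil(Math.log2(n));
}

// One year vs. ten years of data at ~1 million messages per month:
const oneYear = lookupDepth(12_000_000);   // 12M documents → 24
const tenYears = lookupDepth(120_000_000); // 120M documents → 27

console.log(oneYear, tenYears); // 24 27 — 10x the data, only 3 more steps
```

So an indexed query over ten years of messages costs only a few more comparisons than one over a single year; a query that slows down linearly with data volume is almost certainly scanning instead of using an index.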
The above is the theory. In practice, if your indexes have been optimized as far as they can go, does that mean there will be no efficiency problems? In the real world you are also limited by memory. Although the efficiency of the index itself does not deteriorate significantly, the amount of index data that fits in memory is limited. As the data volume grows, memory can no longer hold the whole index, and queries then need to swap between memory and disk (page faults). Depending on how severe the memory shortage is, this also affects efficiency.
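You can get a rough feel for when this second limit kicks in with back-of-the-envelope arithmetic. A sketch, assuming roughly 30 bytes per index entry (a made-up figure for illustration; the real size depends on key size and compression, and in the mongo shell `db.messages.totalIndexSize()` reports the actual number):

```javascript
// Rough estimate of one index's size in GB, to compare against server RAM.
// ASSUMPTION: ~30 bytes per index entry; measure the real value with
// db.messages.totalIndexSize() in the mongo shell.
function indexSizeGB(docCount, bytesPerEntry = 30) {
  return (docCount * bytesPerEntry) / 1024 ** 3;
}

const tenYears = indexSizeGB(120_000_000); // ~3.35 GB for a single index
const ramGB = 8;                           // hypothetical server RAM
console.log(tenYears < ramGB);             // one index still fits...
// ...but several indexes plus the working set can exceed RAM,
// and that is when page faults start to hurt.
```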
On the whole, the first situation is the one we usually encounter. The second case is also common, but its impact on efficiency is far smaller than that of the first.