If all the data is loaded into memory, will count become faster?

  mongodb, question

Problem

Looking at MongoDB's slow query log, there are many count operations; the longest slow query takes half a minute.

Solution

I don't want to add a lot of indexes (e.g. on article title, article platform, account number, emotion, etc.). If I simply add more memory (e.g. from 32GB to 128GB), will that speed up count query performance? That is, if all the data is loaded into memory, will count be faster (even without an index)?

Some current indicators:

Current data volume: 20GB
Server memory size: 32GB

db.serverStatus().wiredTiger.cache
    "bytes currently in the cache" : 8458332579
    "maximum bytes configured" : 11811160064

db.serverStatus().mem
{
    "bits" : 64,
    "resident" : 11327,
    "virtual" : 13123
}
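
(As a side note, the two cache figures above can be compared directly in the shell. A minimal sketch, using the exact field names serverStatus reports:)

    // What fraction of the configured WiredTiger cache is in use?
    var c = db.serverStatus().wiredTiger.cache;
    var used = c["bytes currently in the cache"];  // 8458332579 above
    var max = c["maximum bytes configured"];       // 11811160064 above
    print((used / max).toFixed(2));                // ≈ 0.72: the cache is about 72% full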

The simple answer: No.
Memory certainly matters to a database, but you seem to be overestimating its role. Consider this:
Suppose you have 1,000,000 documents and you need to count those matching certain conditions. Without an index, you have to examine all 1,000,000 documents to see which of them match, and then count them. To make those comparisons, the data must first be read from disk, and expanding memory does help with that part. But then what? Comparing 1,000,000 documents still takes considerable time, and CPU usage will climb as well. That is O(n) time complexity.
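You can see this in an explain plan. A minimal sketch (the collection name articles and the field platform are hypothetical, made up for illustration):

    // Without an index on "platform", the count does a full collection scan.
    db.articles.explain("executionStats").count({ platform: "weibo" })
    // In executionStats, expect:
    //   "stage" : "COLLSCAN"       (every document is scanned)
    //   "totalDocsExamined" : <the whole collection>
    // More RAM only means the scan reads from cache instead of disk; it is still O(n).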
If the query can completely hit an index, this process is greatly simplified, because searching an index can be imagined as a binary search, with complexity O(log₂(n)). The time consumption of the two grows roughly as follows (x-axis: data volume, y-axis: time):
[Figure: query time vs. data volume for the O(n) and O(log₂(n)) cases]
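
Continuing the sketch above (same hypothetical articles collection), once the filtered field is indexed, the count can be answered from the index alone:

    // Index the field used in the count condition.
    db.articles.createIndex({ platform: 1 })
    db.articles.explain("executionStats").count({ platform: "weibo" })
    // Expect "stage" : "COUNT_SCAN": the server walks index keys only,
    // so "totalDocsExamined" is 0 and the work grows with the number of
    // matching keys, not with the size of the collection.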

In short, under the same hardware conditions:

  • Without index support, more data means longer counts
  • In theory, when the query completely hits an index, growth in data volume causes only a slight increase in query time.
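
If the concern is maintaining many single-field indexes, note that one compound index can often replace several of them, because a query filtering on a prefix of its fields can still use it. A sketch with the same hypothetical field names:

    // Serves counts filtering on { platform } alone or on
    // { platform, emotion } together (index prefix rule).
    db.articles.createIndex({ platform: 1, emotion: 1 })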