I am using the latest version of MongoDB.
The collection has only about 25 million documents.
I have already added indexes on the relevant fields:
Querying later pages still gets slower and slower, and physical memory usage has skyrocketed.
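One common cause of later pages slowing down is skip-based pagination: `skip()` still has to walk past every skipped document on each request. A range ("seek") query on an indexed field avoids that. A minimal sketch in the mongo shell, assuming a hypothetical collection `db.items` paginated by `_id`:

```javascript
// Slow for deep pages: skip() walks past all 20,000,000 skipped documents
db.items.find().sort({ _id: 1 }).skip(20000000).limit(50)

// Faster: remember the last _id of the previous page (lastSeenId here is a
// placeholder) and continue from it — the _id index seeks straight there
db.items.find({ _id: { $gt: lastSeenId } }).sort({ _id: 1 }).limit(50)
```

The same pattern works with any indexed sort field, as long as ties are broken deterministically (e.g. by also comparing `_id`).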
In particular, I use the $max aggregation operator to find the maximum value of a certain field, and I suspect this triggers a full collection scan.
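If the aggregation looks like a `$group` with `$max`, MongoDB generally cannot answer it from an index and must examine every document. With an index on the field, an equivalent query can read a single index entry instead. A sketch, assuming a hypothetical collection `db.items` with a numeric field `price` and an index on `{ price: -1 }`:

```javascript
// Full collection scan: $group/$max visits every document
db.items.aggregate([{ $group: { _id: null, max: { $max: "$price" } } }])

// Index-backed equivalent: sort on the indexed field and take one document,
// which reads only one entry from the end of the index
db.items.find({}, { price: 1, _id: 0 }).sort({ price: -1 }).limit(1)
```

Checking the query with `.explain()` should show an IXSCAN rather than a COLLSCAN once the index is in place.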
On the first run against the 20-million-plus documents, memory usage jumped to about 10 GB.
If 20-odd million documents already consume this much, what will happen with hundreds of millions?
Is there a recommended way to optimize this in MongoDB?
MongoDB's WiredTiger engine uses roughly half of the system memory as its cache by default. Check whether your memory consumption exceeds 50%; if it does not, it is within the normal range. If you want to limit the cache size, configure the cacheSizeGB parameter: https://docs.mongodb.com/manu …
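For reference, cacheSizeGB is set under the storage section of the mongod configuration file. A minimal sketch (the 4 GB value is only an illustrative assumption; size it for your workload):

```yaml
# mongod.conf — cap the WiredTiger internal cache
storage:
  wiredTiger:
    engineConfig:
      cacheSizeGB: 4
```

Note this limits only WiredTiger's internal cache, not the total memory of the mongod process, which also uses memory for connections and in-flight operations.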