What is a reasonable chunksize for MongoDB sharding?

  mongodb, question

As the title asks.

I've heard people say things like:
with a 20 MB chunk size, data gets lost once about 1 million documents have been inserted into the sharded collection;
with a 63 MB chunk size, data gets lost once about 60 million documents have been inserted;
and that different MongoDB versions behave differently here.

Do you guys have any experience with this value?

“Losing data” and “chunksize” are two unrelated things with no direct logical connection; I don't know who tied them together for you. Since I also don't know the specific scenario in which the data was supposedly lost, I'll just offer some answers that may be useful.
It sounds like what you really care about is losing data, not chunksize. And since the default chunksize is normally fine as it is, I'll skip the chunksize question.
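
(For reference, if you ever do want to inspect or change it: the chunk size is stored, in MB, in the settings collection of the config database. A minimal pymongo sketch, assuming a mongos reachable at localhost:27017; the address and the 64 MB value are placeholders.)

from pymongo import MongoClient

# Connect through a mongos, never directly to a shard (address is a placeholder).
client = MongoClient('mongodb://localhost:27017')

# Read the current chunk size; no document means the 64 MB default is in effect.
print(client.config.settings.find_one({'_id': 'chunksize'}))

# Change it cluster-wide, here to the 64 MB default.
client.config.settings.update_one(
    {'_id': 'chunksize'},
    {'$set': {'value': 64}},
    upsert=True,
)
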
A few words of explanation about “losing data”. For any database, losing data for no reason is intolerable. So if data was lost, then either:

  1. There was an irresistible factor: power failure, hardware damage, network failure, and so on.

  2. It was a configuration issue.

  3. There was a serious bug in the software.

There is nothing you can do about point 1 anyway; its impact should be minimized through replica set replication, which keeps copies of the data on multiple machines.

Second, configuration. If you do not enable the journal (it is enabled by default), you can lose up to roughly the last 30 ms of writes on a power failure or crash. If the data is so important that even a 30 ms window is intolerable, enable the j option:
mongodb://ip:port/db?replicaSet=rs&j=1
(The same option can also be set in code, per request, at whatever granularity you need; see the documentation of the driver you are using.)
This option makes each write block until the journal has been written to disk.
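
For example, with pymongo you can ask for journal acknowledgment for the whole client. A minimal sketch, assuming a replica set named rs reachable at localhost:27017 (both placeholders):

from pymongo import MongoClient

# j=True: every write on this client blocks until the journal is on disk.
client = MongoClient('mongodb://localhost:27017/?replicaSet=rs', j=True)

client.mydb.coll.insert_one({'important': True})  # mydb.coll is a placeholder
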
But is the data safe once it is on disk? Remember that this is a distributed environment, and the safety of a single machine says nothing about the cluster. In an accident, the journal may have reached disk while the data has not yet been copied to the other nodes of the replica set. If the primary is lost at that moment, the remaining nodes elect a new primary, and you get an interesting situation called rollback; read up on it if you are interested. Of course, replication is usually very fast and rollbacks are rare.

Still feel unsafe? Then there is also the w option:
mongodb://ip:port/db?replicaSet=rs&j=1&w=1
The w option blocks a write until it has been acknowledged by w nodes: w=1 means the primary only, while w=2, 3, … n requires that many members to have the data.
Is that safe? Sorry, in a particularly unlucky case (you should really be buying lottery tickets) the data has been copied to exactly those n nodes, and that whole group of nodes fails at the same time. For that we have w=majority: once the cluster loses a majority of its nodes it becomes read-only, so no new data is written and no rollback can occur. When everything recovers, your data is still there.
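
Drivers also let you request this per operation instead of cluster-wide in the URI. A minimal pymongo sketch, assuming a collection mydb.coll (a placeholder):

from pymongo import MongoClient
from pymongo.write_concern import WriteConcern

client = MongoClient('mongodb://localhost:27017/?replicaSet=rs')

# Writes through this handle block until a majority of members have the
# data and the primary has flushed its journal.
safe_coll = client.mydb.coll.with_options(
    write_concern=WriteConcern(w='majority', j=True)
)
safe_coll.insert_one({'must_not_lose': True})
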
Those are the main situations in which data is lost. As you can imagine, the w and j settings buy data safety at a considerable cost in write throughput. That trade-off is a policy you tune according to your own tolerance for data loss; it is not a bug.
One more thought: I often see people in the community who like to do this:

pkill -9 mongod

If you ask me, that is just brutal; why fire artillery at a mosquito the moment you get up? Losing data this way is simply what you deserve. In fact,

pkill mongod

is safe: without -9 this sends SIGTERM, which mongod catches and uses to flush and shut down cleanly. -9 sends SIGKILL, which gives it no chance to do any of that, so that one is on you.
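
You can also ask mongod to stop over a normal connection instead of signaling the process. A minimal pymongo sketch, assuming a standalone mongod at localhost:27017 and sufficient privileges; the server drops the connection while exiting, so the driver raising is expected:

from pymongo import MongoClient
from pymongo.errors import ConnectionFailure

client = MongoClient('mongodb://localhost:27017')
try:
    # Equivalent to a clean SIGTERM shutdown: flush everything and exit.
    client.admin.command('shutdown')
except ConnectionFailure:
    # mongod closes the connection as it exits; this is expected.
    pass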

As for the third point: MongoDB did have bugs that lost data along the way. Versions 3.0.8-3.0.10 were the hardest hit, so avoid them. Then again, what software never has problems during development? The problem in 3.0.10 was fixed in 3.0.11 on the very day it was found, which is fast enough.

Well, after all that, I don't know whether any of it helps the OP. One reminder all the same: describe your problem as clearly as possible. Otherwise all I can do is guess at what kind of problem you hit in what kind of situation, and the most likely outcome is the old saying:

Garbage in, garbage out