MongoDB: error when inserting data, "terminate called" in the shell

  mongodb, question

I am processing a .js file that contains a large number of insert and update commands, one command per line.
The file size is 222 MB.
The error message is as follows:
[error screenshot]

My guess is that a single command may be too long, but processing a .js file directly with mongo shouldn't run into that kind of problem, right?

The system is 32-bit Debian, kernel 2.6.32-5-386.

Searching Stack Overflow and SegmentFault, I only found other people hitting stack traces that start with _ZN5.

It runs successfully with 64-bit MongoDB.
However, there was another error when the 64-bit shell loaded a 2.3 GB file; fortunately the message was clear this time.
The file can be processed after splitting it into smaller pieces of roughly 200 MB each.

It's strange that I didn't see any such restriction mentioned in the MongoDB documentation.

[64-bit error output]

connecting to: localhost/test
 tcmalloc: large alloc 2364833792 bytes == 0x2658000 @
 2014-08-19T21:49:28.069-0400 Assertion: 16569:In File::read(), ::pread for 'univ100-100.js' read 2147479552 bytes while trying to read 2364826029 bytes starting at offset 0, truncated file?
 2014-08-19T21:49:28.164-0400 0x864f81 0x813579 0x7f6f86 0x7f74dc 0x80a61f 0x79781d 0x61f12a 0x621f63 0x3ff161ecdd 0x61a049
 mongo(_ZN5mongo15printStackTraceERSo+0x21) [0x864f81]
 mongo(_ZN5mongo10logContextEPKc+0x159) [0x813579]
 mongo(_ZN5mongo11msgassertedEiPKc+0xe6) [0x7f6f86]
 mongo() [0x7f74dc]
 mongo(_ZN5mongo4File4readEmPcj+0x30f) [0x80a61f]
 mongo(_ZN5mongo5Scope8execFileERKSsbbi+0x61d) [0x79781d]
 mongo(_Z5_mainiPPcS0_+0x52a) [0x61f12a]
 mongo(main+0x33) [0x621f63]
 /lib64/libc.so.6(__libc_start_main+0xfd) [0x3ff161ecdd]
 mongo(__gxx_personality_v0+0x469) [0x61a049]
 exception: In File::read(), ::pread for 'univ100-100.js' read 2147479552 bytes while trying to read 2364826029 bytes starting at offset 0, truncated file?
 
 real    0m7.922s
 user    0m0.045s
 sys     0m2.477s

Thank you very much for your report.

First of all, the symbols in the stack trace are mangled by gcc; you can run them through a demangler to recover the original function names.
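For example, here is a minimal C++ sketch that demangles the first symbol from the trace above using the GCC ABI's abi::__cxa_demangle (piping the symbols through the c++filt command works just as well):

    // demangle.cpp -- minimal sketch: recover the original name of one mangled
    // symbol from the stack trace above. Build with: g++ demangle.cpp -o demangle
    #include <cxxabi.h>
    #include <cstdio>
    #include <cstdlib>

    int main() {
        const char* mangled = "_ZN5mongo15printStackTraceERSo";  // taken from the trace
        int status = 0;
        // abi::__cxa_demangle allocates the result with malloc; free it when done.
        char* name = abi::__cxa_demangle(mangled, nullptr, nullptr, &status);
        if (status == 0 && name != nullptr) {
            std::printf("%s\n", name);   // prints: mongo::printStackTrace(std::ostream&)
            std::free(name);
        } else {
            std::printf("demangling failed (status %d)\n", status);
        }
        return 0;
    }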
Assuming your server is 2.6.3, the same version as the shell, the problem lies directly in the shell's script-loading code, Scope::execFile (visible in the trace above): the shell reads the entire script file into memory and then compiles it. That explains any failure with a 2.3 GB file on a 32-bit machine. But failing on a 64-bit machine as well should not happen, so there is a real bug here.

There are two problems here.
1. To stay compatible with 32-bit operating systems, the JS file size should be capped at 2 GB. The current logic is on the right track, but the upper limit it uses is too large to match its own description.
2. A single pread call is not guaranteed to read all of the requested bytes (on Linux one call transfers at most 2,147,479,552 bytes, which is exactly the number in the error above), so the read in file.cpp should loop, calling pread repeatedly until everything has been read or an error occurs; see the sketch below.
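To make both points concrete, here is a minimal, self-contained sketch using plain POSIX calls. It is not MongoDB's actual File class or file.cpp code; the function names and the exact size cap are illustrative.

    // read_script.cpp -- illustrative sketch only, not MongoDB's file.cpp.
    // Point 1: refuse scripts that would not fit in memory on a 32-bit build.
    // Point 2: loop around ::pread, because one call is not guaranteed to return
    //          everything that was asked for.
    #include <sys/stat.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <cerrno>
    #include <cstdio>
    #include <vector>

    // Read exactly `len` bytes starting at `offset`; false on error or early EOF.
    static bool readAll(int fd, char* buf, size_t len, off_t offset) {
        size_t done = 0;
        while (done < len) {
            ssize_t n = ::pread(fd, buf + done, len - done, offset + done);
            if (n < 0) {
                if (errno == EINTR) continue;   // interrupted by a signal: retry
                return false;                   // real I/O error
            }
            if (n == 0) return false;           // unexpected EOF: truncated file?
            done += static_cast<size_t>(n);
        }
        return true;
    }

    int main(int argc, char** argv) {
        if (argc != 2) {
            std::fprintf(stderr, "usage: %s <script.js>\n", argv[0]);
            return 1;
        }
        int fd = ::open(argv[1], O_RDONLY);
        if (fd < 0) { std::perror("open"); return 1; }

        struct stat st;
        if (::fstat(fd, &st) != 0) { std::perror("fstat"); return 1; }

        // Point 1: keep the whole-file load below 2 GB so it also works on 32-bit.
        const long long kMaxScriptSize = 2LL * 1024 * 1024 * 1024 - 1;
        if (static_cast<long long>(st.st_size) > kMaxScriptSize) {
            std::fprintf(stderr, "script too large to load into memory\n");
            return 1;
        }

        std::vector<char> buf(static_cast<size_t>(st.st_size));
        if (!readAll(fd, buf.data(), buf.size(), 0)) {   // Point 2: read in a loop
            std::fprintf(stderr, "short read or I/O error\n");
            return 1;
        }
        std::printf("read %zu bytes\n", buf.size());
        ::close(fd);
        return 0;
    }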

I hope I have made the problem clear. The temporary workaround is to split the file into smaller pieces, as you already did. If you can, it would be best to file this issue on MongoDB's Jira, ideally with a brief description of the problem in English. The kernel engineers will fix it in a future release, and you can track the issue there. If that's inconvenient for you, I can file it on your behalf.

An even better route: after filing the report on Jira, fork the MongoDB code on GitHub, make the fix, and submit a pull request. The kernel engineers will review the code and eventually merge it into the code base. Because this problem is relatively rare, self-contained, and well understood, it is a good candidate for a small first pull request. Then MongoDB users all over the world would be running code you wrote. With so many Chinese MongoDB users, I think we are well able to contribute back to the MongoDB community. If you have any questions, please let me know.

P.S. I am curious why you need a 2.3 GB script. If you can tell me what you are trying to do, there may be a more elegant solution.