At present, log data accounts for about 70% of our storage, so we have to keep expanding the database's capacity. We store it in MongoDB, and there are currently hundreds of millions of records.
The workload is write-heavy and read-light.
Checking the slow-query log, writes now take about 300 milliseconds, and all of them are inserts into the log collections.
There is no big problem for the time being, but the volume will grow to hundreds of millions, even billions, of records.
Should I move the logs into a separate database, or is there a better storage scheme?
It depends on the scenario:
- Do the logs need to be analyzed in real time?
If real-time analysis is not required, you can write them to files in a fixed format and analyze them offline.
- Do you need to keep all the logs?
If you only need recent logs rather than the full history, you can delete logs older than a certain cutoff.
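For the file-based option, here is a minimal sketch (the function name, directory layout, and record fields are all illustrative, not from the original) that appends each log event as one JSON object per line into a per-day file, a fixed format that offline jobs can parse line by line:

```python
import datetime
import json
import os


def write_log(event: dict, log_dir: str = "logs") -> str:
    """Append one event as a JSON line to today's log file.

    The 'app-YYYY-MM-DD.log' naming and the 'ts' field are an assumed
    convention; any fixed, line-oriented format works for offline analysis.
    """
    os.makedirs(log_dir, exist_ok=True)
    day = datetime.date.today().isoformat()
    path = os.path.join(log_dir, f"app-{day}.log")
    record = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        **event,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
    return path
```

Because each line is an independent JSON document, an offline analyzer can stream the file with `json.loads` per line, and daily files are easy to compress or archive.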
A common practice now is to keep recent logs in MongoDB and move long-term logs to a big-data platform.
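The cutoff-deletion idea can be sketched without a database: compute a retention cutoff and keep only newer entries. The function and field names below are illustrative; in MongoDB the same effect comes from a `delete_many` filter on the timestamp field, or automatically from a TTL index created with the `expireAfterSeconds` option.

```python
import datetime


def prune_old_logs(logs, retention_days: int = 30, now=None):
    """Keep only log entries newer than the retention cutoff.

    Each entry is assumed to be a dict with a 'ts' datetime field
    (an illustrative schema, not the poster's actual one).
    """
    now = now or datetime.datetime.now(datetime.timezone.utc)
    cutoff = now - datetime.timedelta(days=retention_days)
    return [entry for entry in logs if entry["ts"] >= cutoff]
```

Deleting in the application like this gives you control over batching; a TTL index trades that control for zero maintenance, since MongoDB's background task removes expired documents on its own.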