When inserting rows into a table, most inserts complete very fast, in about 0-1 ms, but some take as long as about 10 seconds.
At the moment I can't pinpoint a specific cause and have no leads.
An Aliyun instance shouldn't hit a bottleneck this quickly on operations at this level.
My table holds about 10 million rows and is read, written, and deleted frequently. It's all lightweight temporary data, so it may be worth considering Redis for storage instead.
About 10 servers connect to and operate on the database at the same time.
In the end I switched to Redis for storage, and the delays disappeared.
However, Redis storage space is more precious, so whether to trade space for speed or speed for space is a real question.
After a week of digging, the problem was finally tracked down. The real cause of the performance drop was a query that accidentally matched 10,000 records, which together added up to several hundred MB. With many such queries running, you can imagine the load.
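As a rough sanity check on those numbers (the per-row size below is an assumption; the post only states that 10,000 matched rows totaled several hundred MB), the arithmetic looks like this:

```python
# Back-of-envelope estimate of the accidental result set.
# AVG_ROW_BYTES is an assumed figure chosen so the total lands
# in the "several hundred MB" range the post describes.
AVG_ROW_BYTES = 30 * 1024      # assume ~30 KB per matched row
MATCHED_ROWS = 10_000

total_bytes = MATCHED_ROWS * AVG_ROW_BYTES
total_mb = total_bytes / (1024 * 1024)
print(f"one query transfers ~{total_mb:.0f} MB")  # ~293 MB

# With several such queries running concurrently, the database has
# to read, buffer, and ship gigabytes at once, which can easily
# stall inserts that are otherwise sub-millisecond.
```

A query like this is exactly the kind that a `LIMIT` clause or a pre-deployment `EXPLAIN` check would have caught early.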
After another day or two, Redis was full and had to be swapped back out. Redis memory is expensive; better not to dump large amounts of data into it casually.
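For scale, here is a rough estimate of why Redis filled up so quickly (the per-entry payload and overhead figures are assumptions for illustration, not measured values):

```python
# Rough Redis RAM estimate for the same 10-million-row data set.
# VALUE_BYTES and OVERHEAD_BYTES are assumed for illustration;
# actual overhead depends on key length and encoding.
ENTRIES = 10_000_000
VALUE_BYTES = 200        # assume ~200 bytes of payload per entry
OVERHEAD_BYTES = 90      # assume ~90 bytes of key + bookkeeping

total_gb = ENTRIES * (VALUE_BYTES + OVERHEAD_BYTES) / (1024 ** 3)
print(f"~{total_gb:.1f} GB of RAM just for this one table")
```

Even with modest per-entry sizes, 10 million keys land in the multi-gigabyte range, which is why a memory-backed store fills up far sooner than disk-backed MySQL storage would.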