Logic of the program:
The program generates lottery codes in advance and stores them in MySQL. When a user arrives, it finds an unused lottery code and marks that code as used.
When a large number of users made concurrent requests, most users were returned the same lottery code.
The reason is probably as follows:
select codeid, codevalue from tb_code where isused=0 limit 1
The SQL above finds an unused lottery code.
update tb_code set isused=1 where codeid=?
Then the SQL above marks that code (the codeid just selected) as used.
Under high concurrency this is a race: for example, user A selects the code XYBV first, but before A's update has run, user B's select finds the same code value.
The code is only returned to the user after the update.
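The race can be reproduced with a tiny in-memory simulation (a hypothetical stand-in for the tb_code table, not the actual program): both users run the SELECT step before either UPDATE executes, so both receive the same code.

```python
# Deterministic interleaving of two users hitting the select-then-update flow.
codes = [{"codeid": 1, "codevalue": "XYBV", "isused": 0}]

def select_unused():
    # SELECT codeid, codevalue FROM tb_code WHERE isused=0 LIMIT 1
    return next((r for r in codes if r["isused"] == 0), None)

def mark_used(codeid):
    # UPDATE tb_code SET isused=1 WHERE codeid=?
    for r in codes:
        if r["codeid"] == codeid:
            r["isused"] = 1

# User A selects the code...
row_a = select_unused()
# ...but before A's UPDATE runs, user B's SELECT sees the same row:
row_b = select_unused()
mark_used(row_a["codeid"])
mark_used(row_b["codeid"])

print(row_a["codevalue"], row_b["codevalue"])  # both users get "XYBV"
```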
Later, the update statement was modified as follows:
update tb_code set isused=1 where isused=0 and codeid=?
Whether the update succeeded is determined from affectedRows: if affectedRows is 1 the code is returned, otherwise null is returned.
This no longer returns duplicate values, but some users receive no code at all.
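The modified statement amounts to a compare-and-set: the WHERE isused=0 condition guarantees that only one caller's UPDATE reports affectedRows=1 for a given row. A minimal in-memory sketch (illustrative names, not the actual program):

```python
# Two users race on the same row; the conditional UPDATE lets only one win.
codes = [{"codeid": 1, "codevalue": "XYBV", "isused": 0}]

def select_unused():
    # SELECT codeid, codevalue FROM tb_code WHERE isused=0 LIMIT 1
    return next((r for r in codes if r["isused"] == 0), None)

def mark_used_if_unused(codeid):
    # UPDATE tb_code SET isused=1 WHERE isused=0 AND codeid=?
    # Returns affectedRows: 1 if this caller flipped the flag, else 0.
    affected = 0
    for r in codes:
        if r["codeid"] == codeid and r["isused"] == 0:
            r["isused"] = 1
            affected += 1
    return affected

# Both users select the same row before either updates:
row_a = select_unused()
row_b = select_unused()

result_a = row_a["codevalue"] if mark_used_if_unused(row_a["codeid"]) == 1 else None
result_b = row_b["codevalue"] if mark_used_if_unused(row_b["codeid"]) == 1 else None

print(result_a, result_b)  # one user gets "XYBV", the other gets None
```

This shows why duplicates disappear but some users walk away empty-handed: the loser of the race gets affectedRows=0 even though other unused codes may still exist.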
After more searching, the approach was changed to claim a code with a single direct update, then look up the just-claimed row by a per-request random number (randomno), as in the SQL below:
update tb_code set isused=1, randomno='<randomno>' where codeid in (select codeid from (select codeid from tb_code where isused=0 limit 1) as arbitraryTableName);
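The intent of that statement can be sketched in-memory (illustrative stand-in, not the actual program): stamp one unused row with a per-request token in a single update, then read the code back by that token.

```python
# Claim-by-token pattern: the token ties the updated row back to this request.
import uuid

codes = [{"codeid": 1, "codevalue": "XYBV", "isused": 0, "randomno": None}]

def claim_by_token():
    token = uuid.uuid4().hex  # unique per request
    # UPDATE tb_code SET isused=1, randomno='<token>'
    #   WHERE codeid IN (SELECT ... WHERE isused=0 LIMIT 1)
    row = next((r for r in codes if r["isused"] == 0), None)
    if row is not None:
        row["isused"], row["randomno"] = 1, token
    # SELECT codevalue FROM tb_code WHERE randomno='<token>'
    hit = next((r for r in codes if r["randomno"] == token), None)
    return hit["codevalue"] if hit else None

first = claim_by_token()   # "XYBV"
second = claim_by_token()  # None: no unused codes left
print(first, second)
```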
Local testing showed no problem at first, so I thought it was solved. After going live, though, the logs showed many requests going unanswered for a long time, presumably because the SQL statement above is inefficient: every request has to run the subquery scanning for an unused row.
Finally, we changed the approach to caching codes in Redis.
First, take 1000 codes from the database and LPUSH them into a Redis list; then, when a user request comes in, RPOP one directly from Redis.
A timed task then monitors the number of codes left in Redis; if it falls below a set threshold, it LPUSHes another 1000 from the database into Redis.
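The Redis flow above can be sketched with a deque standing in for the Redis list (in production these would be real client calls such as redis-py's `r.lpush("codes", code)` and `r.rpop("codes")`; names here are illustrative). LPUSH appends on the left and RPOP pops from the right, so codes are handed out first-in first-out, and RPOP's atomicity is what prevents duplicates.

```python
# In-memory stand-in for the Redis list; thresholds are illustrative.
from collections import deque

REFILL_THRESHOLD = 200
BATCH_SIZE = 1000

queue = deque()                               # stand-in for the Redis list
db_codes = [f"CODE{i}" for i in range(3000)]  # pre-generated codes in MySQL

def refill():
    # Timed task: if the queue runs low, move the next batch out of the DB.
    if len(queue) < REFILL_THRESHOLD:
        batch, db_codes[:] = db_codes[:BATCH_SIZE], db_codes[BATCH_SIZE:]
        for code in batch:
            queue.appendleft(code)            # LPUSH codes <code>

def claim():
    # Per-request path: RPOP is a single atomic operation in Redis.
    return queue.pop() if queue else None     # RPOP codes

refill()
first = claim()
print(first, len(queue))  # "CODE0" handed out, 999 codes left cached
```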
The project has now been running for several hours with no request timeouts and no duplicate code values so far.
However, I feel that while this method works, it is not the best.
Do you have any best practices for similar problems, e.g. around database design and program structure?
You can generate all the codes in advance and load them into Redis up front, so you don't have to keep topping it up 1,000 at a time.
You could also use MongoDB and its atomic find-and-modify style updates; its concurrency performance is quite strong.