Level: a beginner in PHP and MySQL.
Data volume: about 300,000 rows, not more than 500,000.
That is the kind of site, and that is my skill level, and a hard problem has had me stuck for the past few days.
Some background: the site used to be built with ASP and MSSQL, and it ran fine on a hosted server. One day I decided to move the database to Alibaba Cloud, checked the prices, and found that MySQL was nearly ten times cheaper than MSSQL. Only then did I understand why open source software is so popular. So I switched databases and, while I was at it, finally learned PHP, which I had wanted to do for a long time. I installed MySQL and got to work. I will skip the pain along the way: the rewrite went smoothly, and netizens kindly answered my beginner questions.

But yesterday, while rewriting the original MSSQL full-text search statements as MySQL ones, I discovered by accident that MySQL does not support Chinese full-text indexing. I was extremely frustrated. What now? Such a good database with such an awkward gap. I figured that for such a mature database, with full-text search being such a common feature, ready-made solutions must exist online. So I started searching Baidu, only to find it hard to locate any decent material that solves the problem cleanly; many of the write-ups are so obscure that someone at my level can hardly read them, let alone follow them. I did try one of them, the Sphinx/CoreSeek 4.1 approach, but I could not even get through installation and debugging, and there is very little documentation. Sad.
So I just want to ask: does nobody else need to build and use a full-text index for Chinese fuzzy matching? Isn't this a very common requirement?
Or, please help me analyze this: for a database of about 300,000 rows with 10 fields in total, where 5 of them need fuzzy matching, is it faster to build a Chinese full-text index with Sphinx/CoreSeek, or to use LIKE with percent wildcards around the Chinese keyword directly? And how big is the difference?
I have no experience in this area, so I keep going back and forth. If there is no significant difference between the two, then building a Sphinx/CoreSeek environment just for this seems like overkill, especially since it is so hard to set up. I am really frustrated and do not know what to do. Please help. Thank you all.
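For reference, here is a sketch of the LIKE approach being compared (table and column names are hypothetical). The key performance point is that a leading `%` wildcard prevents MySQL from using an ordinary B-tree index, so every such query scans all of the roughly 300,000 rows:

```sql
-- Hypothetical schema: an `articles` table with `title` and `content`
-- among its searchable columns.
-- A leading-wildcard LIKE cannot use a B-tree index, so this query
-- performs a full table scan on every search.
SELECT id, title
FROM articles
WHERE title   LIKE '%关键词%'
   OR content LIKE '%关键词%';
```

At 300,000 rows this may still be tolerable (often well under a second on modern hardware), but it scales linearly with table size and with the number of concurrent searches, which is why external indexers like Sphinx/CoreSeek exist.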
MySQL 5.7 supports Chinese word-segmentation full-text indexing.
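Concretely, MySQL 5.7.6 and later ship a built-in `ngram` full-text parser that tokenizes CJK text, so no external Sphinx/CoreSeek setup is needed. A minimal sketch, again using the hypothetical `articles` table:

```sql
-- Add a full-text index over the searchable columns using the
-- built-in ngram parser (MySQL 5.7.6+), which segments CJK text
-- into n-grams (default size 2, configurable via ngram_token_size).
ALTER TABLE articles
  ADD FULLTEXT INDEX ft_zh (title, content) WITH PARSER ngram;

-- Query through the index instead of LIKE; this uses the full-text
-- index rather than scanning every row.
SELECT id, title
FROM articles
WHERE MATCH(title, content) AGAINST('关键词' IN NATURAL LANGUAGE MODE);
```

Note that a `FULLTEXT` `MATCH ... AGAINST` query can only use columns covered by a single full-text index, so all five fuzzy-matched columns would go into one index definition.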