Practice of Agile AI | NLP Technology in Yixin Business [Intelligent Chat Robot]

  Artificial intelligence, Big data

A note up front: in the previous article, Practice of Agile AI | NLP Technology in Yixin Business [Background], we gave a general overview of the development of NLP technology. Next, we introduce an advanced application scenario of NLP technology at Yixin: the intelligent chat robot. Please read on ~

Author: Jing Yuxin. He holds a doctoral degree in EECS; his research interests include computer software and theory and logical reasoning. He currently works at the Yixin Technology Research and Development Center, where he is engaged in research on artificial intelligence, machine learning, natural language processing and knowledge engineering.

## Advanced Scenario: An Intelligent Chat Robot

We have already introduced the evolution of NLP technology, data and services. Next, I will share some implementation experience in the NLP field through two concrete examples. Today, we look at how NLP technology and an intelligent chat robot can be used to handle the large volume of daily business consulting questions faced by the organization.


Figure 1

For modern enterprises, intelligent chat robots have a very wide range of business uses. Externally, there are the familiar customer-service robots, intelligent investment advisors, and so on; internally, there are business support robots, operations and maintenance robots, and personal assistants.

This example is a credit business consulting robot for the enterprise, i.e. a QA bot. Its business background is as follows: at present, Yixin’s Universal Services has 500+ offline stores nationwide, with 600+ sales department heads, 3,000+ business specialists and 20,000+ front-line salespeople.

Every day, these front-line colleagues raise a large number of business consulting questions in the course of their work. In the past, these questions were handled manually over IM by back-office support colleagues. The work was tedious, costly and inefficient. It was also impossible to gather meaningful statistics on the questions, so nobody knew which questions were asked most frequently, which in turn made targeted training impossible. In the long run, this is conducive neither to business development nor to team development.

To resolve this dilemma, we developed a QA-based question-answering robot to support this work, turning the manual process into automatic processing and thus providing a comprehensive, round-the-clock (7x24) support mechanism.

For a question-answering robot, the core of the task is essentially a retrieval-based question-answering model. We can define it semi-formally as follows:

Given a user question Qx and an existing QA database of question-answer pairs (Q1, A1), (Q2, A2), …, (Qn, An), find the pair (Qk, Ak) that maximizes the value of F(R(Qx), R(Qk)), where F is a semantic similarity function and R is a text representation function.

This definition means that we want to find, among all QA pairs, the question most similar to the user’s question; its corresponding answer is then the most appropriate one to return to the user.
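In code, this retrieval formulation is simply an argmax over the QA pairs. Below is a minimal Python sketch; `represent` and `similarity` stand in for R and F and are placeholders, not Yixin's actual implementation.

```python
from typing import Callable, List, Tuple

def answer(
    user_question: str,
    qa_pairs: List[Tuple[str, str]],               # [(Q1, A1), ..., (Qn, An)]
    represent: Callable[[str], object],            # R: text -> representation
    similarity: Callable[[object, object], float]  # F: semantic similarity
) -> str:
    """Return the answer A_k whose question Q_k maximizes F(R(Q_x), R(Q_k))."""
    rx = represent(user_question)
    best_answer, best_score = None, float("-inf")
    for qk, ak in qa_pairs:
        score = similarity(rx, represent(qk))
        if score > best_score:
            best_answer, best_score = ak, score
    return best_answer
```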

The core problem is computing the semantic similarity of texts, that is, the similarity between the texts of two questions. There are many ways to tackle this. For example, we can directly construct a dual-LSTM neural network, feeding the user’s query in on one side and the question from the knowledge-base QA pair on the other. Given a sufficient corpus, we can train such a model with an RNN, CNN or fully connected network, and the output probability is the similarity of the two input questions, as shown in Fig. 2.


Figure 2
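A hedged PyTorch sketch of such a dual-LSTM similarity network follows; the shared encoder, layer sizes and classifier head are illustrative assumptions rather than the exact architecture in Fig. 2.

```python
import torch
import torch.nn as nn

class DualLSTMSimilarity(nn.Module):
    """Encode two questions with an LSTM and predict their similarity."""
    def __init__(self, vocab_size: int, embed_dim: int = 128, hidden_dim: int = 256):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        # A single shared encoder (siamese setup); two separate encoders,
        # as drawn in some dual-LSTM diagrams, would also work.
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Sequential(
            nn.Linear(hidden_dim * 2, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def encode(self, token_ids: torch.Tensor) -> torch.Tensor:
        _, (h_n, _) = self.encoder(self.embedding(token_ids))
        return h_n[-1]                                 # final hidden state as sentence vector

    def forward(self, query_ids: torch.Tensor, question_ids: torch.Tensor) -> torch.Tensor:
        q1, q2 = self.encode(query_ids), self.encode(question_ids)
        logits = self.classifier(torch.cat([q1, q2], dim=-1))
        return torch.sigmoid(logits).squeeze(-1)       # similarity in [0, 1]
```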

However, in most cases we face the problem of insufficient samples; especially in a fast-iterating R&D environment, we usually cannot collect enough corpus. Therefore, we often split the similarity problem into two sub-problems: semantic representation of short texts and semantic distance calculation. The former is the more important of the two. Once the questions we are dealing with have a reasonable semantic representation, we can compute the semantic distance or similarity between two representations with a simple cosine distance, a fully connected network, and so on.
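For instance, once each question has been mapped to a vector by whatever representation method is chosen, the distance step can be as simple as a cosine score (a small NumPy sketch):

```python
import numpy as np

def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two sentence representation vectors."""
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(np.dot(u, v) / denom) if denom > 0 else 0.0
```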

How, then, can we accurately represent the meaning of a short text?

There are many methods here as well, such as the classical bag-of-words model, unsupervised representations (word-vector weighting, Doc2Vec, Skip-Thought, variational auto-encoders) and supervised representations (DSSM, transfer learning), among others.

However, we must keep in mind the constraints mentioned earlier: we could only rely on a small-scale corpus and a limited set of QA pairs, and the business required fast delivery and fast iteration. Therefore, in the early implementation stage we preferred the approach of "bag-of-words model + synonym expansion + TF-IDF weighting". Using the synonyms of business terms and common words accumulated earlier, a question can be restated in several different ways, which raises the probability that a user’s question hits the limited QA database. Once the representation vector of a short text has been constructed in this way, the semantic similarity score between two texts can be obtained with a standard similarity calculation method.
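A simplified sketch of this "bag-of-words + synonym expansion + TF-IDF" approach is shown below, using scikit-learn over pre-tokenized questions; the synonym dictionary and the specific use of `TfidfVectorizer` are illustrative assumptions, not the production pipeline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical synonym dictionary accumulated from business terms.
SYNONYMS = {"limit": ["quota", "cap"], "rate": ["interest"]}

def expand(tokens):
    """Append known synonyms so paraphrased questions can still match."""
    expanded = list(tokens)
    for t in tokens:
        expanded.extend(SYNONYMS.get(t, []))
    return expanded

def build_index(tokenized_questions):
    """Fit a TF-IDF bag-of-words index over the (already tokenized) KB questions."""
    vectorizer = TfidfVectorizer(analyzer=expand)
    matrix = vectorizer.fit_transform(tokenized_questions)
    return vectorizer, matrix

def most_similar(query_tokens, vectorizer, matrix):
    """Return (index, score) of the KB question closest to the user query."""
    q_vec = vectorizer.transform([query_tokens])
    scores = cosine_similarity(q_vec, matrix)[0]
    best = scores.argmax()
    return int(best), float(scores[best])
```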

The biggest advantage of this approach is speed: with a small-scale corpus, we can launch a first version of a Q&A robot in a new domain within 1-2 weeks, with very good results.


Figure 3

Of course, this scheme is by no means the end point. Its most important function is to get an initial version of the model online quickly. With this model in place, we can collect the real questions raised by users, accumulate more question data on a rolling basis, and continuously feed it back into the corpus and the QA database, which in turn provides the basis for training more complex models. Moreover, as the number of pairs in the QA database grows, we can answer more types of questions.

Once a certain corpus foundation was in place, we built a more complex neural network model. Here we adopted the idea of the classic paper "Universal Language Model Fine-tuning for Text Classification" (J. Howard and S. Ruder): first train a language model on a general corpus, then fine-tune it on the domain corpus, and finally transfer it to the final target task. The paper also provides a number of tuning and optimization techniques.


Figure 4

Following this idea, the project was implemented as follows: a language model was trained on a Wiki corpus, fine-tuned on the domain corpus, and then transferred to the corresponding similarity-calculation network. In the end, good test results were obtained: in the returned answer list, the probability that the correct answer appears in first place is 88%, and the probability that it appears in the top three is 94%. Overall, this is a good result.
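The sketch below illustrates this transfer structure in PyTorch under stated assumptions: an LSTM language model is pretrained on Wikipedia-style text and fine-tuned on the domain corpus elsewhere, and its embedding and encoder are then reused in a similarity network like the dual-LSTM shown earlier. ULMFiT details such as discriminative learning rates, slanted triangular schedules and gradual unfreezing are omitted.

```python
import torch
import torch.nn as nn

class LSTMLanguageModel(nn.Module):
    """Stages 1-2: an LSTM language model, pretrained on general (Wiki-style)
    text and then fine-tuned on the in-domain credit-business corpus."""
    def __init__(self, vocab_size: int, embed_dim: int = 300, hidden_dim: int = 512):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, num_layers=2, batch_first=True)
        self.decoder = nn.Linear(hidden_dim, vocab_size)

    def forward(self, token_ids):
        hidden, _ = self.lstm(self.embedding(token_ids))
        return self.decoder(hidden)              # next-token logits

class SimilarityHead(nn.Module):
    """Stage 3: transfer the fine-tuned encoder into the QQ similarity task."""
    def __init__(self, language_model: LSTMLanguageModel, hidden_dim: int = 512):
        super().__init__()
        self.embedding = language_model.embedding    # reuse pretrained weights
        self.lstm = language_model.lstm
        self.scorer = nn.Linear(hidden_dim * 2, 1)

    def encode(self, token_ids):
        _, (h_n, _) = self.lstm(self.embedding(token_ids))
        return h_n[-1]

    def forward(self, query_ids, question_ids):
        pair = torch.cat([self.encode(query_ids), self.encode(question_ids)], dim=-1)
        return torch.sigmoid(self.scorer(pair)).squeeze(-1)
```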

Of course, for a Q&A robot, QQ similarity calculation is only one of the more important steps; many other models need to be integrated to improve answering accuracy. For example, a QA matching model computes the degree of match between the user’s question and the answers of all questions in the knowledge base. With the QQ similarity computation in place, we can use the same idea to build the QA matching model and output a QA matching score. Finally, the QQ similarity score and the QA matching score are combined with weights and the candidates re-ranked to obtain the final answer list, which is returned to the user.
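The final re-ranking step can be as simple as a weighted combination of the two scores; the weights below are illustrative, not the values used in production.

```python
def rerank(candidates, qq_weight: float = 0.6, qa_weight: float = 0.4):
    """Combine QQ similarity and QA matching scores and re-sort the answer list.

    `candidates` is a list of dicts like
    {"answer": ..., "qq_score": float, "qa_score": float}.
    """
    for c in candidates:
        c["final_score"] = qq_weight * c["qq_score"] + qa_weight * c["qa_score"]
    return sorted(candidates, key=lambda c: c["final_score"], reverse=True)
```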

In addition, there is another direction of expansion: handling the case where a user’s question fails to hit the QA database. The QA database consists of question-answer pairs extracted manually or generated automatically; its size is limited, so it cannot fully cover every question users may ask. An effective way to expand the robot’s capability is to broaden its retrieval data sources (see Fig. 5), extending the scope of retrieval from QA pairs to third-party API query interfaces, databases, knowledge graphs, documents and other channels of knowledge.


Figure 5

In our project, we implemented a fallback scheme of "document retrieval + key information extraction" to provide users with an answer whenever possible in cases where the QA database cannot cover the question.
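A minimal sketch of this fallback logic is shown below; `qa_search`, `doc_search`, `extract_key_info` and the confidence threshold are placeholders standing in for the actual components.

```python
CONFIDENCE_THRESHOLD = 0.75   # illustrative cut-off, not the production value

def answer_with_fallback(question, qa_search, doc_search, extract_key_info):
    """Try the QA knowledge base first; fall back to document retrieval."""
    answer, score = qa_search(question)
    if score >= CONFIDENCE_THRESHOLD:
        return {"source": "qa_base", "answer": answer, "score": score}
    # No confident hit in the QA base: search documents and extract key passages.
    documents = doc_search(question, top_k=3)
    snippets = [extract_key_info(question, doc) for doc in documents]
    return {"source": "documents", "answer": snippets, "score": score}
```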

Of course, we can also query third-party APIs or a knowledge graph through slot-value extraction, entity and relation recognition and other methods to answer some questions. However, this approach is somewhat more complicated and generally requires session support. There are dedicated articles and reports on this topic, so it will not be repeated here.

Speaking of multi-turn conversation, though, we can also use this technology to solve another problem: how to handle vague questions. In practice, some of the questions users raise are very vague and cannot be matched to an accurate answer, which often degrades the effectiveness of the system. For example, some user questions are very short, only two or three words, which obviously makes it difficult to retrieve an accurate answer from the QA database.


Figure 6

Fig. 6 shows a categorization of robots found in some of the literature: dialogue robots are divided into QA robots and conversational robots. QA robots retrieve answers from structured and unstructured data. Conversational robots, on the other hand, usually need multiple rounds of interaction with the user on a single issue, capturing the user’s intention and giving a corresponding response; examples include chit-chat robots, task-oriented robots and recommendation robots.

We believe that QA robots will gradually introduce the notion of conversation. For vague questions raised by users, we can make full use of methods such as dialogue state analysis, dialogue state management and key information identification to determine what the user’s intention is and what information is missing, and then use text generation or follow-up questions to ask the user to supply the missing information. In this way, with sufficient information, the robot can find more accurate results.
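As an illustration, a clarification policy of this kind might look roughly like the following; the slot names, threshold and prompt wording are hypothetical, not taken from the production system.

```python
REQUIRED_SLOTS = ("product", "city")   # hypothetical slots for a credit enquiry

def next_action(question, slots, retrieval_score, min_score=0.6):
    """Decide whether to answer directly or ask the user for missing details.

    `slots` holds whatever the dialogue-state tracker has filled so far.
    """
    missing = [s for s in REQUIRED_SLOTS if not slots.get(s)]
    if len(question.split()) <= 3 or (retrieval_score < min_score and missing):
        return {"action": "clarify",
                "prompt": f"Could you tell me the {missing[0]} you are asking about?"}
    return {"action": "answer"}
```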


Figure 7 Main Flow of Robot Processing

Fig. 7 shows the processing flow of the robot, which is divided into four main stages: preprocessing, analysis and classification, retrieval and matching, and comprehensive ranking. The technologies involved in each stage are enumerated in the figure. Earlier in this article, we mainly introduced tasks such as QQ retrieval and QA matching.
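As a skeleton, the four stages can be chained roughly as follows, with each stage left as a placeholder callable rather than the actual implementation:

```python
def handle_message(raw_text, preprocess, analyze, retrieve_and_match, rank):
    """Skeleton of the four-stage flow in Fig. 7: preprocessing, analysis and
    classification, retrieval and matching (QQ retrieval + QA matching), and
    comprehensive ranking."""
    query = preprocess(raw_text)                     # cleaning, tokenisation
    analysis = analyze(query)                        # intent / question classification
    candidates = retrieve_and_match(query, analysis) # candidate QA pairs with scores
    return rank(candidates)                          # weighted re-ranking, final list
```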

In addition, for this relatively advanced chat robot scenario, we also provide platform-based management (see Fig. 8). The architecture essentially wraps the scenario on top of the underlying natural language processing platform and adds: a chat robot module (Web/APP integration, dialogue management, manual back-office, external API docking, etc.); a QA library management module (data management, implementation and publishing, etc.); a knowledge base management module (batch import, content management, corpus generation, etc.); and a very important statistics module (statistical mining, report display, etc.).


Figure 8 Platform-based Advanced Scene Management

By packaging and integrating the various functions in this scenario, we provide a one-stop solution in the form of a platform. With only a small amount of data, users can quickly build their own business question-answering robot without having to understand the underlying models.

Figs. 9 to 11 are screenshots of the robot in operation. Fig. 9 shows the web version of the robot’s interaction interface; as can be seen, the robot’s answers take several forms, including precise answers, similar questions, and content retrieved from the document library.


Figure 9 Robot Display Effect

Fig. 10 shows the session retrieval function in the back-office management interface, where sessions between the robot and system users can be easily browsed, the robot’s effectiveness can be evaluated, and new questions that arise during sessions but are not yet recorded in the QA library can be identified and quickly added to it.


Figure 10 Background Management-Session Retrieval Page

Fig. 11 shows the model management module, which lists the various models involved in the robot; each model is followed by operation buttons that support a series of control operations such as bringing online, updating, restarting and stopping.


Figure 11 Background Management-Model Management Page

The above is one application scenario of NLP technology at Yixin: the intelligent chat robot. In the next article, we will introduce another application scenario: building customer profiles. Stay tuned ~

Yixin Institute of Technology