A summary of ElasticSearch practice in a pre-loan system.

  Tags: elasticsearch, index

The pre-loan system is responsible for all business processes from application intake through pre-loan approval, which involves comprehensive queries over large data volumes with diverse and complex conditions. ElasticSearch was introduced mainly to improve query efficiency, with the hope of quickly building a simple data warehouse on top of it and providing some OLAP-related functions. This article presents the pre-loan system's practical experience with ElasticSearch.

I. Indexes

Description: A data structure designed to quickly locate data.

An index is like the table of contents at the front of a book: it speeds up queries against a database. Understanding the structure and use of indexes is very helpful for understanding how ES works.

Common indexes:

  • Bitmap index
  • Hash index
  • BTree index
  • Inverted index

1.1 Bitmap Index

A bitmap index is applicable when a field's values form a small, enumerable set.

A bitmap index uses a string of binary digits (a bitmap) to mark whether data exists: 1 means data exists at the current position (serial number), 0 means there is no data at that position.

The following figure 1 is a user table, which stores two fields of gender and marital status.

In fig. 2, two bitmap indexes are respectively established for gender and marital status.

For example, the bitmap for gender → male is 101110011, meaning users 1, 3, 4, 5, 8 and 9 are male. The other attributes work the same way.

Query using bitmap index:

  • Male AND married records = 101110011 & 110100010 = 100100010, i.e. users 1, 4 and 8 are married men.
  • Female OR unmarried records = 010001100 | 001011101 = 011011101, i.e. users 2, 3, 5, 6, 7 and 9 are female or unmarried.
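The two queries above are plain bitwise operations over integers. A minimal sketch in Python, using the example's nine users with the leftmost bit as user 1 (the married bitmap 110100010 is an assumption consistent with the stated results, since the figures are not reproduced here):

```python
def to_int(bits: str) -> int:
    """Parse a bitmap string; leftmost bit = user 1."""
    return int(bits, 2)

def users(bitmap: int, n: int) -> list:
    """Return 1-based user positions whose bit is set."""
    return [i + 1 for i in range(n) if bitmap & (1 << (n - 1 - i))]

N = 9
male      = to_int("101110011")
married   = to_int("110100010")
female    = male ^ (2**N - 1)       # complement within 9 bits
unmarried = married ^ (2**N - 1)

print(users(male & married, N))      # [1, 4, 8]: married men
print(users(female | unmarried, N))  # [2, 3, 5, 6, 7, 9]: female or unmarried
```

The AND/OR over whole bitmaps is why bitmap indexes excel at combining many low-cardinality predicates.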

1.2 Hash Index

As the name implies, a hash index is an index structure that implements a key→value mapping using some hash function.

Hash indexes are suitable for equality lookups: the location of the data is found with a single hash computation.

Figure 3 below shows the structure of a hash index. Similar to Java's HashMap implementation, hash collisions are resolved with a conflict (collision) table.
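A minimal sketch of that structure (a toy illustration, not ES internals): a fixed-size bucket array where colliding keys are chained in a per-bucket list, mirroring HashMap-style conflict resolution.

```python
class HashIndex:
    """Toy hash index: key -> row id, collisions resolved by chaining."""

    def __init__(self, buckets: int = 8):
        self.table = [[] for _ in range(buckets)]

    def put(self, key, row_id):
        bucket = self.table[hash(key) % len(self.table)]
        for entry in bucket:
            if entry[0] == key:        # key already present: overwrite
                entry[1] = row_id
                return
        bucket.append([key, row_id])   # new key chained into the bucket

    def get(self, key):
        bucket = self.table[hash(key) % len(self.table)]
        for k, row_id in bucket:
            if k == key:
                return row_id
        return None                    # miss: return empty directly

idx = HashIndex()
idx.put("user_42", 7)
idx.put("user_99", 13)
print(idx.get("user_42"))   # 7
print(idx.get("missing"))   # None
```

One hash computation narrows the search to a single bucket, which is why hash indexes shine for equality queries but cannot serve range scans.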

1.3 BTree Index

The BTree index is the most commonly used index structure in relational databases and speeds up data query operations.

BTree: an ordered, balanced N-ary tree; a node holding N key values has N+1 pointers to N+1 child nodes.

The simple structure of a BTree is shown in Figure 4 below: a 2-level, 3-way tree holding 7 pieces of data.

Taking MySQL's most commonly used storage engine, InnoDB, as an example, the application of BTree indexes is described below.

Tables under InnoDB are stored as index-organized tables; that is, the entire data table is stored as one B+tree, as shown in Figure 5.

The primary key index is the left half of fig. 5 (if no primary key is explicitly defined, a non-null unique index is used as the clustered index; if there is no such unique index either, InnoDB automatically generates a hidden 6-byte primary key as the clustered index), and its leaf nodes store the complete row (stored as primary key + row_data).

The secondary index is also stored as a B+tree, the right half of fig. 5. Unlike the primary key index, its leaf nodes store not row data but the index key value plus the corresponding primary key value. It follows that a secondary-index query takes one extra step: looking up the row by its primary key.

To keep an N-ary tree ordered and balanced, node positions must be adjusted on insertion, which is expensive, especially when inserts arrive in random order. With ordered inserts, adjustments touch only one part of the tree, so the impact is smaller and the efficiency higher.

You can refer to the node-insertion algorithm for red-black trees:

https://en.wikipedia.org/wiki …

Therefore, if an InnoDB table has an auto-increment primary key, data is written in order and efficiency is very high. If it does not, inserting random primary key values causes a large number of B+tree reorganizations, which is inefficient. This is why InnoDB tables are advised to have an auto-increment primary key, whether or not it carries business meaning: it greatly improves insert efficiency.

Note:

  • MySQL InnoDB achieves high insert efficiency with auto-increment primary keys.
  • With a Snowflake-like ID generation algorithm, the generated IDs are trend-increasing and insert efficiency is also relatively high.
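To illustrate the second point, here is a simplified sketch of a Snowflake-style generator (assumed layout: millisecond timestamp | 10-bit worker id | 12-bit per-millisecond sequence; the real algorithm additionally subtracts a custom epoch and handles clock rollback):

```python
import time

class SnowflakeSketch:
    """Trend-increasing IDs: (timestamp_ms << 22) | (worker << 12) | seq."""

    def __init__(self, worker_id: int):
        self.worker_id = worker_id & 0x3FF   # 10 bits
        self.last_ms = -1
        self.seq = 0

    def next_id(self) -> int:
        ms = int(time.time() * 1000)
        if ms == self.last_ms:
            self.seq = (self.seq + 1) & 0xFFF   # 12-bit sequence
            if self.seq == 0:                   # sequence exhausted: wait for next ms
                while ms <= self.last_ms:
                    ms = int(time.time() * 1000)
        else:
            self.seq = 0
        self.last_ms = ms
        return (ms << 22) | (self.worker_id << 12) | self.seq

gen = SnowflakeSketch(worker_id=1)
ids = [gen.next_id() for _ in range(5)]
print(ids == sorted(ids))  # True: ids only grow, so B+tree inserts stay near the right edge
```

Because the timestamp occupies the high bits, successive IDs are monotonically increasing, giving the same append-mostly insert pattern as an auto-increment key.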

1.4 Inverted Index (Reverse Index)

Inverted index is also called reverse index, which can be compared with forward index.

A forward index maps a document to the keywords it contains: given a document identifier, you can obtain each keyword together with its frequency and positions within that document, as shown in fig. 6 (document on the left, index on the right).

An inverted index maps a keyword to the documents containing it: given a keyword identifier, you can obtain the list of all documents in which it occurs, along with frequency, position and other information, as shown in fig. 7.

The set of words and the set of documents in an inverted index form the "word-document matrix" shown in fig. 8; a marked cell indicates a mapping between that word and that document.

The storage structure of an inverted index is shown in fig. 9. The dictionary, kept in memory, is the list of all words parsed out of the whole document set; each word points to its corresponding inverted list. The inverted lists together form the inverted file, which is stored on disk; each inverted list records the word's per-document information, i.e. the aforementioned frequency, positions and so on.

The following is a specific example to describe how to generate inverted indexes from a document set.

As shown in fig. 10, there are 5 documents, the first column is the document number and the second column is the text content of the document.

The document set above is segmented into words; the 10 words found are [Google, map, father, job-hopping, Facebook, alliance, founder, Lars, departure, and]. Taking the first word, "Google", as an example: it is first given a unique identifier, "word id", with value 1; its document frequency is 5, i.e. it appears in all 5 documents, twice in the third document and once in each of the others. This yields the inverted index shown in fig. 11.
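The construction can be sketched in a few lines. The documents below are illustrative English stand-ins (the originals are Chinese news titles), arranged so that "google" appears in all 5 documents and twice in the third, matching the example; position lists are omitted for brevity:

```python
from collections import defaultdict

docs = {
    1: "google map father",
    2: "facebook founder joins google",
    3: "google acquires google map",
    4: "map father leaves google",
    5: "google and facebook ally",
}

# term -> {doc_id: term_frequency}; each term's dict is its inverted list
inverted = defaultdict(dict)
for doc_id, text in docs.items():
    for word in text.split():
        inverted[word][doc_id] = inverted[word].get(doc_id, 0) + 1

print(sorted(inverted["google"].items()))  # [(1, 1), (2, 1), (3, 2), (4, 1), (5, 1)]
print(len(inverted["google"]))             # 5: the document frequency
```

A real engine additionally records positions per occurrence and compresses the postings, but the mapping direction (word → documents) is exactly this.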

1.4.1 Word Dictionary Query Optimization

For a large document set, it may contain hundreds of thousands or even millions of different words. Whether a word can be quickly located will directly affect the response speed during searching. The optimization scheme is to establish an index for the word dictionary. The following schemes are available for reference:

  • Dictionary Hash index

A hash index is simple and direct: to query a word, compute its hash; if the hash table hits, the data exists, otherwise empty can be returned immediately. It is suitable for exact-match, equality queries. As shown in Figure 12, words with the same hash value are placed in a conflict table.

  • BTREE index

Similar to InnoDB's secondary indexes: the words are sorted by some rule and a BTree index is built over them, with each data node holding a pointer to the word's inverted list.

  • Binary search

Similarly, sort the words by some rule into an ordered array and use binary search when looking up a word; binary search maps onto an ordered, balanced binary tree, as shown in fig. 14.
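A sketch of this scheme with Python's bisect over the sorted term array (the postings payloads are stand-in strings, purely illustrative):

```python
import bisect

terms = sorted(["alliance", "departure", "facebook", "father",
                "founder", "google", "job-hopping", "lars", "map"])
postings = {t: f"<inverted list of {t}>" for t in terms}  # stand-in payload

def lookup(term: str):
    """O(log n) equality search in the sorted dictionary."""
    i = bisect.bisect_left(terms, term)
    if i < len(terms) and terms[i] == term:
        return postings[terms[i]]
    return None

print(lookup("google") is not None)  # True
print(lookup("gopher"))              # None: the miss falls between entries
```

Unlike the hash scheme, the sorted array also supports prefix and range scans by walking forward from the insertion point.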

  • FST implementation

FST stands for Finite State Transducer. An FST has two advantages: 1) small space footprint: by reusing the prefixes and suffixes of the words in the dictionary, storage is compressed; 2) fast queries: O(len(str)) lookup time complexity.

Take inserting "cat", "deep", "do", "dog" and "dogs" as an example of constructing an FST (note: the input must be sorted).

As shown in Figure 15, we finally obtain the directed acyclic graph above. This structure makes queries convenient: given the word "dog", we can easily check whether it exists; we can even associate each word with a number or another word during construction, realizing a key-value mapping.
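Building a true FST is involved (Lucene's implementation also shares suffixes by freezing finished nodes). As a minimal sketch of the prefix-sharing half and the key→value mapping, here is a trie over the same sorted words, mapping each to its ordinal:

```python
class TrieNode:
    __slots__ = ("children", "value")

    def __init__(self):
        self.children = {}   # char -> TrieNode; shared prefixes reuse nodes
        self.value = None    # payload for words ending at this node

root = TrieNode()

def insert(word: str, value: int) -> None:
    node = root
    for ch in word:                      # O(len(word)), like an FST lookup
        node = node.children.setdefault(ch, TrieNode())
    node.value = value

for i, w in enumerate(["cat", "deep", "do", "dog", "dogs"]):  # sorted input
    insert(w, i)

def get(word: str):
    node = root
    for ch in word:
        node = node.children.get(ch)
        if node is None:
            return None
    return node.value

print(get("dog"))   # 3
print(get("dogs"))  # 4: the "dog" prefix nodes are reused, not duplicated
print(get("doge"))  # None
```

An FST goes further by also merging common suffixes (e.g. shared word endings), which is where most of the extra compression comes from.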

Of course, there are other optimizations, such as Skip List, Trie, double-array trie and other structures, which are not repeated here.

II. ElasticSearch Usage Experience

The following summarizes our experience with ES based on concrete use cases in the pre-loan system.

2.1 Overview

ES version currently in use: 5.6

Official website: https://www.elastic.co/produc …

ES in one sentence: The Heart of the Elastic Stack

Some key information of ES:

  • First released in February 2010
  • Elasticsearch: Store, Search, and Analyze
  • Rich RESTful interfaces

2.2 Basic Concepts

  • Index

ES's Index is not the same concept as the indexes discussed earlier: here it refers to the collection of all documents and can be compared to a database in an RDB.

  • Document

That is, a record written to ES, usually in JSON form.

  • Mapping

The metadata description of the document data structure is generally in JSON schema form and can be dynamically generated or predefined in advance.

  • Type

Because type was widely misunderstood and misused, it is no longer recommended. In the ES version we use, only one default type is created per index.

  • Node

A service instance of ES is called a node. For data safety and reliability, and to improve query performance, ES is generally deployed as a cluster.

  • Cluster

Multiple ES nodes that communicate with each other and jointly share data storage and querying form a cluster.

  • Shard

Sharding mainly addresses the storage of large data volumes by splitting the data into several parts. Shards are generally distributed evenly across the ES nodes. Note: the number of shards cannot be modified after index creation.

  • Replica

A complete copy of a shard's data. Generally each shard has one replica; replicas can serve queries and improve query performance in a cluster environment.

2.3 Installation and Deployment

  • JDK Version: JDK1.8
  • The installation process is relatively simple, please refer to the official website: download installation package-> decompress-> run
  • Pitfalls encountered during installation:

ES startup consumes considerable system resources; system parameters such as the file-handle limit, thread count and memory need adjusting. Refer to the following document:

http://www.cnblogs.com/slovel …

2.4 Example Explanation

The following are some specific operations to introduce the use of ES:

2.4.1 Initialization of Index

Index initialization mainly creates a new index in ES and initializes some parameters, including the index name, document Mapping, index alias, number of shards (default: 5) and number of replicas (default: 1). When the data volume is small, the shard and replica counts can simply be left at their defaults without configuration.

Here are two ways to initialize indexes: one uses Dynamic Mapping based on Dynamic Template, and the other uses explicit predefined mapping.

1) Dynamic Template

    curl -X PUT http://ip:9200/loan_idx -H 'content-type: application/json' \
      -d '{
      "mappings": {
        "order_info": {
          "dynamic_date_formats": ["yyyy-MM-dd HH:mm:ss||yyyy-MM-dd"],
          "dynamic_templates": [
            {
              "orderId2": {
                "match_mapping_type": "string",
                "match_pattern": "regex",
                "match": "^orderId$",
                "mapping": {
                  "type": "long"
                }
              }
            },
            {
              "strings_as_keywords": {
                "match_mapping_type": "string",
                "mapping": {
                  "type": "keyword",
                  "norms": false
                }
              }
            }
          ]
        }
      },
      "aliases": {
        "loan_alias": {}
      }
    }'

The JSON above is the dynamic template we use. It defines the accepted date formats (the dynamic_date_formats field); the rule orderId2: whenever a string field named orderId is encountered, it is mapped to long; and the rule strings_as_keywords: every string field is mapped to the keyword type with the norms property set to false. The keyword type and norms are described in the data-type section below.

2) Predefined mapping

Predefined mapping differs from the above in that the type of every known field is written into the mapping in advance. The following figure shows an excerpt as an example:

The upper part of the JSON structure in fig. 16 is the same as the dynamic template; the contents in the red box are predefined attributes, such as the fields apply.applyInfo.appSubmissionTime, apply.applyInfo.applyId and apply.applyInfo.applyInputSource. type indicates the field's type. Once the mapping is defined, inserted data must conform to the field definitions, otherwise ES returns an exception.

2.4.2 Common Data Types

Common data types: Text, Keyword, Date, Long, Double, Boolean and IP.

In actual use we define string fields as keyword rather than text, mainly because text data undergoes analysis: tokenization, filtering and other operations, whereas a keyword is stored as one complete term, eliminating redundant work and improving indexing performance.
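The distinction can be illustrated with a toy analyzer (a sketch only; ES's standard analyzer does much more, e.g. Unicode-aware tokenization):

```python
def analyze_text(value: str) -> list:
    """Toy 'text' treatment: lowercase + whitespace tokenization."""
    return value.lower().split()

def analyze_keyword(value: str) -> list:
    """'keyword' treatment: the value is indexed verbatim as one term."""
    return [value]

v = "Pre-Loan System"
print(analyze_text(v))     # ['pre-loan', 'system']
print(analyze_keyword(v))  # ['Pre-Loan System']
```

A keyword field therefore matches only the exact original value, which is what identifier-like business fields (order numbers, status codes) want anyway.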

Used together with keyword is the norms setting: false means the field does not participate in scoring. Scoring means assigning each query result a score, by TF/IDF or some other rule, for ranking when results are displayed. Typical business scenarios do not need this ranking (they have explicit sort fields), so disabling it further improves query efficiency.

2.4.3 Index name cannot be modified

When initializing an index, the index name must be explicitly specified in the URL; once specified, it cannot be modified. Therefore, a default alias is usually specified when creating an index:

    "aliases": {
      "loan_alias": {}
    }

Aliases and index names form a many-to-many relationship: an index can have multiple aliases, and an alias can map to multiple indexes. In one-to-one use, an alias can replace the index name everywhere it is used. The advantage of an alias is that it can be changed at any time, which is very flexible.

2.4.4 Existing fields in Mapping cannot be updated

Once a field has been initialized (whether dynamically mapped by inserting data or predefined by setting its type), its type is fixed; inserting incompatible data raises an error. For example, for a field defined as long, writing non-numeric data makes ES return a data-type error.

In such cases the index may need to be rebuilt, and the alias mentioned above comes in handy. This is generally done in 3 steps:

  • 1) Create a new index, specifying the malformed fields in the correct format;
  • 2) migrate the data from the old index to the new one using ES's Reindex API;
  • 3) use the Aliases API to add the old index's alias to the new index and delete the association between the old index and the alias.

The above steps are suitable for offline migration, and it is slightly more complicated to realize live migration without downtime.

2.4.5 API

The basic operations are create, read, update and delete (CRUD). Please refer to the official ES documentation:

https://www.elastic.co/guide/ …

Some more complicated operations require ES scripts. The general-purpose language is Painless, which is Groovy-like. Painless supports commonly used Java APIs (our ES installation runs on JDK 8, so some JDK 8 APIs are available) and also supports Joda-Time, etc.

Here is a more complicated update example to illustrate how a Painless script is used:

Requirements description

appSubmissionTime is the application submission time, lenssonStartDate the course start time, and expectLoanDate the expected loan date. The requirement: for applications submitted on September 10, 2018, if the difference between the submission time and the course start time is less than 2 days, set the expected loan date to the submission time.

The Painless script reads as follows:

    POST loan_idx/_update_by_query
    {
      "script": {
        "source": "long getDayDiff(def dateStr1, def dateStr2){ LocalDateTime date1 = toLocalDate(dateStr1); LocalDateTime date2 = toLocalDate(dateStr2); ChronoUnit.DAYS.between(date1, date2); } LocalDateTime toLocalDate(def dateStr){ DateTimeFormatter formatter = DateTimeFormatter.ofPattern(\"yyyy-MM-dd HH:mm:ss\"); LocalDateTime.parse(dateStr, formatter); } if(getDayDiff(ctx._source.appSubmissionTime, ctx._source.lenssonStartDate) < 2){ ctx._source.expectLoanDate = ctx._source.appSubmissionTime }",
        "lang": "painless"
      },
      "query": {
        "bool": {
          "filter": [
            {
              "bool": {
                "must": [
                  {
                    "range": {
                      "appSubmissionTime": {
                        "from": "2018-09-10 00:00:00",
                        "to": "2018-09-10 23:59:59",
                        "include_lower": true,
                        "include_upper": true
                      }
                    }
                  }
                ]
              }
            }
          ]
        }
      }
    }

Explanation: the request has two parts. The query keyword in the lower part selects records by time range (September 10, 2018); the script in the upper part is the operation performed on each matched record. It is Groovy-like code (easy to read with a Java background); formatted, it looks as follows. Two methods, getDayDiff() and toLocalDate(), are defined, and the if statement contains the actual update:

    long getDayDiff(def dateStr1, def dateStr2){
        LocalDateTime date1 = toLocalDate(dateStr1);
        LocalDateTime date2 = toLocalDate(dateStr2);
        ChronoUnit.DAYS.between(date1, date2);
    }
    LocalDateTime toLocalDate(def dateStr){
        DateTimeFormatter formatter = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");
        LocalDateTime.parse(dateStr, formatter);
    }
    if(getDayDiff(ctx._source.appSubmissionTime, ctx._source.lenssonStartDate) < 2){
        ctx._source.expectLoanDate = ctx._source.appSubmissionTime
    }

Then submit the POST request to complete the data modification.

2.4.6 Query Data

Here, we mainly recommend an ES plug-in ES-SQL:

https://github.com/NLPchina/e …

This plug-in provides rich SQL query syntax, letting us query with familiar SQL statements. A few points need attention:

  • ES-SQL sends requests via HTTP GET, so SQL length is limited (4 KB by default); the limit can be raised with the parameter http.max_initial_line_length: "8k".
  • If a field is set to a non-numeric type, updating it directly with ES-SQL reports an error; a Painless script can be used instead.
  • Results obtained with the Select as syntax sit in a different place in the response structure than ordinary query results and need separate handling.
  • NRT (Near Real Time): ES is near-real-time, not real-time.

Insert a record into ES and then query it: generally the latest record can be found, so ES feels like a real-time search engine, which is what we expect. But that is not always the case, and the reason lies in ES's write mechanism. A brief introduction:

  • Lucene index segment -> ES index

Data written to ES is first written into a Lucene index segment and then into the ES index; until it reaches the ES index, queries return only old data.

  • Commit: atomic write operation

Data in an index segment is written into the ES index as an atomic write, so a record committed to ES is guaranteed to be written completely; there is no need to worry about one part being written while another part fails.

  • Refresh: the refresh operation ensures the latest commits become searchable

After an index segment is committed there is still one last step, refresh; only after it completes can the newly indexed data be searched.

For performance reasons, Lucene defers the costly refresh rather than refreshing on every added document; by default it refreshes once per second. That is already frequent, but many applications want an even faster refresh; in that case, either use other techniques or examine whether the requirement is reasonable.

However, ES provides a convenient real-time query interface; data queried through it is always the latest. The call method is as follows:

GET http://IP:PORT/index_name/type_name/id

The interface above uses the HTTP GET method to query by the data's primary key (id). It searches both the ES index and the Lucene index segments and merges the results, so the final result is always up to date. But there is a side effect: after each such call, ES forces a refresh, causing an IO; used frequently, this also hurts ES performance.

2.4.7 Array Processing

Array handling is quite special, so it is discussed separately.

1) The representation is a common JSON array format, such as:

[1, 2, 3], ["a", "b"], [ { "first": "John", "last": "Smith" }, { "first": "Alice", "last": "White" } ]

2) Note that ES has no array type; arrays are ultimately mapped to object, keyword and other types.

3) The problem of querying common array objects.

When an ordinary array of objects is stored, the data is flattened and each field is stored separately, e.g.:

    {
      "user": [
        { "first": "John",  "last": "Smith" },
        { "first": "Alice", "last": "White" }
      ]
    }

is converted into the following form:

    {
      "user.first": [ "John", "Alice" ],
      "user.last":  [ "Smith", "White" ]
    }

The association between fields of the same object is broken. Figure 17 shows the brief journey of this data from indexing to query:

  • Assemble the data: a JSON array of objects.
  • On write, ES defaults the type to object.
  • Query for documents whose user.first is Alice and user.last is Smith (no single object actually satisfies both conditions).
  • A result that does not meet expectations is returned.
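The surprising match can be simulated directly. A sketch (hypothetical field names taken from the example) of how the flattened representation answers the AND query:

```python
# Original document: two distinct user objects
doc = {"user": [{"first": "John",  "last": "Smith"},
                {"first": "Alice", "last": "White"}]}

# Default (non-nested) object mapping flattens the array field by field
flat = {"user.first": [u["first"] for u in doc["user"]],
        "user.last":  [u["last"]  for u in doc["user"]]}

# Query: user.first == "Alice" AND user.last == "Smith"
hit = "Alice" in flat["user.first"] and "Smith" in flat["user.last"]
print(hit)  # True: the document matches although no single user is Alice Smith
```

Each condition is checked against its own flattened list, so values from different objects cross-match; the nested type described next avoids exactly this.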

4) Nested array object query

Nested array objects solve the inconsistent-query problem above: ES creates a separate document for each object in the array, independent of the original document. As shown in fig. 18, after the data is declared nested, the same query returns empty, because there really is no document with user.first Alice and user.last Smith.

5) Generally, modifications to an array replace it wholesale; to modify a single field of an element, a Painless script is needed. Reference: https://www.elastic.co/guide/ …

2.5 Security

Data security is a crucial link. Access control over the data is provided mainly through the following three points:

  • XPACK

XPACK provides the Security plugin, which offers access control based on username and password. There is a one-month free trial, after which a fee is charged for a license.

  • IP whitelist

Open the firewall on the ES servers and configure it so that only a few intranet servers can connect to the service directly.

  • Proxy

Generally, business systems are not allowed to query ES directly; the ES interfaces get a wrapping layer, which is the proxy's job. The proxy server can also perform security authentication, so even without XPACK, access control can be achieved.

2.6 Network

The ElasticSearch server needs to open 9200 and 9300 ports by default.

The following mainly introduces a network-related error; if you encounter something similar, this may serve as a reference.

Before presenting the exception, first a network-related keyword, keepalive, which comes in two flavors: TCP keepalive and HTTP Keep-Alive.

"Connection: Keep-Alive" is enabled by default in HTTP/1.1, meaning the HTTP connection can be reused: the next HTTP request uses the current connection directly, improving performance. Keep-Alive is commonly used in HTTP connection-pool implementations.

TCP keepalive serves a different purpose from HTTP's: it is mainly used to keep the connection alive and detect dead peers. The relevant configuration is the parameter net.ipv4.tcp_keepalive_time: if a TCP connection has exchanged no data for this long (default: 2 hours), a heartbeat packet is sent to probe whether the link is still valid. Under normal circumstances, the peer answers with an ack, indicating the connection is usable.

Specific exception information is described as follows:

Two application servers connect to the ES cluster (three machines) with RestClient (based on HttpClient, maintaining persistent connections). They are deployed on a different network segment from the ES servers, and an exception occurred like clockwork:

Every day at about 9 o'clock, a Connection reset by peer exception appeared; moreover, there were three Connection reset by peer exceptions in a row.

    Caused by: java.io.IOException: Connection reset by peer
        at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
        at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
        at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
        at sun.nio.ch.IOUtil.read(IOUtil.java:197)

To solve the problem we tried many approaches: checking official documents, comparing code, capturing packets. After several days of effort, the anomaly was finally found to relate to the keepalive keyword introduced above (thanks to the colleagues in the operations group).

In the production environment there is a firewall between the application servers and the ES cluster, and the firewall's idle-connection timeout was set to, say, 1 hour, shorter than the Linux servers' default of roughly 2 hours mentioned above. Because our system sees little traffic at night, some connections sat idle for more than 2 hours; the firewall silently terminated them after 1 hour. When, after 2 hours, the server tried to send its keepalive heartbeat, the firewall blocked it; after several failed attempts the server sent RST and tore down the connection, but the client never learned this. The next morning, when the client reused the stale connection, the server replied with RST and the client reported Connection reset by peer. All three servers in the cluster returned the same error, hence three identical exceptions in succession. The fix was relatively simple: set the server's keepalive timeout below the firewall's 1-hour limit.
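The same idea can also be applied per socket rather than system-wide. A sketch in Python (the numeric values are illustrative; the actual fix above changed the OS-level sysctl, and TCP_KEEPIDLE is a Linux-specific option):

```python
import socket

# Enable TCP keepalive on a client socket and shorten the idle threshold
# below a hypothetical firewall's 1-hour cutoff.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)

if hasattr(socket, "TCP_KEEPIDLE"):  # Linux-only fine tuning
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 1800)  # probe after 30 min idle
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 60)   # 60 s between probes
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)      # give up after 5 probes

print(sock.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE) != 0)  # True
sock.close()
```

With probes every 30 minutes, the firewall sees traffic before its idle timer expires, so the connection is never silently dropped.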

References

An in-depth understanding of ElasticSearch

http://www.cnblogs.com/Creato …

https://yq.aliyun.com/article …

http://www.cnblogs.com/LBSer/ …

http://www.cnblogs.com/yjf512 …

Author: Lei Peng, Comprehensive Credit

Source: Yixin Institute of Technology