Content Source: The First Technical Salon of the Yixin Institute of Technology (live online) | AI Middle Platform: An Agile Intelligent Business Support Scheme
Presenter: Jing Yuxin, Director of the AI Application Team, Yixin Technology R&D Center
This article is about 13,479 words; estimated reading time: 34 minutes.
Reading guide: Since the middle-platform concept was proposed and successfully put into practice, enterprises have begun their own middle-platform transformations under the consensus of “big middle platform, small front office”. A series of middle platforms, represented by the data middle platform, the technology middle platform, and the business middle platform, have greatly enhanced business agility and organizational efficiency. Meanwhile, as intelligent technology matures, AI accounts for a growing share of business R&D, but the complexity of AI model training makes development slow and inefficient, which seriously limits business agility.
Given this situation, can AI R&D within the business be given dedicated support, following the middle-platform idea, so that intelligent requirements can be realized quickly and trial and error becomes cheap, thereby strengthening the enterprise’s capacity for intelligent innovation? And how should such an AI middle platform be built and implemented?
Dr. Jing Yuxin, head of the AI application team at the Yixin R&D center, discussed these questions in light of Yixin’s current business and its implementation of the middle-platform strategy. The following is a transcript of the talk.
I. The Proposal of the AI Middle Platform
II. Goals and Definition of the AI Middle Platform
III. Implementation Route of the AI Middle Platform
IV. Case Study: An Intelligent Customer-Service Robot
Session Recap
I. The Proposal of the AI Middle Platform
1.1 The Rise of the Middle-Platform Strategy
Since the middle-platform strategy was proposed and successfully implemented, the industry has responded strongly, and domestic enterprises have begun their own middle-platform transformations. In particular, everyone has their own interpretation of how to build the data middle platform, which sits at the core of the strategy.
In general, however, the industry has reached some consensus on the middle-platform strategy, namely the advocacy of a “big middle platform, small front office”. By building a middle platform, shared services can be accumulated, service reuse can be improved, the integration barriers between siloed, project-by-project systems can be broken down, the cost of trial and error for front-office business can be reduced, the business can be given the capacity for rapid innovation, and the organizational efficiency of the enterprise can ultimately be improved.
Whether in finance, online trading, information services, healthcare, or education, the industry’s discussion of the middle-platform strategy covers every aspect of enterprise operations: the business middle platform, the technology middle platform, the mobile middle platform, and so on. In the data era, however, a large share of enterprise business runs on big data, and the ability to respond to and process data determines business efficiency. The most important starting point for implementing the middle-platform strategy is therefore still the data middle platform. The data middle platform unifies data standards within the organization, breaks down data barriers, builds unified data entities, and provides unified data services to the outside. Through these three “unifications”, it becomes the organization’s data asset center, delivering automated, self-service, and agile data capabilities to front-office business.
The advantage of automation is that it greatly reduces the cost of routine data operations, while self-service data management lets business users acquire and process data on their own according to their needs, so that business requirements can be met flexibly. Compared with traditional siloed data systems, however, these two advantages only make data services available and easy to use. To make data services genuinely pleasant to use, and even something the business side actively wants to use, no middle-platform service, data or otherwise, can do without intelligent capabilities.
1.2 Middle-Platform Support for Intelligent Business Requirements
Business requirements for intelligence come from two directions:
On the one hand, at the data level, as more and more data becomes available, it becomes harder and harder to identify valuable information, discover relationships in the data, and grasp data trends. Only by managing big data intelligently can we improve, or even reinvent, the business. Exploring big data to uncover latent relationships and trends provides strong support for business optimization and innovation and makes the business truly data-driven. The data middle platform must therefore have intelligent capabilities and be able to provide a certain level of intelligent data analysis to the business.
On the other hand, besides this bottom-up, data-driven push toward intelligence, there is also a top-down push driven by the enterprise’s own vision of intelligence. In recent years many intelligent technologies have matured, and the corresponding ideas have taken root. A large number of successful cases have made these technologies mainstream, and in some scenarios they have even become the standard solution: e-commerce needs recommendation, finance needs risk control, and R&D teams are then expected to realize, accurately and quickly, the intelligent goals that the front office proposes on top of the data.
The figure above is an example; the data comes from Roland Berger. As a fintech company, Yixin mostly faces demands for intelligent business in the financial field. As the figure shows, there are already many standardized intelligent applications in finance, and it is clear how important a role intelligence plays in enterprise business today.
Whatever the requirement, one problem always arises: intelligent business requirements are varied and diverse. Each time a requirement comes in, the R&D team must carry out requirement-specific data analysis and processing, model building and training, and so on. The process is complicated and inefficient, which lengthens the response time, reduces business agility, and raises the cost of trial and error. This conflicts with the front office’s hope, under the middle-platform strategy, to focus on business logic and respond flexibly to change, and the contradiction is becoming more common as intelligent applications spread.
The reasons are, on the one hand, that intelligence has risen to scale in just a few years, so the development of intelligent applications is still at a relatively primitive stage, lacking a complete life-cycle management theory and the corresponding management frameworks and tools; on the other hand, our middle-platform capabilities have not yet fully covered the heavy, repetitive, and inefficient links in front-office R&D.
Naturally, then, we ask: can we push the existing data middle platform further toward intelligence to solve these problems? If so, what data intelligence capabilities can, or should, the data middle platform provide? If not, how should the middle-platform strategy support intelligent business requirements in a timely manner?
1.3 From the Data Middle Platform to the AI Middle Platform
First, let’s look at the intelligent support capability of the data middle platform and consider the following question: can the data middle platform fully support the various intelligent requirements of today’s business through a set of common intelligent data models? This is hard to answer with a simple yes, because the process of making business intelligent is complex.
Common machine learning tasks include regression, classification, labeling, clustering, recommendation, and so on. Implementing each algorithm or model in turn involves data preprocessing, feature analysis, modeling, training, deployment, and more, and a real application may even combine multiple models.
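As a toy illustration (not from the talk), the stages just listed can be sketched end to end in plain Python. The nearest-centroid “model” and every function name here are invented for the example; a real project would use a proper ML framework:

```python
# Sketch of one model's pipeline: preprocessing -> feature scaling ->
# modeling/training -> prediction -> evaluation. All names are illustrative.

def preprocess(rows):
    """Data preprocessing: drop records with missing feature values."""
    return [r for r in rows if None not in r[0]]

def scale(rows):
    """Feature engineering: min-max scale each feature to [0, 1]."""
    xs = [x for x, _ in rows]
    lo = [min(col) for col in zip(*xs)]
    hi = [max(col) for col in zip(*xs)]
    def s(x):
        return [(v - l) / (h - l) if h > l else 0.0
                for v, l, h in zip(x, lo, hi)]
    return [(s(x), y) for x, y in rows], s

def train(rows):
    """Modeling + training: compute one centroid per class."""
    sums, counts = {}, {}
    for x, y in rows:
        counts[y] = counts.get(y, 0) + 1
        sums[y] = [a + b for a, b in zip(sums.get(y, [0.0] * len(x)), x)]
    return {y: [v / counts[y] for v in s] for y, s in sums.items()}

def predict(centroids, x):
    """Deployment-time inference: nearest centroid wins."""
    def dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(centroids, key=lambda y: dist(centroids[y], x))

def evaluate(centroids, rows):
    """Evaluation: fraction of correct predictions."""
    return sum(predict(centroids, x) == y for x, y in rows) / len(rows)

data = [([1.0, 1.0], "a"), ([1.2, 0.9], "a"),
        ([8.0, 9.0], "b"), ([7.5, 9.5], "b"), ([None, 2.0], "a")]
clean = preprocess(data)
scaled, scaler = scale(clean)
model = train(scaled)
acc = evaluate(model, scaled)  # 1.0 on this separable toy set
```

Even this toy version shows why the talk calls the process "complicated": every stage has its own inputs, outputs, and failure modes, and in practice each is far heavier than shown here.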
The data middle platform, however, is centered on data. If its intelligence capability were extended to support all of the links above, the workload would be substantial, and the platform would drift away from its original purpose. It is therefore inappropriate for the data middle platform to take on all intelligent business support.
To be more precise, we can start from what artificial intelligence (AI) currently covers. Broadly, AI refers to the use of scientific methods and technologies to build machine systems that imitate, extend, and expand human intelligence, involving computer science, mathematics, philosophy, psychology, and other disciplines. Narrowly, from the perspective of computer science, AI refers to computer systems that can take in and process external data and make human-like analyses and decisions, covering subfields such as data mining, machine learning, deep learning, and reinforcement learning. Unless otherwise specified, “artificial intelligence” here means the latter.
Among these tasks, machine learning, deep learning, and reinforcement learning are quite similar in their goals and implementation processes, so they are generally referred to collectively as machine learning tasks; this talk adopts the same convention. Data mining tasks, however, are not quite the same as machine learning tasks, and the difference has confused many people.
In fact, by the definition in Data Mining and Predictive Analytics, “data mining is the process of discovering useful patterns and trends in large data sets”. It includes subtasks such as data preprocessing, data exploration, dimensionality reduction, descriptive statistics, association analysis, and outlier analysis, all of which are the foundation of machine learning.
Data mining also covers clustering, prediction, and classification, which are precisely part of what machine learning studies; given the vigorous development and excellent performance of machine learning, this part of the work is now generally done with machine learning.
In short, both are important research directions in artificial intelligence and important links in making enterprise big data intelligent. They are interrelated but differ in emphasis: data mining focuses on “insight”, while machine learning focuses on “learning” and “prediction”.
From this analysis, under the current business background, the data mining work devoted to “insight” is data-centered and can be completed automatically without manual assistance. Moreover, since it does not involve prediction or classification, data mining usually relies on relatively fixed analysis algorithms and models, so this part of the work can be fully automated and self-service, integrated into the data middle platform, and offered to business users as fixed intelligent data model services.
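A minimal sketch (illustrative only, not from the talk) of why “insight”-style mining automates so well: summary statistics and z-score outlier analysis use fixed formulas, so they can run unattended as a canned service over any numeric column:

```python
import statistics

def profile(values, z_threshold=3.0):
    """Automated 'insight' report for one numeric column:
    basic statistics plus z-score outlier detection."""
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    outliers = [v for v in values
                if stdev and abs(v - mean) / stdev > z_threshold]
    return {"mean": mean, "stdev": stdev, "outliers": outliers}

# One anomalous value (200) in an otherwise stable series.
report = profile([10, 11, 9, 10, 12, 10, 11, 200], z_threshold=2.0)
```

Because no labels, no training loop, and no per-project modeling are needed, this kind of analysis can be exposed as a fixed self-service endpoint, exactly the role the talk assigns to the data middle platform.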
Machine learning, on the other hand, devoted to “learning” and “prediction”, is comparatively more complicated:
- The implementation process of machine learning is usually highly computation-intensive;
- Even when the algorithm and model structure are the same, the parameters must be trained individually on the business’s own data;
- The training phase usually goes through many iterations;
- Apart from some unsupervised learning, implementation usually requires manual labeling;
- The performance of online models must be monitored, and the models updated as the data evolves.
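Two of the points above can be made concrete with a hypothetical sketch (all names and thresholds are invented): iterative training that stops on a convergence check, and a monitoring rule that flags a deployed model for retraining when its live accuracy drifts too far below the validation baseline:

```python
def train_iteratively(step, init_loss, tol=1e-3, max_iter=100):
    """Repeat training steps until the per-step improvement drops below tol."""
    loss, history = init_loss, []
    for _ in range(max_iter):
        new_loss = step(loss)
        history.append(new_loss)
        improved = loss - new_loss
        loss = new_loss
        if improved < tol:
            break
    return loss, history

def needs_retraining(baseline_acc, live_acc, max_drop=0.05):
    """Monitoring rule: trigger retraining once live accuracy sags too far."""
    return baseline_acc - live_acc > max_drop

# Toy "training step": each iteration halves the remaining loss.
final_loss, history = train_iteratively(lambda l: l / 2, init_loss=1.0)
```

The loop and the drift check are trivial here, but in a real project each iteration is an expensive training run and the monitoring needs its own data pipeline, which is exactly why these links dominate the cost of vertical AI work.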
Having sorted out this classification of artificial intelligence, let us try to answer the earlier question. If the data middle platform is to support the intelligent requirements raised by the business, how should it evolve? In other words, how should the data middle platform extend its capabilities to support these requirements?
As the figure above shows, the data middle platform itself is data-centered, uses fixed algorithms, and has a certain degree of automation. We can make full use of its stream computing, batch computing, and data visualization capabilities to support the corresponding business requirements.
These are capabilities the data middle platform already has. What else would it need in order to support machine-learning AI projects? As the figure shows, the required capabilities expand outward ring by ring: first, support for complex feature engineering and for model training; then data annotation, model deployment, and performance monitoring.
Each capability takes a great deal of manpower, material resources, and time to develop, and from the inside out these capabilities are less and less related to the data itself. In other words, as the capability layers keep expanding, the data middle platform gradually deviates from its data-centered purpose and becomes bloated and complicated, which may create more complexity when serving front-office business. The data middle platform can therefore support intelligent business requirements to some extent, but relying on it alone to support all of them is clearly not the best choice.
So what do we do? Separate the AI middle platform from the data middle platform.
As the figure above shows, we abstract and strip out the capability layers wrapped around the data middle platform and integrate them into an independent middle-platform layer, which cooperates with the data middle platform to jointly meet the front office’s intelligent business requirements. The data middle platform mainly integrates the intelligent algorithms and models for data mining and data insight; the new middle platform is mainly responsible for developing the complex “learning” and “prediction” intelligence requirements. We call this new layer the “AI middle platform”.
The figure above is a structural diagram of the separation between the AI middle platform and the data middle platform. From top to bottom are the business front office, the AI middle platform, and the data middle platform, with the related computing and storage resources at the bottom.
- The data middle platform provides the basic capabilities, including data standardization, unified data entities, and unified data services. It also supports some intelligent data-processing requirements, including fixed intelligent data models, correlation analysis, principal component analysis, anomaly analysis, and so on. The data middle platform is mainly responsible for data exploration.
- The AI middle platform provides a series of tightly coupled AI capabilities such as model design and training, model/algorithm libraries, reuse and annotation management, and monitoring services. The AI middle platform takes on the “learning” and “prediction” tasks.
- The business front office includes the various demands that need agile support from the middle platforms, such as business intelligence, user segmentation, personalized recommendation, and so on.
Note that the structure in the figure is only an example. The middle platform is service-oriented so that it can respond to business requirements more promptly, so the system should be organized around services; for example, some front-office requirements do not need the AI middle platform at all and can go directly down to the data middle platform for processing.
We have now answered the earlier question: relying on the data middle platform alone is not enough to support the agile realization of intelligent business requirements, but we can build a dedicated AI middle platform on top of it to provide this capability. A middle-platform transformation cannot be achieved with the data middle platform alone; Alibaba’s shared service center likewise spans business, technology, data, and other aspects. Each enterprise should abstract a reasonable middle-platform service model of its own and implement it according to its own business structure and processes.
II. Goals and Definition of the AI Middle Platform
The previous section explained the background and rationale for building an AI middle platform by analyzing intelligent business requirements and the data middle platform. But what is the goal of the AI middle platform? What are its basic requirements and capabilities? We discuss this next.
2.1 AI Task Types and Their Agility Requirements
The AI middle platform needs to support all kinds of AI tasks flexibly and address the needs and pain points on each task’s path to agility. Enterprises today have different demands for intelligence, which give rise to a wide variety of AI tasks.
There are many ways to divide AI tasks into types, such as regression, classification, clustering, labeling, and so on.
Here we adopt a different division and consider that all AI tasks fall into two categories:
- One is the “horizontal” task, such as computer vision and natural language processing, which provides basic AI learning, prediction, and analysis capabilities for particular types of data within a business field.
- The other is the relatively specialized and personalized “vertical” task oriented to specific business needs, such as intelligent risk control in finance, product recommendation in e-commerce, and the more general construction of user profiles.
Either type of task can be offered as an independent service, and the two can also be mixed and combined into AI solutions that support more complex business scenarios. The core of building an intelligent business application is to decompose and map the intelligent requirement onto specific AI tasks, realize them one by one, and finally arrange and combine them sensibly to achieve the goal.
On the other hand, the two types of tasks differ in their agility requirements during implementation, and hence in the services the AI middle platform should provide for them. Relatively speaking, agility is easier to achieve for horizontal tasks.
Apart from some scenarios, a horizontal task usually does not solve a business requirement directly. It is typically used as a basic model to pre-process the data, with vertical tasks then built on top to meet the requirement. This gives the algorithm team enough time to polish the horizontal task model and improve its agility.
Because data within a business field is general, we can pre-train a set of common field-specific horizontal AI models, such as a common natural-language-understanding model for the financial field. We then only need to maintain and update this model, and every intelligence-related requirement in the field can reuse this model library at any time, saving a great deal of training time.
Going further, we can even pre-train a cross-field horizontal AI model library, so that when we enter a brand-new business field we can quickly derive a common model for it from the pre-trained library; examples include the ImageNet project in computer vision and the BERT technique introduced by Google in natural language processing.
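The reuse pattern behind this can be sketched as a small registry (an illustration under assumed names, not Yixin’s actual design): each domain’s “horizontal” model is pre-built once and cached, so every later requirement in that domain gets the same instance instead of retraining from scratch:

```python
class HorizontalModelRegistry:
    """Cache of pre-trained 'horizontal' models, one per business domain."""

    def __init__(self):
        self._builders = {}  # domain -> function that (pre)trains the model
        self._cache = {}     # domain -> already-built model

    def register(self, domain, builder):
        self._builders[domain] = builder

    def get(self, domain):
        """Build the domain model once, then reuse the cached copy."""
        if domain not in self._cache:
            self._cache[domain] = self._builders[domain]()
        return self._cache[domain]

calls = []
def build_finance_nlu():
    calls.append("trained")           # expensive pre-training happens here
    return {"domain": "finance-nlu"}  # stand-in for a real model object

registry = HorizontalModelRegistry()
registry.register("finance-nlu", build_finance_nlu)
m1 = registry.get("finance-nlu")
m2 = registry.get("finance-nlu")  # cache hit: no second training run
```

The design choice mirrors the talk’s point: the expensive step (pre-training) is paid once per domain, and agility comes from every downstream task reusing the cached result.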
Horizontal basic AI tasks can therefore support intelligent business requirements by improving the generality and reusability of their models. A basic AI shared-service platform (or the AI middle platform we wish to build) should provide convenient, reusable solution design and automatic deployment, a complete model library and algorithm library management system, and a stable model runtime environment.
For vertical tasks the situation is more complicated. Vertical tasks cover a wide range of requirements, most of them custom developments over varied data, so it is difficult to answer them with a universal model as horizontal tasks do. Development requires dedicated manual annotation, and the models need repeated training and optimization. All of this takes a great deal of time and energy, so a project spends most of its schedule and budget on these links, and AI application projects progress slowly.
More importantly, in reality the front office faces rapidly changing business. Its greatest requirement for intelligent applications is often not optimal performance but rapid development, rapid launch, and immediate effect; in many cases some loss of performance is tolerable. The development speed of most vertical tasks clearly cannot meet this requirement, producing a sharp contradiction between fast-moving front-office business and slow back-office development. If AI services are to be delivered through a middle platform, that platform must be able to solve the biggest pain point of vertical tasks: slow development.
The key to this pain point lies in the complexity of the vertical task’s R&D process:
- In most current vertical task projects, because requirements differ, the algorithm team goes through a series of steps: data acquisition, processing, analysis, modeling, labeling, model training, optimization, validation, deployment, monitoring, updating, and so on.
- Each link has its own complexity, such as data access management, feature engineering development, manual intervention in labeling, and repeated iterations of training and optimization.
- In addition, the whole process involves many roles, such as data analysts, algorithm engineers, labeling engineers, and business analysts, and managing these roles is itself complicated.
The research focus for such complex tasks is therefore the scientific management of the whole life cycle and the optimization of the R&D process and of each link. By separating the links of the R&D process we can optimize the ordering of tasks to a certain extent, clearly assign the participating roles of each link, and shorten the time cost through task parallelism and role collaboration. Digging deeper into individual links, or groups of links, lets us abstract automated operations and reusable processes that further improve business response speed.
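The task-parallelism idea above can be sketched in a few lines (a hedged illustration; the stage functions, role comments, and toy data are all invented): once links are separated and assigned to roles, independent links such as feature work and label preparation can run concurrently, while the dependent training link waits for both:

```python
from concurrent.futures import ThreadPoolExecutor

def extract_features(data):      # role: data analyst
    return [x * 2 for x in data]

def prepare_labels(data):        # role: labeling engineer
    return [x > 2 for x in data]

def train_model(features, labels):  # role: algorithm engineer
    # Stand-in for real training: just pair features with labels.
    return list(zip(features, labels))

raw = [1, 2, 3]
with ThreadPoolExecutor() as pool:
    # Independent links run in parallel...
    f_features = pool.submit(extract_features, raw)
    f_labels = pool.submit(prepare_labels, raw)
    # ...and the dependent link starts once both have finished.
    model = train_model(f_features.result(), f_labels.result())
```

In a real middle platform the same dependency structure would be expressed in a workflow engine rather than raw threads, but the scheduling principle, parallelize independent links and serialize dependent ones, is the same.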
Beyond this, horizontal and vertical tasks share some basic requirements of the AI middle platform.
First, intelligent models need unified data access. A model needs a certain amount of training data during training, must connect to production data once online, and needs still more data for later monitoring and updating. In practice, however, each project’s data sources usually differ, so developers must manually apply for, acquire, clean, and preprocess data for each project, which greatly hurts efficiency. If we can connect to a unified data service platform, or even the data middle platform, this process will save a great deal of time and energy.
Second, intelligent models need a stable deployment and runtime platform, together with a monitoring system that tracks model performance at all times. A convenient model-update mechanism should of course also be in place so that models can be updated and upgraded as needed.
Third, developing intelligent models consumes computing, storage, and other resources. In most enterprises, many projects train models on computing resources the project teams provide themselves, then apply for production resources to configure the environment and deploy when they go online. This fragmented resource management inevitably leads to uncoordinated and wasteful use of resources. A reliable resource management system is needed to control computing resources centrally and provide flexible scheduling.
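At its simplest, centralized resource control is an admission-control problem. This toy sketch (an assumption for illustration, not the actual platform) models a fixed pool of training slots with a counting semaphore, so submitted jobs queue instead of over-committing compute:

```python
import threading

class ResourcePool:
    """Toy centralized scheduler: at most `slots` jobs run concurrently."""

    def __init__(self, slots):
        self._sem = threading.Semaphore(slots)

    def run(self, job):
        with self._sem:   # blocks until a slot is free
            return job()

pool = ResourcePool(slots=2)
results = []
threads = [
    threading.Thread(target=lambda i=i: results.append(pool.run(lambda: i)))
    for i in range(5)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
# All five jobs complete, but never more than two at once.
```

A production system would add priorities, quotas, and preemption (e.g. via Kubernetes or YARN), but the core benefit is the same: one shared pool with a hard concurrency cap instead of per-team, ad hoc resources.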
To sum up, based on the analysis of the two types of AI tasks above, we can now summarize what the AI middle platform should do and what capabilities it should have.
2.2 Goals and Capabilities of the AI Middle Platform
The AI middle platform is committed to solving the slow response and low efficiency of current enterprise intelligent application development, including but not limited to:
- Siloed development: high project cost, difficult integration, repeated work, and no accumulation of capability;
- Many R&D links, lacking optimization, coordination, and automation, so business response is slow;
- Lack of standards to guide model development, chaotic service interfaces, and difficult maintenance and management;
- Lack of unified data access channels: data is hard to obtain, standards are inconsistent, and data preprocessing and feature engineering are repeated;
- Lack of a unified platform for model operation and monitoring, and of update and maintenance mechanisms;
- Decentralized management of basic resources, leaving them under-utilized and wasted.
These problems are widespread. Many of today’s algorithm R&D teams work more like algorithm outsourcing teams: they build positions anew for each business department’s needs and take objectives one at a time, a repetitive process of limited efficiency. The AI middle platform, by contrast, strives to be a powerful AI capability support center that can deliver fire support quickly according to business needs and reach the objective fast. The capabilities an AI middle platform should have therefore include:
- Multi-level reuse: standardized development guidance for algorithms and models, and reusable service encapsulation;
- Unification of services: a unified service interface specification supporting dynamic arrangement and combination of services;
- Process and role optimization: separation and optimization of the R&D process, clear definition of R&D roles, support for task parallelism and role collaboration, and the construction of an AI product R&D assembly line;
- Automated iteration: automatic iteration and hand-off within and between R&D links;
- Data platform integration: connection to the data middle platform or other basic data services, fast access to standardized data, and even data preprocessing;
- Operation monitoring: a unified model runtime environment and monitoring capability, plus a model update mechanism;
- Resource control: unified management of computing and storage resources, supporting flexible scheduling.
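The “unification of services” point can be made concrete with a minimal sketch (illustrative only; the registry, decorator, and service names are invented): if every middle-platform service exposes the same dict-in / dict-out interface, new solutions can be assembled by dynamically chaining registered services:

```python
SERVICES = {}

def service(name):
    """Register a function under a name, with a uniform dict->dict contract."""
    def register(fn):
        SERVICES[name] = fn
        return fn
    return register

@service("tokenize")
def tokenize(payload):
    return {"tokens": payload["text"].split()}

@service("count")
def count(payload):
    return {"n_tokens": len(payload["tokens"])}

def orchestrate(pipeline, payload):
    """Run named services in order, feeding each one's output to the next."""
    for name in pipeline:
        payload = SERVICES[name](payload)
    return payload

out = orchestrate(["tokenize", "count"], {"text": "loan risk score"})
```

Because every service shares one interface, the pipeline is just a list of names; this is the property that makes the “dynamic arrangement and combination of services” in the capability list possible.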
Combining the capabilities above, we give an exploratory definition of the AI middle platform:
The AI middle platform is a complete life-cycle management platform and service configuration system for intelligent models. Built on data platform services, it supports the rapid construction of personalized intelligent services for front-office business by sharing and reusing intelligent services, managing the roles involved in intelligent service development, and standardizing and automating the R&D process.
III. Implementation Route of the AI Middle Platform
The previous section introduced the background, capabilities, and definition of the AI middle platform. This section focuses on its implementation route.
3.1 Main Components of the AI Middle Platform
The figure above shows the life cycle of AI product development. A business requirement comes in and passes through four major steps: business understanding, data processing, model learning, and operation monitoring.
These four steps, together with middle-platform management, constitute the main components of the AI middle platform:
- Business understanding: designing the implementation plan, arranging services, and managing general plan templates according to the business requirement;
- Data processing: data acquisition, data preparation, and analysis;
- Model learning: feature engineering, model training, and model evaluation, plus management of reusable model and algorithm libraries;
- Operation monitoring: automatic model deployment and operation, performance monitoring, and management of external service interfaces;
- In addition, to control the roles, permissions, and resources of the AI middle platform in a unified way, we also set up a middle-platform management module.
3.2 Building from Platforms to the Middle Platform
When constructing a data middle platform we usually adopt a strategy of evolving from platforms to the middle platform, and the same holds for the AI middle platform.
The transition from platforms to the middle platform should draw on common machine learning platforms, including training platforms, deployment/operation platforms, monitoring platforms, labeling platforms, modeling platforms, data processing platforms, and so on.
We can build the AI middle platform on top of these existing platforms. The modeling platform offers business modeling and service/model modeling, usable in business understanding and model learning. The training platform automates model training, optimization, and evaluation, usable in model learning. The data processing link needs data analysis and sample analysis, for which the data processing platform and labeling platform can be used. The deployment/operation platform and monitoring platform support operation monitoring. From this we can see that the AI middle platform can indeed be assembled from the existing platforms.
The figure above is the capability map of the AI middle platform.
- Whether it is the enterprise or the AI team, construction starts from the infrastructure, including data access, high-performance computing resources, runtime environment resources, and so on;
- On that stable base come the training tools, including model training tracking and algorithm framework support, which realize process automation;
- With the support of the training tools, common services and links can be gathered and centralized into an AI platform, including configurable model/service structures and reusable models and algorithms, forming a standardized AI R&D process;
- The AI middle platform then integrates and connects these existing capabilities to realize life-cycle management, including service arrangement and sharing, solution reuse, and overall process management, achieving efficiency gains on top of the standards.
The above figure maps the capabilities of the AI middle platform to the respective components and platforms, color-coding the correspondence.
It is worth noting that only part of the middle platform's capabilities are listed here; other capabilities may need to be added according to the middle platform's business support needs, and those we must build ourselves. In addition, the platforms' support for the middle platform is limited, and we need to fill in missing or incomplete functions.
3.3 Process and Architecture of the AI Middle Platform
The above figure lists the main functional components required for building the AI middle platform, organized by its five components and starting from the front-office business requirements.
- The business understanding part includes scheme template management, scheme design, service orchestration, service sharing, etc.
- The data processing part includes data display, data access, data analysis, data labeling, etc.
- The model learning part includes service design, feature processing, model training, model tracking, the model library, the algorithm library, etc.
- The operation monitoring part includes product packaging, automatic deployment, performance monitoring, access interface management, model updating and release testing, etc.
- Middle-platform management includes role permissions, resource management, tenant management and process control.
Mapping the above functional components to the AI project life cycle yields the overall operation flow shown in the above figure.
- Starting from the business requirements, understand the business, including scheme template reference, scheme design, service orchestration, service sharing, etc. If other services need to be reused, they can be accessed and configured here.
- The data processing part is handled by the data middle platform, which provides data reference upward and supports model training and monitoring downward.
- The model training part forms a loop because it is an automatically iterated process.
- The encapsulation part covers functions such as monitoring and exposing external access interfaces.
- Middle-platform management provides underlying support for the whole construction.
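As a minimal sketch of these hand-offs (all function names and data shapes below are hypothetical placeholders, not the platform's actual interfaces), the four centers can be chained as plain functions:

```python
# Hypothetical end-to-end hand-off between the four centers of the AI
# middle platform, with each stage reduced to a placeholder function.

def business_understanding(requirement):
    """Decompose a requirement into a scheme of services."""
    return {"requirement": requirement, "services": ["risk_analysis"]}

def data_processing(scheme):
    """Provide training data for each service in the scheme."""
    return {svc: [{"x": 1.0, "y": 0}] for svc in scheme["services"]}

def model_learning(datasets):
    """Train one model per service (placeholder: mark it trained)."""
    return {svc: {"trained": True} for svc in datasets}

def operation_monitoring(models):
    """Package trained models and expose access interfaces."""
    return {svc: f"/api/{svc}" for svc in models}

scheme = business_understanding("robo-advisor")
models = model_learning(data_processing(scheme))
endpoints = operation_monitoring(models)
```

The point of the sketch is only the direction of flow: requirements move downward through the centers, and each stage's output is the next stage's input.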
Each part of the operation flow is broken down in detail below.
Business Understanding Center
The operation flow of the business understanding center is shown in the above figure:
- After a business requirement comes in, first obtain data analysis and reference from the data processing center, and collect data samples to provide visual support;
- Then make the scheme selection: is there a reusable scheme template? If yes, reuse the scheme directly, changing only the data; if no, design a new scheme;
- Next, decompose the scheme into individual services and orchestrate them reasonably and effectively, considering which other services can be reused here;
- Finally, produce three outputs: data acquisition requirements to the data processing center; product packaging guidance to the operation monitoring center; model training tasks to the model learning center.
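The scheme-selection decision above can be sketched as follows (the template library and its keys are illustrative assumptions, not Yixin's actual catalogue):

```python
# Hypothetical scheme selection: reuse a matching template if one exists,
# otherwise fall through to designing a new scheme.

TEMPLATE_LIBRARY = {
    "chatbot": {"services": ["intent_detection", "dialogue_management"]},
}

def select_scheme(requirement, templates=TEMPLATE_LIBRARY):
    if requirement in templates:
        # Reusable template found: reuse directly, only the data changes.
        return {"source": "template", **templates[requirement]}
    # No reusable template: a new scheme must be designed.
    return {"source": "new_design", "services": []}
```

A requirement like "chatbot" would reuse the existing template, while an unseen requirement would trigger new scheme design.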
The operation flow of the business understanding center mainly involves three roles:
- Business analysts, who analyze the relevant scheme design and service orchestration;
- Data analysts, who provide data suggestions and scheme design suggestions;
- Algorithm engineers, who consider service orchestration, data interfaces between services, etc.
Data Processing Center
The operation flow of the data processing center is shown in the above figure:
- Acquire the data requirement specification from the business understanding center, and connect to the data middle platform through data access;
- The data middle platform provides data analysis, data display and visualization upward;
- Obtain reference through data display and label the data;
- Operate data access, write back to the data middle platform, and reprocess the data;
- Finally, produce three outputs: data analysis and reference to the business understanding center; model training data to the model learning center; production data to the operation monitoring center.
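A minimal sketch of how one accessed dataset feeds those three consumers (the field names `value` and `split` are illustrative assumptions):

```python
# Hypothetical data processing step: label the accessed records, then
# split the result into the three outputs named above.

def process(records):
    labeled = [dict(r, label=r["value"] > 0) for r in records]
    return {
        # to the business understanding center
        "analysis_reference": {"count": len(labeled)},
        # to the model learning center
        "training_data": [r for r in labeled if r["split"] == "train"],
        # to the operation monitoring center
        "production_data": [r for r in labeled if r["split"] == "prod"],
    }

out = process([{"value": 1.2, "split": "train"},
               {"value": -0.5, "split": "prod"}])
```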
The data processing center operation flow mainly involves four roles:
- Data analysts, who are needed in all the major links;
- Business analysts and algorithm engineers, who focus on data display;
- Labeling engineers, who are mainly involved in data labeling.
Model Learning Center
The Model Learning Center is the primary workspace of algorithm engineers. The operation flow of this part is shown in the above figure.
- Receive model service tasks from the business understanding center, training data from the data processing center, and performance correction information from the operation monitoring center, then design the services. Service design must consider: how many models are needed? How are the models chained together? Are there reusable algorithms and models in the algorithm library and model library?
- After the service flow design is completed, carry out feature processing;
- Encode the features and feed them into the model for training;
- Feed the model training results into the model tracking component for model evaluation;
- Output the optimal trained model obtained through iteration to the operation monitoring center, and write the output data back to the data processing center at the same time.
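The iterate-track-select cycle above can be sketched as follows (the `train()` body is a random stand-in for a real training run; only the loop structure is the point):

```python
# Hypothetical model learning loop: every run is logged by the model
# tracking component, and the best run is selected for publication.
import random

def train(seed):
    random.seed(seed)
    # Stand-in "accuracy" produced by a training run.
    return {"seed": seed, "accuracy": random.random()}

tracking_log = []                 # the model tracking component's record
for seed in range(5):             # automatic iterative training runs
    tracking_log.append(train(seed))

# The optimal model is selected from the tracked runs and would be
# published to the operation monitoring center.
best = max(tracking_log, key=lambda m: m["accuracy"])
```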
Operation Monitoring Center
The operation monitoring center is the link directly facing business users, where operation and maintenance personnel update models and monitor performance. The operation flow of this part is shown in the above figure:
- Receive the production data provided by the data processing center, process the output through the access interface, and write it back to the data processing center;
- Receive the trained model service from the model learning center and the product packaging guidance from the business understanding center, then package, deploy, release and test the product service in sequence (if the product being packaged is an update to an existing product, the existing model is started and stopped appropriately through the model update mechanism before deployment and release testing);
- Provide interaction data upward to the access interface and configure the access interface; provide performance indicators to performance monitoring. If any problem is found, report it to the model learning center for retraining.
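The alarm-and-retrain decision can be sketched as follows (the metric names and threshold values are illustrative assumptions):

```python
# Hypothetical performance monitoring: metrics that breach their alarm
# thresholds flag the service for retraining by the model learning center.

THRESHOLDS = {"min_accuracy": 0.90, "max_latency_ms": 200}

def monitor(metrics, thresholds=THRESHOLDS):
    alarms = []
    if metrics["accuracy"] < thresholds["min_accuracy"]:
        alarms.append("accuracy")
    if metrics["latency_ms"] > thresholds["max_latency_ms"]:
        alarms.append("latency_ms")
    # Any alarm sends the service back for retraining.
    return {"retrain": bool(alarms), "alarms": alarms}
```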
AI Middle Platform Hierarchy
The hierarchy of the AI middle platform is shown above. The AI middle platform sits between the data/model services and the business solutions, connecting to the business upward and to the data downward. Each layer has its own reuse mechanism.
The middle part is divided, from top to bottom, into business understanding, model learning and data processing. Operation monitoring on the right uniformly encapsulates products and models and provides a unified external access interface. On the left is middle-platform management, which runs through the entire process, including role permissions, tenant management, process control, resource management, etc.
IV. Case Analysis: The Intelligent Investment Advisory Robot
The implementation route of the middle platform has been introduced in detail above. This section analyzes how the AI middle platform meets intelligence requirements in actual business, based on the practical case of Yixin's internal intelligent investment advisory robot. (As the robot is a fairly large solution, appropriate abstraction and simplification are applied here.)
4.1 The Intelligent Investment Advisory Robot
Intelligent investment advisory means providing automated asset allocation advice and wealth management services online through artificial intelligence algorithms. For example, Yixin's smart wealth management product, Toumi RA, helps users make scientific asset allocations in an intelligent way, upgrading their wealth management methods.
AI services and data involved in the intelligent investment advisory robot:
- Intelligent interactions, such as chat robots;
- Customer portraits, which are already available for companies with an accumulated customer base;
- Product pool screening, which selects the best-performing products from the existing wealth management product pools. At present, our product pools can be batch-processed on a regular schedule, automatically screening out the better-performing products in each period;
- Risk analysis, which is especially important for an intelligent application in the financial field: users need a reasonable analysis of what risks they can bear and what returns they can expect based on their risk value;
- Asset matching: Yixin currently has many asset classes, such as funds, stocks and real estate, and also carries out global asset matching, which requires scientific, rational and quantitative analysis that comprehensively weighs risk factors;
- Investment product recommendation, which draws on the personal information, risk analysis, asset allocation, etc. in the user's portrait to recommend the product with the optimal return for the user.
After understanding the characteristics of the investment advisory robot, let us walk through the implementation of this case against the operation flow of the AI middle platform.
4.2 Case Implementation
In the business understanding stage, first consider what kind of business scheme is needed and whether it is reusable. If a reusable scheme exists, accessing the data directly is sufficient; if not, a new scheme must be designed.
As shown in the design guide on the right side of the above figure, data interface configuration and data source/role configuration need to be considered. For example: what are the data interfaces of the scheme? What services are involved? How are results returned? What roles are involved in each structure? And so on.
Most importantly, consider which services are reusable. Assuming the company already has a chat robot, it can be used here to receive customers and handle intelligent chat. The wealth product portrait service has also already been established, so the data source produced by the product pool screening service can be connected directly. For the asset matching service, we may already have a relevant offline model packaged as a service. All of the above can be reused, while the risk analysis service and the subsequent investment product recommendation service need to be newly built.
Once the reusable services and the services to be newly built are defined, each team can develop in parallel, with the same role configuration, so the next stage can be entered very quickly, improving development efficiency.
Continuing from the previous stage, the actual model design and training for the risk analysis service is carried out.
Obtain the model service task from the scheme design and design the service flow; its input is a filtered user portrait. As shown in the structure on the right side of the above figure, two relatively simple models are designed: one is the risk tolerance evaluation model, which also reuses an existing feature screening model to extract the features useful to the model from the user portrait and feed them in; the other is the liquidity demand model, which evaluates asset demand. A Python code fragment is added here to process the user portrait data before it enters the models, and a newly built model merges and outputs the results.
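A minimal reconstruction of that idea might look as follows. This is not the production fragment: the portrait fields (age, income, savings) and the scoring formulas are illustrative assumptions; only the structure (reused feature screening, two models, a merge step) follows the talk.

```python
# Hypothetical risk analysis service: screened portrait features feed two
# simple models, whose outputs a newly built step merges.

def screen_features(portrait):
    """Reused feature-screening step: keep only model-relevant fields."""
    return {k: portrait[k] for k in ("age", "income", "savings")}

def risk_tolerance(features):
    """Toy risk tolerance model: higher income and younger age raise the score."""
    return min(1.0, features["income"] / 100_000 + (60 - features["age"]) / 100)

def liquidity_demand(features):
    """Toy liquidity demand model: low savings imply high short-term cash need."""
    return 1.0 if features["savings"] < 10_000 else 0.2

def risk_analysis(portrait):
    """Newly built merge step combining both model outputs."""
    f = screen_features(portrait)
    return {"risk_tolerance": risk_tolerance(f),
            "liquidity_demand": liquidity_demand(f)}
```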
The models support automatic training and visual tracking. Once the model orchestration is determined, the task is automatically dispatched to an engineer, who can develop on a terminal through an interactive programming interface and then upload the code. The hosting server can publish the code directly to the training cluster for automatic training; the training results are then pushed to the tracking server, where the relevant data support repeated iterations of model optimization. The tracking server also records every metric and model, and the optimal model can be selected and published to the monitoring center.
Operation monitoring mainly encapsulates, tests and releases services and schemes, including interface configuration. A single service can be tested, or the whole scheme can be tested together.
After the service goes online, overall performance is monitored through visual support and relevant statistical reports. As shown in the above figure, once an alarm is raised, the service can be returned to the model learning center for retraining.
The earlier parts are all related to data processing. The data processing part is handed directly to the data middle platform; the AI middle platform only provides the access interface to it. The main operations include: observing the data through the data middle platform's visualization tools, preprocessing the data using the data middle platform's data models, and labeling the data for each model on the labeling platform. The ultimate goal is to support access, during model training, to the training data bound to the data middle platform, such as files, databases and other data storage systems.
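As a small illustration of that boundary, an in-memory SQLite database can stand in for storage bound to the data middle platform (table and column names here are hypothetical):

```python
# Model training only sees a query interface to the data middle
# platform's storage, not the storage system behind it.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE training_samples (x REAL, y INTEGER)")
conn.executemany("INSERT INTO training_samples VALUES (?, ?)",
                 [(0.1, 0), (0.9, 1)])

def load_training_data(conn):
    """What the training step receives: rows, regardless of backing store."""
    return conn.execute("SELECT x, y FROM training_samples").fetchall()
```

Swapping the SQLite connection for a file reader or another database client would leave `load_training_data`'s consumers unchanged, which is the point of binding data access behind an interface.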
On the right side of the above figure are Yixin's open-source tools, including DBus, Wormhole, Moonbox and Davinci, which can help everyone build the data middle platform better.
The sections above introduced the background, targets, definition and implementation route of the AI middle platform.
- Building an AI middle platform can reuse existing technologies, capabilities and platforms; it is an agile intelligent business support scheme.
- The AI middle platform is the product of the intelligent evolution of data and business; automated models replace manual workflows, reducing resource and personnel costs.
- The AI middle platform's capability building rests on the data platform/middle platform, faces front-office business requirements, and improves responsiveness to those requirements.
- Our ultimate goal is to build the AI middle platform by settling technology, sharing services, optimizing processes, integrating resources, reducing costs and improving efficiency. Achieving this goal will take a fairly long process of exploration and practice.
Moving from platform to middle platform realizes the business-oriented transition step by step; it is a gradual process that cannot be achieved overnight. The most important thing for an enterprise implementing a middle-platform strategy is to establish, based on its own business reality and research and development conditions, a middle-platform system and methodology that meets its business needs, with the goals of sharing services, integrating resources, reducing costs and improving efficiency.
Q1: Along which dimensions should the AI middle platform evaluate the importance of a requirement? Business requirements vary widely; how should reusable AI models be designed?
A: The assessment of requirements should not be done by the AI middle platform alone. When the business front office submits requirements, it should discuss them with business analysis experts and requirements analysis experts to determine project priority. The assessment dimensions mainly include the importance of the business, the scope of impact on customers, and time urgency. Generally the assessment is completed in a dedicated requirements analysis system.
The question of reusable AI model design is quite broad. Reuse is mainly identified from the business, and an experienced architect can recognize reusable designs at different granularities fairly easily. Briefly, by level:
At the algorithm level there is little to say. At the model level, the main consideration is the functional granularity of a single model. Generally we do not recommend making one model too complex; overly complex functions are usually split across multiple models, each implementing one function, which are then combined through model orchestration in service design. Model generality requires defining the model's data interface and structure, and considering retraining and incremental training mechanisms so the model can adapt when reused. The functional commonality of the model also deserves attention.
Reuse at the service level is relatively easy to identify: relatively fixed, independent scenario services, such as chat robots or customer risk control. Services meant for reuse generally need little retraining or adjustment and can be called directly or after simple configuration. Service reuse design is similar to web service design in SOA, increasing the configurability of services.
Reuse at the scheme level is relatively rare, because a solution is already a fairly fixed set of products; the reuse we advocate there is closer to a template and guiding framework, with the intermediate service models filled in by users themselves. Scheme-level reuse design can therefore be abstracted directly from important products.
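A sketch of such a model-level reuse contract, pairing a declared data interface with a retraining hook (all names here are hypothetical, not an actual Yixin API):

```python
# Hypothetical reuse contract: a model declares its data interface and
# exposes a retraining hook so it can adapt when reused on new data.
from dataclasses import dataclass, field

@dataclass
class ReusableModel:
    input_fields: tuple                         # declared data interface
    weights: dict = field(default_factory=dict)

    def predict(self, record):
        # Toy linear model over the declared interface.
        return sum(self.weights.get(f, 0.0) * record[f]
                   for f in self.input_fields)

    def retrain(self, new_weights):
        """Retraining / incremental-training hook used on reuse."""
        self.weights.update(new_weights)
```

Because the input fields are part of the contract, a team reusing the model knows exactly what data to supply, and retraining adapts the weights without changing the interface.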
Q2: Have these platforms landed yet? What is the effect on business promotion?
A: It has partly landed and is constantly improving. Research and development is faster, engineers are spared tedious work, efficiency is high, and more intelligent products are delivered to the business :)
Q3: Is your AI middle platform offered externally, and does it support on-premises deployment?
A: When the AI middle platform is mature, we will consider releasing some of its capabilities in the form of tools. On-premises deployment is of course under consideration.
Q4: Can the division of labor between the front office and the middle platform become unclear, and how can that be resolved?
A: Mapped to our R&D process, the division between the front office and the middle platform is clear. When the R&D plan is determined, the front office is responsible only for the design and interaction management of front-office business logic; the remaining data and AI functions are pushed directly to middle-platform modules such as the technology middle platform, data middle platform and AI middle platform for support. The division has also been made explicit at the organizational level: different teams reflect the different natures of the work. The only role that may cross between the two is the business analysis expert, who may come from the front-office team but whose authority is limited. Role division is configured entirely through middle-platform management; the roles that can be mapped in each link differ, so front-office business personnel cannot intervene in algorithm work, and the degree to which technical personnel participate in business analysis can also be managed. In summary, the division between the front office and the middle platform is an important part of an enterprise's middle-platform strategy; it requires not only sorting out business processes, but also unified adjustment of the organizational structure and personnel responsibilities.
[The above is the full content of Dr. Jing Yuxin's sharing.]