#### Front-end Artificial Intelligence: Deriving Function Equations through Machine Learning (Platinum Ⅲ)

# What is TensorFlow.js?

TensorFlow.js is a JavaScript library for machine learning and model training that runs in browsers and in Node.js. As we all know, pure JavaScript computation in the browser is slow. TensorFlow.js accelerates its high-performance machine learning kernels on the GPU via WebGL, enabling front-end developers to do machine learning and train neural networks directly in the browser. The project explained in this article simulates data according to a known rule, and then, through machine learning and training, deduces from that data the formula of the function that generated it.

# Basic concept

Next, let’s take five minutes to go through the basic concepts of TensorFlow.js. This part mainly introduces concepts, and the author will use some analogies to describe them briefly, to help readers understand them quickly. However, due to limited energy and space, readers should refer to the official documentation for detailed definitions.

# Tensors

A tensor is essentially an array, which can be one-dimensional or multi-dimensional. Tensors are the basic unit of data in TensorFlow.js.

```javascript
const tensors = tf.tensor([[1.0, 2.0, 3.0], [10.0, 20.0, 30.0]]);
tensors.print();
```

This prints the tensor’s values to the browser console. TensorFlow.js also provides semantically named tensor creation functions: tf.scalar (a zero-dimensional tensor), tf.tensor1d (one dimension), tf.tensor2d (two dimensions), tf.tensor3d (three dimensions), tf.tensor4d (four dimensions), as well as tf.ones (a tensor filled with 1s) and tf.zeros (a tensor filled with 0s).

# Variable

Tensors are immutable, while variables are mutable. Variables are initialized from tensors. The code is as follows:

```javascript
const initialValues = tf.zeros([5]);        // [0, 0, 0, 0, 0]
const biases = tf.variable(initialValues);  // initialize a variable from a tensor
biases.print();                             // outputs [0, 0, 0, 0, 0]
```

# Operations

Tensors can be manipulated with operations such as add, sub, mul, square, and mean.

```javascript
const e = tf.tensor2d([[1.0, 2.0], [3.0, 4.0]]);
const f = tf.tensor2d([[5.0, 6.0], [7.0, 8.0]]);

const e_plus_f = e.add(f);
e_plus_f.print();
```

The above example outputs [[6, 8], [10, 12]].

# Memory management (dispose and tf.tidy)

dispose and tf.tidy are both used to free GPU memory held by tensors. The closest analogy in ordinary JavaScript is releasing an object by assigning null to its variable:

```javascript
var a = {num: 1};
a = null;  // release the reference so the object can be garbage-collected
```

# dispose

Tensors and variables can be disposed to free their GPU memory:

```javascript
const x = tf.tensor2d([[0.0, 2.0], [4.0, 6.0]]);
const x_squared = x.square();

x.dispose();
x_squared.dispose();
```

# tf.tidy

When there are many tensors and variables, calling dispose on each one is tedious. With tf.tidy, wrapping tensor or variable operations in a tf.tidy callback automatically cleans up the intermediate memory for us.

```javascript
const average = tf.tidy(() => {
  const y = tf.tensor1d([4.0, 3.0, 2.0, 1.0]);
  const z = tf.ones([4]);
  return y.sub(z);
});

average.print();
```

The above example outputs [3, 2, 1, 0].

# Simulated data

First of all, we will simulate a set of data. According to the cubic equation y = ax³ + bx² + cx + d, with parameters a = -0.8, b = -0.2, c = 0.9, d = 0.5, we generate some noisy data points with x in the interval [-1, 1]. (figure: scatter plot of the simulated data) Now suppose we do not know the values of the four parameters a, b, c and d: we will use machine learning and training to deduce, in reverse, the polynomial equation and its four parameter values from this pile of data.
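The repository generates this data in its own data.js; as a rough sketch of the idea in plain JavaScript (the function name, noise model and sigma value here are assumptions, not the repo’s actual code), it could look like this:

```javascript
// Sketch (not the repo's actual data.js): generate noisy samples of
// y = a*x^3 + b*x^2 + c*x + d for x in [-1, 1].
function generateData(numPoints, { a, b, c, d }, sigma = 0.04) {
  const xs = [];
  const ys = [];
  for (let i = 0; i < numPoints; i++) {
    const x = Math.random() * 2 - 1;               // uniform in [-1, 1]
    const noise = (Math.random() * 2 - 1) * sigma; // hypothetical noise model
    xs.push(x);
    ys.push(a * x ** 3 + b * x ** 2 + c * x + d + noise);
  }
  return { xs, ys };
}

const data = generateData(100, { a: -0.8, b: -0.2, c: 0.9, d: 0.5 });
```

Each y value sits within the noise band around the true cubic, which is what makes the later fit imperfect but close.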

# Set up variables

Because we have to deduce the four parameter values a, b, c and d of the polynomial equation in reverse, we must first define these four variables and assign random numbers to them as initial values.

```javascript
const a = tf.variable(tf.scalar(Math.random()));
const b = tf.variable(tf.scalar(Math.random()));
const c = tf.variable(tf.scalar(Math.random()));
const d = tf.variable(tf.scalar(Math.random()));
```

In the four lines above, tf.scalar creates a zero-dimensional tensor, and tf.variable turns that tensor into a mutable variable. Translated into plain JavaScript, the four lines are equivalent to:

```javascript
let a = Math.random();
let b = Math.random();
let c = Math.random();
let d = Math.random();
```

Suppose the initial random values come out as a = 0.513, b = 0.261, c = 0.259, d = 0.504. Plugging these parameters into the equation produces a curve that is very different from the curve of the real data. (figure: curve from the random initial parameters vs. the real data) This is exactly what we will do next: through machine learning and training, continuously adjust the four parameters a, b, c and d so that this curve gets as close to the actual data curve as possible.

# Create an optimizer

```javascript
const learningRate = 0.5;
const optimizer = tf.train.sgd(learningRate);
```

learningRate defines the learning rate. In each training step, the parameters are adjusted in proportion to the learning rate. A lower learning rate yields more accurate values, but increases the program’s running time and amount of computation. A high learning rate speeds up learning, but because each step is too large, the parameters easily oscillate around the correct values and the result is less accurate.
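This trade-off is easy to see on a toy problem. The sketch below (an illustration I added, not part of the article’s project) minimizes f(w) = w², whose gradient is 2w, with plain gradient descent at two rates:

```javascript
// Toy illustration of the learning-rate trade-off: minimize f(w) = w^2
// by gradient descent. The minimum is at w = 0.
function descend(learningRate, steps) {
  let w = 1.0;
  for (let i = 0; i < steps; i++) {
    w -= learningRate * 2 * w; // w := w - lr * f'(w)
  }
  return Math.abs(w); // distance from the minimum
}

const small = descend(0.1, 20); // each step multiplies w by 0.8: steady convergence
const large = descend(1.1, 20); // each step multiplies w by -1.2: oscillates and diverges
```

With the small rate the distance shrinks toward 0; with the too-large rate the iterate flips sign every step and swings further away, which is exactly the oscillation described above.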

tf.train.sgd is the SGD optimizer that TensorFlow.js packages for us, i.e. stochastic gradient descent. In machine learning, gradient descent is usually used to train our models, and it commonly comes in three forms: BGD (batch), SGD (stochastic) and MBGD (mini-batch).
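To build intuition for the per-sample updates that distinguish SGD from batch gradient descent, here is a plain-JavaScript sketch of one SGD epoch for a simple linear model h(x) = w·x + b. This is an illustration I added for intuition, not how tf.train.sgd is implemented internally:

```javascript
// One stochastic-gradient-descent epoch for h(x) = w*x + b with
// squared-error loss: each sample updates the parameters immediately,
// unlike batch gradient descent, which averages over all samples first.
function sgdEpoch(xs, ys, { w, b }, learningRate) {
  for (let i = 0; i < xs.length; i++) {
    const err = (w * xs[i] + b) - ys[i]; // h(x_i) - y_i
    w -= learningRate * err * xs[i];     // per-sample gradient w.r.t. w
    b -= learningRate * err;             // per-sample gradient w.r.t. b
  }
  return { w, b };
}

// Fit y = 2x + 1 from a handful of noiseless samples.
const xs = [0, 0.25, 0.5, 0.75, 1];
const ys = xs.map(x => 2 * x + 1);
let params = { w: 0, b: 0 };
for (let epoch = 0; epoch < 200; epoch++) {
  params = sgdEpoch(xs, ys, params, 0.3);
}
```

After a couple of hundred epochs, w and b land very close to the true 2 and 1; each individual step is noisy, but on the whole the parameters drift toward the optimum, as described below.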

We use SGD, the stochastic form of gradient descent, because in batch gradient descent every parameter update requires all training samples, so training becomes very slow as the number of samples grows. Stochastic gradient descent was proposed to solve this problem. Suppose the hypothesis of a general linear regression is hθ(x) = Σⱼ θⱼxⱼ. SGD uses the loss of a single sample to compute the partial derivative with respect to θ and updates θ with the corresponding gradient: θⱼ := θⱼ + α(y⁽ⁱ⁾ − hθ(x⁽ⁱ⁾))xⱼ⁽ⁱ⁾. Stochastic gradient descent thus updates iteratively for each individual sample, whereas batch gradient descent needs all training samples for one iteration. SGD iterates more frequently, and its search through the solution space looks blind, but on the whole it moves toward the optimum. (figure: convergence of stochastic gradient descent)

# The training process

Writing the expected function model means describing our function model with a series of operations:

```javascript
function predict(x) {
  // y = a * x ^ 3 + b * x ^ 2 + c * x + d
  return tf.tidy(() => {
    return a.mul(x.pow(tf.scalar(3, 'int32')))
      .add(b.mul(x.square()))
      .add(c.mul(x))
      .add(d);
  });
}
```

a.mul(x.pow(tf.scalar(3, 'int32'))) describes ax³ (a times the third power of x), b.mul(x.square()) describes bx² (b times the square of x), and c.mul(x) works the same way. Note that the return value of predict is wrapped in tf.tidy, which simplifies memory management and optimizes memory use during training.

# Define loss function

Next we will define a loss function using MSE (mean squared error). In mathematical statistics, the mean squared error is the expected value of the square of the difference between an estimate and the true value. MSE is a convenient way to measure “average error” and can evaluate how much the data varies: the smaller the MSE, the more accurately the prediction model describes the experimental data. Computing it is simple: for each given x, take the square of the difference between the actual y and the predicted y, then average these squared differences, i.e. MSE = (1/n) Σᵢ (ŷᵢ − yᵢ)². Accordingly, our loss function code is as follows:

```javascript
function loss(prediction, labels) {
  const error = prediction.sub(labels).square().mean();
  return error;
}
```

Subtract the actual values (labels) from the predicted values (prediction), square the differences, and take their mean.

# Machine training

All right: after so much explanation and so much preparation, we finally reach the most critical step. The code below truly computes the desired result from the data through machine learning and training. We have already defined an optimizer based on SGD, and a loss function based on MSE. How do we combine the two for training? Look at the following code.

```javascript
const numIterations = 75;

async function train(xs, ys, numIterations) {
  for (let iter = 0; iter < numIterations; iter++) {
    optimizer.minimize(() => {
      const pred = predict(xs);
      // loss function: MSE (mean squared error)
      return loss(pred, ys);
    });
    // prevent browser blocking
    await tf.nextFrame();
  }
}
```

We defined numIterations = 75 on the outside, meaning 75 training iterations. In each iteration we call optimizer.minimize, which applies SGD to update and revise our four parameters a, b, c, d, and on every return calls our MSE-based loss function to reduce the loss. After 75 iterations of training, with the SGD optimizer and the loss function calibrating each step, we end up with four parameter values very close to the correct ones.
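To make the whole pipeline concrete, here is the same idea condensed into plain JavaScript: predict, MSE loss and gradient descent in one loop, with hand-derived gradients standing in for what optimizer.minimize computes automatically. This is a sketch I added for intuition (noiseless data, more iterations than the article’s 75), not the tfjs code:

```javascript
// End-to-end sketch: fit y = a*x^3 + b*x^2 + c*x + d by full-batch
// gradient descent on the MSE loss, with hand-derived gradients.
const TRUE = { a: -0.8, b: -0.2, c: 0.9, d: 0.5 };
const xs = Array.from({ length: 100 }, (_, i) => -1 + (2 * i) / 99);
const ys = xs.map(x => TRUE.a * x ** 3 + TRUE.b * x ** 2 + TRUE.c * x + TRUE.d);

let p = { a: Math.random(), b: Math.random(), c: Math.random(), d: Math.random() };
const lr = 0.5;

for (let iter = 0; iter < 2000; iter++) {
  const grads = { a: 0, b: 0, c: 0, d: 0 };
  for (let i = 0; i < xs.length; i++) {
    const x = xs[i];
    const err = p.a * x ** 3 + p.b * x ** 2 + p.c * x + p.d - ys[i];
    // d(MSE)/d(param) = mean over samples of 2 * err * d(pred)/d(param)
    grads.a += (2 * err * x ** 3) / xs.length;
    grads.b += (2 * err * x ** 2) / xs.length;
    grads.c += (2 * err * x) / xs.length;
    grads.d += (2 * err) / xs.length;
  }
  p.a -= lr * grads.a;
  p.b -= lr * grads.b;
  p.c -= lr * grads.c;
  p.d -= lr * grads.d;
}
```

On this noiseless data the parameters converge essentially to the true values; the article’s run stops at looser values because it uses noisy data and only 75 iterations.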

Notice the line await tf.nextFrame() at the end of the loop. Training performs a large number of computations that would otherwise block the browser and freeze UI updates; tf.nextFrame() yields control back to the browser between iterations.

We call this machine-trained function train:

```javascript
import {generateData} from './data';  // this file is in the git repository
const trainingData = generateData(100, {a: -.8, b: -.2, c: .9, d: .5});
await train(trainingData.xs, trainingData.ys, 75);
```

After calling the train function, we can get four parameters: a, b, c and d.

```javascript
console.log('a', a.dataSync());
console.log('b', b.dataSync());
console.log('c', c.dataSync());
console.log('d', d.dataSync());
```

The final values are a = -0.564, b = -0.207, c = 0.824, d = 0.590, which are quite close to the actual values a = -0.8, b = -0.2, c = 0.9, d = 0.5 that we defined earlier. (figure: comparison of the fitted curve with the real data)

# Project installation and operation

The code installation and operation steps involved in this article are as follows:

```shell
git clone https://github.com/tensorflow/tfjs-examples
cd tfjs-examples/polynomial-regression-core
yarn
yarn watch
```

There are many projects in the official TensorFlow.js examples repository; polynomial-regression-core (polynomial regression) is the one this article focuses on. My own installation did not go smoothly: every run reported missing modules. Readers only need to install the missing modules one by one as reported, and search for solutions to the corresponding error messages; in the end it will run.

# Conclusion

I did not want to write a conclusion after writing so much, but I still want to share a funny, absurd thought. Why am I interested in this artificial intelligence example? Because in my hometown in Guangxi (a remote mountain village) there is a folk superstition: taking a person’s birth date and hour to calculate and tell their life’s fortune, with much said along the way. I always scoffed at these customs. But, but, but... here comes the absurd part. My father-in-law broke a leg in a car accident ten years ago. A few years ago, when the family went back to my hometown to visit relatives, he thought these southern superstitions were very funny, so he gave his own birth date to the old fortune-teller in the village. The old man said a great deal, and named the exact date of the car accident, down to the approximate time that afternoon... This... this got interesting... The scene of the whole room suddenly falling silent is still vivid in my mind, and the matter has stayed with me ever since. After all, I have never believed in ghosts and spirits; I have always believed science is the only truth that carries us forward. And yet... because of this, I really don’t know what to make of it...

Huh? What does this have to do with artificial intelligence? I am just wondering whether each person’s birth date is a set of coordinates in some Cartesian coordinate system, or the a, b, c, d, e coefficients of a polynomial function, and whether there really exists a polynomial equation that connects these coefficients into a formula that can describe your life, record your past, and predict your future... Could we find our own corresponding dimensions, connect them with what has happened, and then use artificial intelligence to machine-learn and train a function for our own life’s trajectory... Let’s not go on. I feel sorry for the readers who made it this far. Study well, and forget what I just said.

The above views are purely personal. It is time to go back to moving bricks and minding the baby. I wish you all success soon! ^_^