Research on artificial neural networks has developed almost in parallel with that on computers. In 1943, the psychologist McCulloch and the mathematician Pitts proposed the mathematical model of the formal neuron. In the late 1950s, Rosenblatt proposed the perceptron model. In 1982, Hopfield introduced the concept of an energy function and put forward a mathematical model of the neural network. In 1986, Rumelhart, LeCun and other scholars put forward the multilayer perceptron.
Through the efforts of many researchers, neural network theory has become increasingly complete and the variety of algorithms continues to grow. A large body of theoretical results on neural networks now exists, and many books on the basic theory have been published; the field remains one of the hot spots of global nonlinear science research.
A neural network is an information processing system that simulates the neural structure of the human brain in order to realize the brain's intelligent functions. It has the basic functions of the human brain, but it is not a faithful reproduction of the brain; rather, it is an abstract, simplified, simulated model of it, and is therefore called an artificial neural network (Bian, 2000).
The artificial neuron is the node of a neural network and one of its most important components. Many neuron models exist; the most commonly used and simplest are those whose output function is a threshold function or a Sigmoid function (Figure 4-3).
Figure 4-3 Artificial Neurons and Two Common Output Functions
The neural network learning and recognition method was originally inspired by the learning and recognition process of neurons in the human brain. The input parameters, like signals received by a neuron, are connected to the neuron through certain weights (equivalent to the intensity of the stimulus exciting the nerve). This process somewhat resembles multiple linear regression, but the nonlinearity of the simulation appears in the next step: the excitation mode of the neuron is determined by a threshold (the excitation limit of the neuron), and the result is obtained by applying the output function. After a large number of samples enter the network for learning and training, the weights connecting the input signals to the neurons stabilize at values that satisfy the training samples as far as possible. Once the rationality of the network structure and the accuracy of the learning have been confirmed, the input parameters of the samples to be predicted are fed into the network to obtain the parameter predictions.
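The weighted-sum-then-excitation behaviour described above can be sketched in a few lines of Python. The function names, input values, and the threshold value below are illustrative, not from the original text; the two output functions correspond to the threshold and Sigmoid models of Figure 4-3.

```python
import math

def threshold_neuron(inputs, weights, theta):
    # Weighted sum of the inputs (weight = intensity of the stimulus),
    # then a hard threshold: the neuron fires (outputs 1) only when the
    # net stimulus exceeds the excitation limit theta.
    net = sum(x * w for x, w in zip(inputs, weights))
    return 1.0 if net > theta else 0.0

def sigmoid_neuron(inputs, weights, theta):
    # Same weighted sum, but with a smooth S-shaped output in (0, 1)
    # instead of an all-or-nothing response.
    net = sum(x * w for x, w in zip(inputs, weights)) - theta
    return 1.0 / (1.0 + math.exp(-net))

x = [0.5, 1.0, -0.3]   # hypothetical input parameters
w = [0.8, 0.4, 0.6]    # hypothetical connection weights
print(threshold_neuron(x, w, theta=0.5))
print(sigmoid_neuron(x, w, theta=0.5))
```

With these numbers the net stimulus is 0.62, above the threshold of 0.5, so the threshold neuron fires while the sigmoid neuron returns a value slightly above 0.5.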
4.2.2 Back Propagation Algorithm (BP Method)
So far, more than a dozen neural network models have been proposed, such as the feedforward neural network, the perceptron, the Hopfield network, the radial basis function network, and the back propagation algorithm (BP method). In the inversion of reservoir parameters, however, the most mature and popular network type is the BP-ANN.
The BP network was developed on the basis of the feedforward neural network. It always has an input layer (with a node for each input variable), an output layer (with a node for each output value), and at least one hidden layer (also called an intermediate layer), whose number of nodes is not limited. In a BP-ANN, nodes in adjacent layers are connected with arbitrary initial weights, while nodes within the same layer are not connected to each other. The basis functions of the hidden-layer and output-layer nodes must be continuous and monotonically increasing, and must approach fixed values as the input tends to positive or negative infinity; that is, the basis function is "S"-shaped (Kosko, 1992). The training of a BP-ANN is a supervised learning process involving two data sets, namely a training data set and a supervision data set.
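The layered, fully-connected structure just described can be illustrated with a minimal forward pass, assuming a three-layer network with arbitrary layer sizes and random initial weights (all names and sizes below are illustrative):

```python
import math
import random

def sigmoid(net):
    # "S"-shaped basis function: continuous, monotonically increasing,
    # approaching 0 and 1 as net tends to minus/plus infinity.
    return 1.0 / (1.0 + math.exp(-net))

def forward_layer(values, weights):
    # Each node of the next layer sums the outputs of ALL nodes of the
    # previous layer (adjacent layers fully connected); nodes within a
    # layer are not connected to each other.
    return [sigmoid(sum(v * w for v, w in zip(values, row))) for row in weights]

random.seed(0)
n_in, n_hidden, n_out = 3, 4, 1   # layer sizes chosen arbitrarily
w_ih = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hidden)]
w_ho = [[random.uniform(-1, 1) for _ in range(n_hidden)] for _ in range(n_out)]

x = [0.2, 0.7, 0.1]               # one input sample
hidden = forward_layer(x, w_ih)   # input layer -> hidden layer
output = forward_layer(hidden, w_ho)  # hidden layer -> output layer
print(output)
```

Because the basis function is bounded in (0, 1), every node's output stays in that interval regardless of the initial weights.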
The process of providing a set of input information to the input layer of the network so as to produce the expected output at the output layer is called network learning, or training the network, and the method that realizes this is called the learning algorithm. The learning process of a BP network consists of two stages. The first is the forward pass, in which the output value of each unit is computed layer by layer from the input layer through the hidden layers. The second is the back propagation pass, in which the error of each hidden-layer unit is computed step by step backwards from the output error, and the weights of the preceding layer are corrected using this error. Error information is thus propagated back through the network and the weights are adjusted so that the error gradually decreases, until a satisfactory output is achieved. After learning, a suitable and stable set of weights is fixed; the samples to be predicted are then supplied as input-layer parameters, and one forward pass through the network yields the output, which is the network's prediction.
The main steps of the back propagation algorithm are as follows: first select initial values for the weight coefficients, then repeat the following process until convergence (processing each sample in turn).
(1) Compute the output $O_j$ of each unit from front to back:
$O_j = f(\mathrm{net}_j)$, $\mathrm{net}_j = \sum_i w_{ij} O_i$,
where $f$ is the S-shaped basis function and the sum runs over the units $i$ of the preceding layer.
(2) Compute $\delta_j$ for the output layer:
$\delta_j = O_j (1 - O_j)(t_j - O_j)$,
where $t_j$ is the supervised (target) value of output unit $j$.
(3) Compute $\delta_j$ for each hidden layer from back to front:
$\delta_j = O_j (1 - O_j) \sum_k \delta_k w_{jk}$,
where $k$ runs over the units of the following layer.
(4) Compute and save each weight correction:
$\Delta w_{ij} = \eta\, \delta_j O_i$,
where $\eta$ is the learning-rate coefficient.
(5) Correct the weights:
$w_{ij} \leftarrow w_{ij} + \Delta w_{ij}$.
The above algorithm corrects the weights after each sample; alternatively, $\delta_j$ can be computed for every sample and summed, and the weights corrected once according to the total error.
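The five steps can be sketched as a minimal per-sample (online) BP trainer. This is an illustrative implementation under the usual sigmoid-BP assumptions, not the original text's code; the network size, learning rate, and the XOR demonstration data are all chosen arbitrarily to show the nonlinear fitting ability.

```python
import math
import random

def sigmoid(net):
    return 1.0 / (1.0 + math.exp(-net))

def train_bp(samples, n_hidden=4, eta=0.5, epochs=10000, seed=1):
    rng = random.Random(seed)
    n_in = len(samples[0][0])
    # arbitrary small initial weights; the extra column/entry is a bias term
    w_h = [[rng.uniform(-0.5, 0.5) for _ in range(n_in + 1)] for _ in range(n_hidden)]
    w_o = [rng.uniform(-0.5, 0.5) for _ in range(n_hidden + 1)]
    for _ in range(epochs):
        for x, t in samples:                     # correct weights per sample
            xs = x + [1.0]
            # (1) forward: compute O_j of each unit from front to back
            h = [sigmoid(sum(xi * wi for xi, wi in zip(xs, row))) for row in w_h]
            hs = h + [1.0]
            o = sigmoid(sum(hi * wi for hi, wi in zip(hs, w_o)))
            # (2) delta of the output layer
            delta_o = o * (1.0 - o) * (t - o)
            # (3) deltas of the hidden layer, from back to front
            delta_h = [h[j] * (1.0 - h[j]) * delta_o * w_o[j] for j in range(n_hidden)]
            # (4) + (5) compute the corrections and update the weights
            for i in range(n_hidden + 1):
                w_o[i] += eta * delta_o * hs[i]
            for j in range(n_hidden):
                for i in range(n_in + 1):
                    w_h[j][i] += eta * delta_h[j] * xs[i]
    def predict(x):
        xs = x + [1.0]
        h = [sigmoid(sum(xi * wi for xi, wi in zip(xs, row))) for row in w_h] + [1.0]
        return sigmoid(sum(hi * wi for hi, wi in zip(h, w_o)))
    return predict

# tiny demonstration: XOR, a classic nonlinear mapping no linear model can fit
data = [([0.0, 0.0], 0.0), ([0.0, 1.0], 1.0), ([1.0, 0.0], 1.0), ([1.0, 1.0], 0.0)]
f = train_bp(data)
print([round(f(x), 2) for x, _ in data])
```

A batch variant would accumulate the corrections of step (4) over all samples and apply step (5) once per epoch according to the total error.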