# [Prediction model] Steel corrosion rate prediction with a MATLAB GUI BP neural network [including MATLAB source code 107]

## 1. Introduction

1 Overview
The BP (Back Propagation) neural network was proposed in 1986 by a research group led by Rumelhart and McClelland; see their paper "Learning representations by back-propagating errors", published in Nature.

A BP neural network is a multi-layer feedforward network trained by the error back-propagation algorithm, and it is one of the most widely used neural network models. A BP network can learn and store a large number of input-output mappings without any mathematical equation describing the mapping being given in advance. Its learning rule is steepest descent: through back-propagation it continuously adjusts the weights and thresholds of the network so as to minimize the network's sum of squared errors.

2 The basic idea of the BP algorithm
Last time we saw that the multilayer perceptron hits a bottleneck: how to obtain the weights of the hidden layer. Since we cannot obtain them directly, can we adjust them indirectly, using the error between the actual output and the expected output observed at the output layer? The BP algorithm is designed around exactly this idea. Its basic idea is that the learning process consists of two phases: forward propagation of the signal and backward propagation of the error.
In forward propagation, the input sample enters at the input layer, is processed layer by layer through the hidden layers, and reaches the output layer. If the actual output of the output layer differs from the expected output (the teacher signal), the error back-propagation phase begins.
In back-propagation, the error is transmitted from the output layer back toward the input layer, layer by layer through the hidden layers, and is apportioned to all the units of each layer. This yields an error signal for each unit, which serves as the basis for correcting that unit's weights.
The specific procedures of these two phases are introduced later.

The signal flow diagram of the BP algorithm is shown in the figure below.

3 BP network characteristic analysis: the three elements of BP
When we analyze an ANN, we usually start from its three elements, namely
1) network topology;
2) transfer function;
3) learning algorithm.

Together, the characteristics of these three elements determine the functional characteristics of the ANN, so we also study the BP network starting from them.
3.1 Topological structure of the BP network
As noted last time, a BP network is actually a multilayer perceptron, so its topology is the same as a multilayer perceptron's. Since a single-hidden-layer (three-layer) perceptron can already solve simple nonlinear problems, it is the most widely used form. The topology of the three-layer perceptron is shown in the figure below.
One of the simplest three-layer BP networks:

3.2 Transfer function of the BP network
The transfer function used by the BP network is a nonlinear transformation, the Sigmoid function (also called the S function). Both the function itself and its derivative are continuous, which makes it very convenient to work with. Why this function is chosen will be explained further when we introduce the learning algorithm of the BP network.
The unipolar sigmoid function curve is shown in the figure below.

The bipolar sigmoid function curve is shown in the figure below.
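The two curves above are easy to reproduce numerically. The article's code is MATLAB, but as an illustrative sketch in Python, the unipolar sigmoid, the bipolar sigmoid (tanh), and the derivative form that BP relies on look like this:

```python
import math

def sigmoid(x):
    """Unipolar (logistic) sigmoid: maps the reals into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_bipolar(x):
    """Bipolar sigmoid (tanh): maps the reals into (-1, 1)."""
    return math.tanh(x)

def sigmoid_deriv(y):
    """Derivative of the logistic sigmoid, written in terms of its
    own output y = sigmoid(x): y * (1 - y). This identity is what
    makes the sigmoid convenient in the BP weight updates."""
    return y * (1.0 - y)
```

The fact that the derivative can be computed from the already-available activation, with no extra exponentials, is one reason the sigmoid is a convenient choice for BP.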

3.3 The learning algorithm of the BP network
The learning algorithm of the BP network is the BP algorithm, also called the δ (delta) learning rule (in studying ANNs we will find many terms that go by several names). Taking the three-layer perceptron as an example, when the network output is not equal to the expected output there is an output error E. With dₖ the expected output and oₖ the actual output of output unit k, it is defined as the sum of squared errors:

E = (1/2) Σₖ (dₖ − oₖ)²
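As a quick check of this definition (a minimal Python sketch; the article's own code is MATLAB):

```python
def output_error(expected, actual):
    """Sum-of-squares output error E = 1/2 * sum_k (d_k - o_k)^2."""
    return 0.5 * sum((d - o) ** 2 for d, o in zip(expected, actual))
```

When the actual output matches the expected output exactly, E is zero; any mismatch makes E strictly positive, which is what gradient descent then drives down.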

Next, we will introduce the specific process of BP network learning and training.

4 Training decomposition of the BP network
Training a BP neural network really means adjusting two kinds of parameters: the network weights and the biases. The training process of a BP neural network is divided into two parts:

Forward transmission: the output values are passed forward layer by layer;
reverse feedback: the weights and biases are adjusted layer by layer in the reverse direction.
Let's look at forward transmission first.
Forward transmission (feed-forward)
Before training the network, we initialize the weights and biases randomly: each weight takes a random real number in [-1, 1], and each bias takes a random real number in [0, 1]; then forward transmission starts.

The training of a neural network is completed over many iterations. Each iteration uses all the records of the training set, but each individual training step uses only one record. The abstract description is as follows:

```
while termination conditions are not met:
    for record in dataset:
        trainModel(record)
```
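The loop above can be made concrete. In this Python sketch (illustrative only; `train_model` and `terminated` are placeholder names introduced here, not from the source), each epoch visits every record once and updates the model from one record at a time:

```python
def train(dataset, train_model, terminated):
    """Online training loop: repeat epochs until the termination
    predicate is satisfied; within an epoch, train_model is called
    once per record."""
    epoch = 0
    while not terminated(epoch):
        for record in dataset:
            train_model(record)  # update weights from a single record
        epoch += 1
    return epoch
```

This record-at-a-time scheme is the "online" (stochastic) style of BP training, as opposed to batch updates that accumulate the gradient over the whole set first.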

4.1 Backpropagation
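The derivation for this section is not reproduced in the text, but the standard single-hidden-layer update it refers to can be sketched. In this Python sketch (an illustrative stand-in for the article's MATLAB; the names `train_step`, `W1`, `W2` are chosen here, not from the source), one step performs a forward pass, computes the output-layer and hidden-layer error signals, and applies the gradient-descent corrections:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_step(x, d, W1, b1, W2, b2, lr=0.5):
    """One online BP step for a single-hidden-layer sigmoid network.
    Mutates W1, b1, W2, b2 in place; returns the squared error
    measured before the update."""
    # Forward pass: input -> hidden -> output
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    o = [sigmoid(sum(w * hi for w, hi in zip(row, h)) + b)
         for row, b in zip(W2, b2)]
    # Output-layer error signal: delta_k = (d_k - o_k) * o_k * (1 - o_k)
    delta_o = [(dk - ok) * ok * (1 - ok) for dk, ok in zip(d, o)]
    # Hidden-layer error signal, back-propagated through W2
    delta_h = [hj * (1 - hj) *
               sum(W2[k][j] * delta_o[k] for k in range(len(delta_o)))
               for j, hj in enumerate(h)]
    # Gradient-descent corrections for weights and biases
    for k, dk in enumerate(delta_o):
        for j in range(len(h)):
            W2[k][j] += lr * dk * h[j]
        b2[k] += lr * dk
    for j, dj in enumerate(delta_h):
        for i in range(len(x)):
            W1[j][i] += lr * dj * x[i]
        b1[j] += lr * dj
    return 0.5 * sum((dk - ok) ** 2 for dk, ok in zip(d, o))
```

Note how the hidden-layer deltas are formed from the output-layer deltas weighted by W2: this is exactly the "error allocated back through the layers" described in Section 2.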

4.2 Training termination conditions
Each round of training uses all the records of the data set, so when do we stop? There are two stopping conditions:
set a maximum number of iterations, for example stop after iterating over the data set 100 times;
compute the prediction accuracy of the network on the training set, and stop once it reaches a preset threshold.
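The two conditions combine naturally into a single predicate (a minimal Python sketch; `max_epochs` and `target_accuracy` are illustrative parameter names, and the thresholds shown are examples, not values from the source):

```python
def should_stop(epoch, accuracy, max_epochs=100, target_accuracy=0.95):
    """Stop when either the epoch budget is exhausted or the
    training-set accuracy has reached the target threshold."""
    return epoch >= max_epochs or accuracy >= target_accuracy
```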

5 The specific process of BP network operation
5.1 Network structure
The input layer has n neurons, the hidden layer has p neurons, and the output layer has q neurons.
5.2 Variable definition

Step 9: Judge the rationality of the model
Judge whether the network error meets the requirement.
If the error has reached the preset accuracy, or the number of learning iterations exceeds the designed maximum, the algorithm ends.
Otherwise, select the next learning sample and its corresponding expected output, return to Step 3, and enter the next round of learning.

6 Design of the BP network
In designing a BP network, one should generally consider the number of layers, the number of neurons in each layer and their activation functions, the initial values, and the learning rate. Some selection principles follow.
6.1 The number of layers
Theory has proved that a network with biases and at least one sigmoid hidden layer plus a linear output layer can approximate any rational function. Increasing the number of layers can further reduce the error and improve accuracy, but it also complicates the network. Moreover, a single-layer network with only nonlinear activation functions is generally not used: any problem solvable with such a single-layer network can also be solved with an adaptive linear network, which computes faster. For problems that can only be solved with nonlinear functions, the accuracy of a single layer is not high enough, and the desired result can only be reached by increasing the number of layers.
6.2 Number of hidden layer neurons
Improving training accuracy can also be achieved, within a single hidden layer, by increasing the number of neurons, which is structurally much simpler than adding layers. Generally, we use accuracy and training time to quantify the quality of a neural network design:
(1) When the number of neurons is too small, the network cannot learn well, many training iterations are needed, and the training accuracy is low.
(2) When the number of neurons is too large, the network's function is stronger and its accuracy higher, but training takes more iterations and overfitting may occur.
From this we get the principle for choosing the number of hidden-layer neurons: on the premise that the problem can be solved, add one or two extra neurons to speed up the reduction of the error.

6.3 Selection of initial weights
Generally, the initial weights are random numbers between (−1, 1). In addition, after analyzing how a two-layer network trains a function, Widrow et al. proposed choosing initial weights on the order of s^(1/r), where r is the number of inputs and s is the number of neurons in the first layer.
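The basic initialization described in Section 4 (weights in (−1, 1), biases in [0, 1)) is a one-liner per layer. A minimal Python sketch (`init_weights` is a name chosen here, not from the source):

```python
import random

def init_weights(n_out, n_in):
    """Initialize one layer: weights uniform in (-1, 1), biases in
    [0, 1), matching the ranges stated earlier in the article."""
    W = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]
    b = [random.random() for _ in range(n_out)]
    return W, b
```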

6.4 Learning rate
The learning rate is generally chosen in the range 0.01-0.8. A learning rate that is too large may make the system unstable, while one that is too small makes convergence too slow and training too long. For a more complex network, different learning rates may be needed at different positions on the error surface. To reduce the number of trials, and the time, spent searching for a learning rate, a more appropriate method is a variable, adaptive learning rate that gives the network different rates at different stages of training.
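One common form of such an adaptive rule grows the rate while the error keeps falling and shrinks it when an update overshoots. A minimal Python sketch (the function name and the growth/shrink factors 1.05 and 0.7 are illustrative choices, not values from the source):

```python
def adapt_lr(lr, error, prev_error, up=1.05, down=0.7):
    """Increase the learning rate while the error decreases between
    iterations; cut it back sharply when the error rises."""
    return lr * up if error < prev_error else lr * down
```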

6.5 Selection of the expected error
When designing the network, the expected error value should also be determined by comparison and training; a suitable value is relative to the number of hidden-layer nodes required. Generally, two networks with different expected error values can be trained at the same time, and one of them finally selected by weighing all factors.

7 Limitations of the BP network
The BP network has the following problems:

(1) Long training time: this is mainly caused by a learning rate that is too small, and can be improved with a varying or adaptive learning rate.
(2) Training may fail entirely: this usually shows up as network paralysis. To avoid it, one can choose smaller initial weights and use a smaller learning rate.
(3) Local minima: the gradient descent method used here may converge to a local minimum; better results may be obtained with a multilayer network or more neurons.

8 Improvement of the BP network
The main goals of improving the BP algorithm are to speed up training and to avoid falling into local minima. Common improvements include the momentum-factor method, the adaptive learning rate, the varying learning rate, and translation of the activation function. The basic idea of the momentum-factor method is to add, on top of the back-propagation update, a term proportional to the previous weight change, and then generate the new weight change according to the back-propagation rule. The adaptive learning-rate method targets specific problems. The varying-learning-rate method works on the principle that if the sign of the derivative of the objective function with respect to a weight stays the same over several successive iterations, that weight's learning rate is increased; conversely, if the sign alternates, its learning rate is reduced. Translating the activation function means shifting it, that is, adding a constant.
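The momentum idea can be written down directly. A minimal Python sketch (the function name, and the default factors lr=0.1 and mu=0.9, are illustrative choices, not from the source): the new weight change is the momentum factor times the previous change plus the ordinary gradient-descent step.

```python
def momentum_update(w, grad, velocity, lr=0.1, mu=0.9):
    """One momentum step: new change = mu * previous change
    - lr * gradient; returns (new_weight, new_change)."""
    v = mu * velocity - lr * grad
    return w + v, v
```

Because successive changes in the same direction accumulate, momentum speeds up travel across flat regions of the error surface and helps carry the weights past shallow local minima.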

## 2. The source code

```
function varargout = main(varargin)
% MAIN MATLAB code for main.fig
%   MAIN, by itself, creates a new MAIN or raises the existing
%   singleton*.
%
%   H = MAIN returns the handle to a new MAIN or the handle to
%   the existing singleton*.
%
%   MAIN('CALLBACK',hObject,eventData,handles,...) calls the local
%   function named CALLBACK in MAIN.M with the given input arguments.
%
%   MAIN('Property','Value',...) creates a new MAIN or raises the
%   existing singleton*. Starting from the left, property value pairs are
%   applied to the GUI before main_OpeningFcn gets called. An
%   unrecognized property name or invalid value makes property application
%   stop. All inputs are passed to main_OpeningFcn via varargin.
%
%   *See GUI Options on GUIDE's Tools menu. Choose "GUI allows only one
%   instance to run (singleton)".
%

% Edit the above text to modify the response to help main

% Last Modified by GUIDE v2.5 26-Apr-2019 17:16:49

% Begin initialization code - DO NOT EDIT
gui_Singleton = 1;
gui_State = struct('gui_Name',       mfilename, ...
                   'gui_Singleton',  gui_Singleton, ...
                   'gui_OpeningFcn', @main_OpeningFcn, ...
                   'gui_OutputFcn',  @main_OutputFcn, ...
                   'gui_LayoutFcn',  [], ...
                   'gui_Callback',   []);
if nargin && ischar(varargin{1})
    gui_State.gui_Callback = str2func(varargin{1});
end

if nargout
    [varargout{1:nargout}] = gui_mainfcn(gui_State, varargin{:});
else
    gui_mainfcn(gui_State, varargin{:});
end
% End initialization code - DO NOT EDIT

% --- Executes just before main is made visible.
function main_OpeningFcn(hObject, eventdata, handles, varargin)
% This function has no output args, see OutputFcn.
% hObject    handle to figure
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
% varargin   command line arguments to main (see VARARGIN)

% Choose default command line output for main
handles.output = hObject;
% delete(allchild(handles.axes1));
global color wor;
color = [0.94 0.94 0.94];
haxes = axes('visible', 'off', 'units', 'normalized', 'position', [0 0 1 1]);
img = imread('background.jpg');  % background image; the imread line is missing in the original listing and this filename is assumed
show = image(img);
axis off;
hold on
wor = [];
wor.FontName = 'New Song Ti';
wor.FontWeight = 'bold';
wor.FontAngle = 'normal';
wor.FontUnits = 'points';
wor.FontSize = 30;
w1 = text(210, 140, 'Prediction of steel corrosion rate based on BP neural network', 'FontName', 'New Song Ti', ...
    'FontWeight', wor.FontWeight, 'FontAngle', wor.FontAngle, 'FontUnits', wor.FontUnits, ...
    'FontSize', wor.FontSize, 'color', 'r');
w2 = text(520, 560, 'Changan University', 'FontName', 'New Song Ti', ...
    'FontWeight', wor.FontWeight, 'FontAngle', wor.FontAngle, 'FontUnits', wor.FontUnits, ...
    'FontSize', wor.FontSize, 'color', 'r');
w3 = text(120, 220, {'Prediction of steel corrosion rate based on BP neural network'}, ...
    'FontName', 'Times New Roman', ...
    'FontWeight', 'normal', 'FontAngle', wor.FontAngle, 'FontUnits', wor.FontUnits, ...
    'FontSize', wor.FontSize-5, 'color', 'r');
w4 = text(450, 650, 'Chang''an University', 'FontName', 'Times New Roman', ...
    'FontWeight', 'normal', 'FontAngle', wor.FontAngle, 'FontUnits', wor.FontUnits, ...
    'FontSize', wor.FontSize-5, 'color', 'r');
handles.w1 = w1;
handles.w2 = w2;
handles.w3 = w3;
handles.w4 = w4;
handles.haxes = haxes;
handles.show = show;
p1 = [handles.w1 handles.w2 handles.w3 handles.w4 handles.haxes handles.show];
% p2 = [handles.axes1 handles.text1 handles.text2 handles.text3];
% p3 = [handles.text4 handles.text5 handles.text6 handles.text7 handles.text8];
set(p1, 'Visible', 'on');
axis off;
% set([p2, p3], 'Visible', 'off');
% Update handles structure
guidata(hObject, handles);

% UIWAIT makes main wait for user response (see UIRESUME)
% uiwait(handles.figure1);

% --- Outputs from this function are returned to the command line.
function varargout = main_OutputFcn(hObject, eventdata, handles)
% varargout  cell array for returning output args (see VARARGOUT);
% hObject    handle to figure
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Get default command line output from handles structure
varargout{1} = handles.output;

% --------------------------------------------------------------------
function Untitled_1_Callback(hObject, eventdata, handles)
% hObject    handle to Untitled_1 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% --------------------------------------------------------------------
function Untitled_8_Callback(hObject, eventdata, handles)
% hObject    handle to Untitled_8 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
winopen('samples1.xls');

% --------------------------------------------------------------------
function bangzhu_Callback(hObject, eventdata, handles)
% hObject    handle to bangzhu (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% --------------------------------------------------------------------
function banbenxinxi_Callback(hObject, eventdata, handles)
% hObject    handle to banbenxinxi (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
fushilv_version();

% --------------------------------------------------------------------
function caozuozhidao_Callback(hObject, eventdata, handles)
% hObject    handle to caozuozhidao (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
open('help.docx');

% --------------------------------------------------------------------
function Untitled_2_Callback(hObject, eventdata, handles)
% hObject    handle to Untitled_2 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
moxing();

% --------------------------------------------------------------------
function beijingzhaopian_Callback(hObject, eventdata, handles)
% hObject    handle to beijingzhaopian (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
[fname, pname, index] = uigetfile({'*.jpg'}, 'Select picture');
if index
    str = [pname fname];
    c = imread(str);  % read the selected picture before displaying it
    image(c);
    axis off;
end
axis off;
hold on
wor = [];
wor.FontName = 'New Song Ti';
wor.FontWeight = 'bold';
wor.FontAngle = 'normal';
wor.FontUnits = 'points';
wor.FontSize = 30;
w1 = text(210, 140, 'Prediction of steel corrosion rate based on BP neural network', 'FontName', 'New Song Ti', ...
    'FontWeight', wor.FontWeight, 'FontAngle', wor.FontAngle, 'FontUnits', wor.FontUnits, ...
    'FontSize', wor.FontSize, 'color', 'r');
w2 = text(520, 560, 'Changan University', 'FontName', 'New Song Ti', ...
    'FontWeight', wor.FontWeight, 'FontAngle', wor.FontAngle, 'FontUnits', wor.FontUnits, ...
    'FontSize', wor.FontSize, 'color', 'r');
w3 = text(120, 220, {'Prediction of steel corrosion rate based on BP neural network'}, ...
    'FontName', 'Times New Roman', ...
    'FontWeight', 'normal', 'FontAngle', wor.FontAngle, 'FontUnits', wor.FontUnits, ...
    'FontSize', wor.FontSize-5, 'color', 'r');
w4 = text(450, 650, 'Chang''an University', 'FontName', 'Times New Roman', ...
    'FontWeight', 'normal', 'FontAngle', wor.FontAngle, 'FontUnits', wor.FontUnits, ...
    'FontSize', wor.FontSize-5, 'color', 'r');
handles.w1 = w1;
handles.w2 = w2;
handles.w3 = w3;
handles.w4 = w4;
p1 = [handles.w1 handles.w2 handles.w3 handles.w4];
set(p1, 'Visible', 'on');
axis off;

% --------------------------------------------------------------------
function Untitled_7_Callback(hObject, eventdata, handles)
% hObject    handle to Untitled_7 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
close(gcf);

% --- Executes when figure1 is resized.
function figure1_ResizeFcn(hObject, eventdata, handles)
% hObject    handle to figure1 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% --- Executes on button press in pushbutton1.
function pushbutton1_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton1 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
close(gcf);
main_windows();
```

MATLAB version: R2014a