[Filtering tracking] Matlab source code for track filtering and tracking based on EKF, UKF, PF, EPF and UPF Kalman-type filters

1. Purpose of using Kalman filter

      Let's assume that you have built a small robot that can walk in the woods. In order to navigate accurately, the robot needs to know its own location at all times.

      We use symbols to represent the state variables of the robot. Here we assume that the state variables only contain the position and speed of the robot:

      Here we write the state as a vector x⃗ = (p, v), where p is the position and v the velocity. The state is simply a list of numbers describing the configuration of the system; in general it could contain any other quantities.

      The robot also carries a GPS sensor, accurate to about 10 m. That is useful, but to navigate the woods safely the robot needs to know its position more precisely than 10 m, so the GPS reading alone is not good enough.

      On the one hand, we have a motion model (the prediction): it tells us roughly where the robot should be at the next moment, but with some inaccuracy. On the other hand, we have the GPS measurement (the observation): it also tells us something about the position, again with some inaccuracy.

      In other words, the GPS does not give us the exact position directly; a GPS reading only constrains where the robot might be, with some uncertainty.

      So, if neither the prediction nor the measurement is reliable by itself, can we combine the two to get a better answer than either one alone? The answer is yes, and that is exactly what the Kalman filter does.


2. How the Kalman filter sees the problem

      The Kalman filter assumes that both variables (here, position and velocity) are random and Gaussian distributed:

      Each variable has a mean value μ, which is the center of the random distribution (its most likely value), and a variance σ², which represents the uncertainty:


      In the uncorrelated case, knowing the value of one variable tells us nothing about what the other might be. The more interesting case, however, is when the two variables are correlated.

      When the position and speed variables are related, the schematic diagram is shown below. As shown in the figure, the possibility (probability) of observing a certain position is affected by the current robot speed.

      The situation described in the figure above can arise as follows. Suppose we use the position at the previous moment to predict the position at the next moment. If the robot moves at a higher speed, it covers a larger distance, so the predicted position changes by a larger amount; and vice versa.

      This relationship is very important, because it gives us more information: the state of one variable tells us something about the states of other variables. This is also the purpose of the Kalman filter: to extract as much useful information as possible from uncertain measurements.

      In the subsequent calculations, we use the covariance matrix to represent and describe this correlation: the element Σij describes the correlation between the i-th state variable and the j-th state variable. The covariance matrix is usually denoted by the symbol Σ.

3. Express the problem in a matrix

      Now we use Gaussian distributions to express the state quantities (position and speed), so we describe the robot's state at the current moment k by two quantities:

            1. The best estimate x̂k - the mean of each state quantity, which can also be denoted by the symbol μ;

            2. The covariance matrix Pk.

      Here we only use the position and speed of the robot to represent the state quantity, but in actual operations and calculations, the definition of the state matrix can include any other useful variables.

      Now we need to relate the current state (t = k-1) to the predicted state at the next moment (t = k). Note that although we cannot say which point of the current distribution is the true state, we can still obtain the distribution at the new moment through the prediction equation, because the prediction equation acts not on one specific state value but on every point contained in the current distribution.

      In the figure below, we use the matrix Fk to represent the prediction step:

      Now think about it, how do we use a matrix to predict the position and speed of the robot at the next moment? Here, we will use the kinematic equation shown below to make predictions:
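      Written out for a time step Δt, the constant-velocity kinematics are:

```latex
p_k = p_{k-1} + \Delta t \, v_{k-1}, \qquad v_k = v_{k-1}
```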

      Converted to matrix form:
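      With the state vector x̂ = [p, v]ᵀ, the matrix form is:

```latex
\hat{x}_k = \begin{bmatrix} 1 & \Delta t \\ 0 & 1 \end{bmatrix} \hat{x}_{k-1} = F_k\, \hat{x}_{k-1}
```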

      Obviously, this is a uniform-motion (constant-velocity) model: the new position is p_new = p_previous + Δt·v. We have now obtained the prediction matrix Fk (also called the state-transition matrix), but we still do not know how to update the covariance matrix.

      In order to update the covariance matrix, we need a new formula:
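      The new formula is the standard rule for how a linear transformation acts on a covariance matrix:

```latex
\mathrm{Cov}(x) = \Sigma \quad\Rightarrow\quad \mathrm{Cov}(F_k\, x) = F_k\, \Sigma\, F_k^{\mathsf T}
```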

      This formula means that if every point contained in the current distribution is transformed by the prediction matrix, the updated covariance matrix is obtained.

      The above content can be summarized as the following two formulas:
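      In symbols, the two formulas are:

```latex
\hat{x}_k = F_k\, \hat{x}_{k-1}, \qquad P_k = F_k P_{k-1} F_k^{\mathsf T}
```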

4. External influences

      Even after this calculation, we still have not included all the information. There are influences that are not captured by the state itself: factors in the external world can also affect the system.

      Taking train operation as an example, the operator may throw a switch to make the train accelerate. Similarly, the robot's navigation system may issue commands that make the drive wheels turn or stop. Therefore, we need to take the influence of such variables on the system into account in the calculation. Generally speaking, we call such quantities the control variables of the system and collect them in the control vector uk.

      Now we assume that for the case of the robot, it is known that its external control quantity is acceleration a (controlled by a motion switch or control command), then the previously described operating equation can be updated as:
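      With a known acceleration a, the kinematic equations become:

```latex
p_k = p_{k-1} + \Delta t\, v_{k-1} + \tfrac{1}{2}\, a\, \Delta t^2, \qquad v_k = v_{k-1} + a\, \Delta t
```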

      The matrix form is:
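      In standard notation:

```latex
\hat{x}_k = F_k\, \hat{x}_{k-1} + \begin{bmatrix} \Delta t^2 / 2 \\ \Delta t \end{bmatrix} a = F_k\, \hat{x}_{k-1} + B_k\, u_k
```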

      Where Bk is the control matrix and uk is the (n × 1) control vector. Control variables are added to the equation only when they are known or available; they are not indispensable.

      Now let us think about another question, if our prediction is not a 100% accurate model, what will happen?

5. External uncertainty

      If our variables are calculated based on the properties of the system itself and known external influences, then the state calculation will not be too problematic. But what if we can't get their values explicitly for the quantities that affect the system?

      Still taking our robot as an example, the driving wheel may slip during the movement, and the robot may also hit the bumps on the ground (the ground is uneven) and slow down. It is difficult for us to express or track the impact of these factors. Once these situations occur, our predicted results are likely to deviate greatly from the actual results, because we did not take these factors into account in the mathematical model.

      But don't worry, there is a mathematical solution to this situation. We add new uncertainties in each prediction step to indicate the impact of variables that exist in the world but we cannot clearly express.

      Each state in the original estimate will move into a new state range through the transformation equation.

      As shown in the figure below, each state point at the previous moment (indicated by the blue area) will move into a new area (enlarged by the newly added uncertainty, represented by a green circle). We regard the effects that cannot be modelled explicitly but bring uncertainty to the state variables as noise (in the KF this is usually assumed to be white Gaussian noise, with covariance denoted Qk).

      Due to the addition of additional noise covariance, it is obvious that the distribution area in this case will be significantly different compared to the distribution area generated in the previous derivation.      

      Now, let us add the new noise covariance to the formula derived earlier:
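      With the process-noise covariance Qk added, the prediction equations become:

```latex
\hat{x}_k = F_k\, \hat{x}_{k-1} + B_k\, u_k, \qquad P_k = F_k P_{k-1} F_k^{\mathsf T} + Q_k
```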

      These two formulas form the prediction part of the Kalman filter and are the first two of its five basic formulas. The estimate x̂k, obtained from the system state at the previous moment and the current control input, is also called the a priori estimate; it is the mean of the Gaussian distribution of each variable. The covariance matrix Pk represents the uncertainty and is computed from the covariance matrix at the previous moment plus the covariance Qk of the external noise.

      Ok, now we have got the predicted (estimated) value of the system. But in a real system, we can often get some measurement values that reflect the state of the system through sensors.

6. Measured value (observed value)

      Usually we install some sensors on the robot, and the return values (measurements) of these sensors let us know more about the robot's current state. For example, suppose we have two sensors, one returning position information and the other returning speed information. These two sensors indirectly provide information about the robot's motion state (sensors operate on the state and produce a set of readings).

      It should be noted that the units and scale of the information returned by the sensor may differ from the units and scale of the state quantities we track in the prediction step. Therefore, we usually use the matrix Hk to represent the relationship between the state vector and the measurement observed by the sensor.

      Usually, we can calculate the distribution of the sensor's return value (through the parameters and other information in the sensor's factory instructions):
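      In the usual notation, the expected mean and covariance of the sensor readings are obtained by mapping the predicted state through Hk:

```latex
\vec{\mu}_{\text{expected}} = H_k\, \hat{x}_k, \qquad \Sigma_{\text{expected}} = H_k P_k H_k^{\mathsf T}
```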

      It should be noted that one of the reasons the Kalman filter is so useful is that it handles sensor noise very well. That is, the sensor's observed (measured) value has a certain unreliability caused by noise, so a sensor reading is not an exact value but a range with an area of uncertainty.

      Through sensor observation, we can guess the current state of our robot. But because of the uncertainty, some states are more likely to be considered as sensor readings than others.

      In the Kalman filter, we denote the covariance of this sensor uncertainty (due to noise) by Rk. The mean of the observation distribution is equal to the sensor's reading itself, which we denote zk.

      Therefore, we now have two different Gaussian distributions: one representing the prediction step and one representing the sensor's observations.

      We now need to combine the distribution from the prediction step (pink) with the distribution of the sensor readings (green); the weighting between them is the Kalman gain, which will be explained later.

      So which one is the most likely current state situation now? For a certain set of sensor readings (Z1, Z2), we now have two possibilities:

            1. The current sensor measurement is wrong and cannot represent the state of the robot well (the current state of the robot should be exactly the same as the distribution of the prediction step);

            2. The current sensor measurement is correct, which can well represent the current state of the robot (the current state of the robot should be exactly the same as the sensor observation distribution);

      If we now assume that both possibilities are credible and multiply the two distributions together, we obtain a new distribution as shown below:

        After multiplying, what remains is the overlap region, in which both hypotheses have high probability. This newly generated distribution is much more precise than either of the previous estimates alone (fully trusting the prediction step, or fully trusting the sensor observation), so its mean is the best estimate of the robot's current state (which can be represented by the figure below).

      The above argument shows that when you multiply two Gaussian distributions with separate means and covariance matrices, you obtain a new Gaussian distribution with its own mean and covariance matrix. Perhaps you can already see that there must be a mathematical expression for computing the new parameters from the old ones.

7. Combining Gaussian

      Now let's find this mathematical formula together. For ease of understanding, we first derive from the one-dimensional situation. The one-dimensional Gaussian distribution can be expressed as:
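      The one-dimensional Gaussian with mean μ and variance σ² is:

```latex
\mathcal{N}(x, \mu, \sigma) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}
```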

      Now we want to know what happens when we multiply two Gaussian curves (distributions). The blue area in the figure below represents the new Gaussian distribution that results from the multiplication.

      Now we substitute the formula of the one-dimensional Gaussian distribution into the product above:
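      Carrying out the multiplication and renormalizing yields a new Gaussian whose parameters are:

```latex
k = \frac{\sigma_0^2}{\sigma_0^2 + \sigma_1^2}, \qquad \mu' = \mu_0 + k(\mu_1 - \mu_0), \qquad \sigma'^2 = \sigma_0^2 - k\,\sigma_0^2
```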

      Now let us rewrite the above formulas in matrix form, letting Σ denote the covariance matrix and μ⃗ the mean vector of each Gaussian:
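      In matrix form the same result reads:

```latex
K = \Sigma_0 (\Sigma_0 + \Sigma_1)^{-1}, \qquad \vec{\mu}' = \vec{\mu}_0 + K(\vec{\mu}_1 - \vec{\mu}_0), \qquad \Sigma' = \Sigma_0 - K\,\Sigma_0
```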

      K is called Kalman gain, we will use it later.

8.  Integration

      We have two distributions: the prediction, with (μ⃗0, Σ0) = (Hk·x̂k, Hk·Pk·HkT), and the sensor observation, with (μ⃗1, Σ1) = (zk, Rk). We substitute these two distributions into the formula:

      The overlapping area can be solved:
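      Solving for the overlap in these terms gives:

```latex
H_k\, \hat{x}'_k = H_k\, \hat{x}_k + K(\vec{z}_k - H_k\, \hat{x}_k), \qquad H_k P'_k H_k^{\mathsf T} = H_k P_k H_k^{\mathsf T} - K\, H_k P_k H_k^{\mathsf T}
```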

      Therefore, the Kalman gain is:
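      Substituting (μ⃗0, Σ0) and (μ⃗1, Σ1) into K = Σ0(Σ0 + Σ1)⁻¹ gives:

```latex
K = H_k P_k H_k^{\mathsf T}\left(H_k P_k H_k^{\mathsf T} + R_k\right)^{-1}
```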

      In the above derivation, every term in the state equation is pre-multiplied by Hk, and the covariance equation is additionally post-multiplied by HkT, so these factors can be cancelled. The simplified formulas are:
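      After cancelling Hk (and HkT), with the gain written as K′, the update equations become:

```latex
K' = P_k H_k^{\mathsf T}\left(H_k P_k H_k^{\mathsf T} + R_k\right)^{-1}, \qquad \hat{x}'_k = \hat{x}_k + K'(\vec{z}_k - H_k\, \hat{x}_k), \qquad P'_k = P_k - K' H_k P_k
```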

      In the Kalman filter, we call these three formulas the update step, which is also the last three of the five basic formulas of the Kalman filter.

      The estimate x̂′k here is the final optimal estimate of the robot's state at the current moment, and P′k is the updated covariance matrix. These two values are the outputs at the current moment and are fed into the next iteration of the Kalman filter.

      The schematic diagram of the Kalman filter workflow is roughly as follows:

9. Summary

      The Kalman filter computation is divided into two steps, prediction and update, governed by five basic equations:

            1. Prediction steps:
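      In standard notation:

```latex
\hat{x}_k = F_k\, \hat{x}_{k-1} + B_k\, u_k, \qquad P_k = F_k P_{k-1} F_k^{\mathsf T} + Q_k
```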

            2. Update steps     
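      In standard notation:

```latex
K' = P_k H_k^{\mathsf T}(H_k P_k H_k^{\mathsf T} + R_k)^{-1}, \qquad \hat{x}'_k = \hat{x}_k + K'(\vec{z}_k - H_k\, \hat{x}_k), \qquad P'_k = P_k - K' H_k P_k
```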

      It should be noted that the Kalman filter is only suitable for linear systems. In the real world the systems we face are usually nonlinear, so the extended Kalman filter is more commonly used in engineering. The extended Kalman filter is very similar to the Kalman filter; the only difference is that the system must first be linearized (via the Jacobian matrix).
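      To make the five equations concrete, here is a minimal runnable sketch in Python/NumPy (separate from the article's MATLAB program below) for the constant-velocity model described above. The time step, noise covariances, and measurement list are illustrative assumptions, not values from the article.

```python
import numpy as np

# Constant-velocity model from the text: state x = [position, velocity].
dt = 1.0                             # assumed time step
F = np.array([[1.0, dt],
              [0.0, 1.0]])           # prediction (state-transition) matrix Fk
B = np.array([[0.5 * dt**2],
              [dt]])                 # control matrix Bk for an acceleration input
H = np.array([[1.0, 0.0]])           # observation matrix Hk: we measure position only
Q = 0.01 * np.eye(2)                 # process-noise covariance Qk (assumed)
R = np.array([[1.0]])                # measurement-noise covariance Rk (assumed)

def predict(x, P, a):
    """Prediction step: the first two basic equations."""
    x = F @ x + B * a                # x_k = F x_{k-1} + B u_k
    P = F @ P @ F.T + Q              # P_k = F P_{k-1} F' + Q
    return x, P

def update(x, P, z):
    """Update step: Kalman gain, state update, covariance update."""
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # K' = P H' (H P H' + R)^-1
    x = x + K @ (z - H @ x)                        # x' = x + K'(z - H x)
    P = P - K @ H @ P                              # P' = P - K' H P
    return x, P

x = np.array([[0.0], [1.0]])         # initial estimate: p = 0, v = 1
P = np.eye(2)                        # initial covariance
for z in [1.1, 2.0, 2.9]:            # assumed noisy position readings
    x, P = predict(x, P, a=0.0)      # no control input in this run
    x, P = update(x, P, np.array([[z]]))
```

      After the three readings, the position estimate settles near 3 and the velocity estimate near 1, as the constant-velocity data suggests.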

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% Main function
%% Function description: comprehensive comparison of the EKF, UKF, PF,
%% EPF and UPF algorithms
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
rand('seed',3);
randn('seed',6);
%error('For the parameter T below, refer to the value settings in the book, then delete this line of code')
T = 60;                      % number of time steps
R = 1e-5;                    % measurement-noise variance of the simulated system
g1 = 3; g2 = 2;              % parameters of the gamma-distributed process noise
X = zeros(1,T);              % true state
Z = zeros(1,T);              % measurements
processNoise = zeros(T,1);
measureNoise = zeros(T,1);
X(1) = 1;                    % initial true state
P0 = 3/4;                    % initial covariance

% EKF parameters
Qekf = 10*3/4;  Rekf = 1e-1;
Xekf = zeros(1,T);  Pekf = P0*ones(1,T);  Tekf = zeros(1,T);

% UKF parameters
Qukf = 2*3/4;   Rukf = 1e-1;
Xukf = zeros(1,T);  Pukf = P0*ones(1,T);  Tukf = zeros(1,T);

% particle filters
N = 200;                     % number of particles
Xpf  = zeros(1,T);  Xpfset  = ones(T,N);  Tpf  = zeros(1,T);
Xepf = zeros(1,T);  Xepfset = ones(T,N);  Pepf = P0*ones(T,N);  Tepf = zeros(1,T);
Xupf = zeros(1,T);  Xupfset = ones(T,N);  Pupf = P0*ones(T,N);  Tupf = zeros(1,T);

for t = 2:T
    % simulate the true system and its measurement
    processNoise(t) = gengamma(g1,g2);
    measureNoise(t) = sqrt(R)*randn;
    X(t) = feval('ffun',X(t-1),t) + processNoise(t);
    Z(t) = feval('hfun',X(t),t) + measureNoise(t);

    tic
    [Xekf(t),Pekf(t)] = ekf(Xekf(t-1),Z(t),Pekf(t-1),t,Qekf,Rekf);
    Tekf(t) = toc;

    tic
    [Xukf(t),Pukf(t)] = ukf(Xukf(t-1),Z(t),Pukf(t-1),Qukf,Rukf,t);
    Tukf(t) = toc;

    tic
    [Xpf(t),Xpfset(t,:)] = pf(Xpfset(t-1,:),Z(t),N,t,R,g1,g2);
    Tpf(t) = toc;

    tic
    [Xepf(t),Xepfset(t,:),Pepf(t,:)] = epf(Xepfset(t-1,:),Z(t),t,Pepf(t-1,:),N,R,Qekf,Rekf,g1,g2);
    Tepf(t) = toc;

    tic
    [Xupf(t),Xupfset(t,:),Pupf(t,:)] = upf(Xupfset(t-1,:),Z(t),t,Pupf(t-1,:),N,R,Qukf,Rukf,g1,g2);
    Tupf(t) = toc;
end

% absolute estimation errors
ErrorEkf = abs(Xekf-X);
ErrorUkf = abs(Xukf-X);
ErrorPf  = abs(Xpf-X);
ErrorEpf = abs(Xepf-X);
ErrorUpf = abs(Xupf-X);

figure
hold on; box on;
p1 = plot(1:T,X,'-k.','lineWidth',2);
%p2 = plot(1:T,Xekf,'m:','lineWidth',2);
%p3 = plot(1:T,Xukf,'--','lineWidth',2);
p4 = plot(1:T,Xpf,'-ro','lineWidth',2);
p5 = plot(1:T,Xepf,'-g*','lineWidth',2);
p6 = plot(1:T,Xupf,'-b^','lineWidth',2);
legend([p1,p4,p5,p6],'true state','PF estimation','EPF estimation','UPF estimation')
xlabel('Time','fontsize',10)
title('Filter estimates (posterior means) vs. True state','fontsize',10)

figure
hold on; box on;
p1 = plot(1:T,ErrorEkf,'-k.','lineWidth',2);
p2 = plot(1:T,ErrorUkf,'-m^','lineWidth',2);
p3 = plot(1:T,ErrorPf,'-ro','lineWidth',2);
p4 = plot(1:T,ErrorEpf,'-g*','lineWidth',2);
p5 = plot(1:T,ErrorUpf,'-bd','lineWidth',2);
legend([p1,p2,p3,p4,p5],'EKF deviation','UKF deviation','PF deviation','EPF deviation','UPF deviation')

figure
hold on; box on;
%p1 = plot(1:T,Tekf,'-k.','lineWidth',2);
%p2 = plot(1:T,Tukf,'-m^','lineWidth',2);
p3 = plot(1:T,Tpf,'-ro','lineWidth',2);
p4 = plot(1:T,Tepf+0.02,'k:o','lineWidth',2);
p5 = plot(1:T,Tupf,'-bo','lineWidth',2);
p6 = plot(1:T,Tepf+0.015,'-g*','lineWidth',2);
legend([p3,p4,p6,p5],'PF','DCS-UPF-X','DCS-UPF-Y','UPF')
xlabel('Time','fontsize',10)
ylabel('Single step running time','fontsize',10)
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%