The MATLAB Realization of the Particle Swarm Algorithm for Finding the Extremum of a Unary Function


The previous essay, which showed in detail how to use the particle swarm algorithm to solve the grouped knapsack problem, already covered the main idea of the algorithm. If you have mastered the grouped knapsack version, modifying it to find the extremum of a unary function is a piece of cake. Here is a copy of the general idea, summarized before, of using the particle swarm algorithm for the grouped knapsack problem:

  • Randomly generate a bunch of particles; each particle represents one state of the knapsack (which 3 items are selected). The initial particles are all taken as their own local optimal particles.
  • Calculate the fitness value of each particle (that is, the value, volume, and weight of the knapsack it represents), then select the non-inferior particles according to each particle's fitness.
  • Enter the iterative process. In each iteration, randomly select one of the non-inferior particles as the global optimal particle, calculate each particle's velocity according to the formula (which depends closely on the particle's local optimum and the global optimal particle), and let each particle move (that is, with some probability replace the items it represents).
  • If a particle after moving is better than it was before, replace its original local optimal particle with it.
  • Put the local optimal particles and the non-inferior particles together, then select the non-inferior particles again according to their quality.
  • Remove the repeated non-inferior particles and enter the next iteration.

Now make the following changes to the above idea so that it finds the minimum value of a unary function:

  • Randomly generate a bunch of particles; each particle represents an abscissa. The initial particles are all taken as their own local optimal particles.
  • Calculate the fitness value of each particle (that is, the ordinate it represents), then select the initial non-inferior particles according to each particle's fitness.
  • Enter the iterative process. In each iteration, randomly select one of the non-inferior particles as the global optimal particle, calculate each particle's velocity according to the formula (which depends closely on the particle's local optimum and the global optimal particle), and let each particle move (that is, its abscissa shifts left or right).
  • If a particle after moving is better than it was before, replace its original local optimal particle with it.
  • Put the local optimal particles and the non-inferior particles together, then select the non-inferior particles again according to their quality.
  • Remove the repeated non-inferior particles and enter the next iteration.

1: The extremum problem of a unary function

Suppose there are the following functions:

$y(x) = \sin(x^2) - 2\sin(x) + x\sin(x)$

The graph of this function is shown in the figure:

How do we find the minimum value of the function on the interval [-3, 9]? First, here is an animation showing how the particle swarm finds the minimum:

As the figure shows, we first generate 50 particles, represented by blue dots; the red dots are the non-inferior particles (the final minimum point must be among the non-inferior particles). Here there is always only one non-inferior particle, so it is the minimum point we are seeking. The positions of the particles change with each iteration, continually moving toward the minimum.

The following describes specifically how the particle swarm algorithm finds the extremum of a unary function. As in the previous essay, the complete MATLAB code is decomposed into seven parts.


2: The particle swarm algorithm for the extremum of a unary function

2.1 Input parameters, fixed parameter initialization

Note that the inertia weight values among the parameters can be changed to whatever you think appropriate; they affect the step length of each particle's movement. The larger wmax is, the larger the particles' steps at the beginning of the iterations; the smaller wmin is, the smaller the particles' steps at the end of the iterations.

clear, clc, close all;

%% input parameters, fixed parameter initialization
xsize = 50;               % number of particles
ITER = 50;                % number of iterations
c1 = 0.8; c2 = 0.8;       % constants
wmax = 1.0; wmin = 0.01;  % inertia weight related constants
v = zeros(1, xsize);      % particle velocity initialization

2.2 Initialization of particle swarm position, fitness, best position, and best fitness

Randomly generate the particle swarm $x$, where each entry represents the abscissa of a particle. Note that to generate uniformly distributed random numbers in $[a, b]$, use the expression a + (b - a) * rand(1, 1).
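For instance, here is a quick standalone check of this expression (the variable names are just for illustration; the output differs on each run):

a = -3; b = 9;                        % endpoints of this article's search interval
samples = a + (b - a) * rand(1, 5)    % five uniform samples in [a, b]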

The fitness is actually the ordinate of the particle swarm.

%% particle swarm position, fitness, best position, best fitness initialization
x = -3 + 12 * rand(1, xsize);  % random particle swarm positions (abscissas in [-3, 9])
y = zeros(1, xsize);           % particle swarm ordinates (fitness)
for i = 1 : xsize
    y(i) = sin(x(i).^2) - 2 * sin(x(i)) + x(i) * sin(x(i));
end
bestx = x;  % best positions of the particle swarm
besty = y;  % best fitness of the particle swarm

2.3 Initial screening of non-inferior solutions

The non-inferior solutions are screened for the first time here, and they are screened again after each iteration. The judgment condition is very important and can be adjusted to the constraints of the problem. Here, a particle is judged non-inferior if it meets the requirement at least as well as every other particle (that is, no other particle has a smaller ordinate).

%% initial screening of non-inferior solutions
cnt = 1;
for i = 1 : xsize
    fljflag = 1;
    for j = 1 : xsize
        if j ~= i
            if y(i) > y(j)  % i is an inferior solution
                fljflag = 0;
                break;
            end
        end
    end
    if fljflag == 1
        flj(cnt) = y(i);   % non-inferior solution fitness
        fljx(cnt) = x(i);  % non-inferior solution position
        cnt = cnt + 1;
    end
end
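As an aside, since this is single-objective minimization, the double loop above keeps exactly the particles attaining the minimum fitness. A minimal vectorized sketch of the same screening (my rewrite, assuming the x and y vectors from section 2.2, not the article's original code):

mask = (y == min(y));  % a particle is non-inferior iff no other particle has a smaller y
flj  = y(mask);        % non-inferior solution fitness
fljx = x(mask);        % non-inferior solution positions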

2.4 Particle motion calculation

The particle velocity is still calculated with the classic formula:

$v^{i+1} = w v^i + c_1 r_1 (p_{local}^i - x^i) + c_2 r_2 (p_{global} - x^i)$

where $w$ is the inertia weight, $c_1$ and $c_2$ are constants, $r_1$ and $r_2$ are uniformly distributed random numbers in $[0, 1]$, $p_{local}^i$ is the local optimal particle, and $p_{global}$ is the global optimal particle. Note that there is only one global optimum, so it carries no superscript $i$.

The value of the inertia weight is related to the iteration number; here we use the scheme $w = w_{max} - (w_{max} - w_{min}) \cdot niter / ITER$. How to compute the inertia weight is itself a hot topic in particle swarm research: when the inertia weight is large, particle velocities are high and positions change quickly, and a well-chosen inertia weight lets the particle swarm converge to the global optimum faster!
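To see what this schedule looks like, here is a small standalone sketch (parameter values copied from section 2.1) that plots the inertia weight against the iteration number:

ITER = 50; wmax = 1.0; wmin = 0.01;       % values from section 2.1
niter = 1 : ITER;
w = wmax - (wmax - wmin) * niter / ITER;  % linearly decreasing inertia weight
plot(niter, w); xlabel('iteration'); ylabel('w'); title('Inertia weight schedule');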

When using the velocity to update each particle's position, pay attention to one problem: do not let a particle's abscissa move outside our search interval $[-3, 9]$.

for niter = 1 : ITER  % the iteration starts, the particles start to move
    xx = -10 : 0.01 : 10;  % for drawing the function curve
    yy = sin(xx.^2) - 2 * sin(xx) + xx .* sin(xx);
    plot(xx, yy, 'k');
    xlim([-10, 10]); ylim([-8, 10]);
    title('Particle Swarm Results'); xlabel('x'); ylabel('y');
    hold on;

    rnd = randi(length(flj), 1, 1);
    gbest = fljx(rnd);  % global optimal particle, chosen at random from the non-inferior set

    %% particle motion calculation
    w = wmax - (wmax - wmin) * niter / ITER;  % inertia weight update
    r1 = rand(1, 1); r2 = rand(1, 1);  % uniformly distributed random values in [0, 1]
    for i = 1 : xsize
        v(i) = w * v(i) + c1 * r1 * (bestx(i) - x(i)) + c2 * r2 * (gbest - x(i));  % particle velocity
        x(i) = x(i) + v(i);
        if x(i) > 9 || x(i) < -3  % after moving, x is out of range: replace it with a new random number
            x(i) = -3 + 12 * rand(1, 1);
        end
    end
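Note that the loop above handles an out-of-range particle by replacing it with a new random position. An alternative strategy (not used in this article's code) is to clamp the particle to the nearest boundary; a minimal sketch with a hypothetical value:

xi = 9.7;                  % hypothetical out-of-range position
xi = min(max(xi, -3), 9)   % clamped into [-3, 9]; here it becomes 9

Re-randomizing keeps more exploration, while clamping keeps the particle near the boundary it overshot.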

2.5 Current particle swarm fitness, best position, best fitness

The particles have moved to new positions. Naturally, the new particles must be compared with the old ones: if a new particle's fitness is better (its ordinate is smaller), that particle's local optimal position is updated. Unlike the knapsack problem, which needed several judgment conditions, the judgment here is quite simple.

    %% current particle swarm fitness, best position, best fitness
    y_cur = zeros(1, xsize);
    for i = 1 : xsize
        y_cur(i) = sin(x(i).^2) - 2 * sin(x(i)) + x(i) * sin(x(i));
    end
    for i = 1 : xsize
        if y_cur(i) < y(i)  % if the current particle fitness is better
            bestx(i) = x(i);      % update the particle's best position
            besty(i) = y_cur(i);  % update the particle's best fitness
        end
    end
    y = y_cur;  % the new particles become the old particles
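For reference, the elementwise comparison above can also be written with logical indexing; a minimal equivalent sketch, assuming the same x, y, y_cur, bestx, and besty variables:

better = y_cur < y;             % particles whose new position is fitter
bestx(better) = x(better);      % update those particles' best positions
besty(better) = y_cur(better);  % update those particles' best fitness
y = y_cur;                      % the new particles become the old particles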

2.6 Combine the best position and best fitness of the particle swarm and then select non-inferior solutions

This screening is basically the same as the initial non-inferior solution screening, except that a merge operation is added first: the local best particles are merged with the current non-inferior solution particles, and then a new batch of non-inferior solution particles is filtered out.

    %% combine the swarm's best positions and best fitness with the non-inferior set, then screen again
    bestxmerge = [bestx, fljx];
    ymerge = [besty, flj];
    n = length(flj);
    flj = []; fljx = [];
    cnt = 1;
    for i = 1 : xsize + n
        fljflag = 1;
        for j = 1 : xsize + n
            if j ~= i
                if ymerge(i) > ymerge(j)  % i is an inferior solution
                    fljflag = 0;
                    break;
                end
            end
        end
        if fljflag == 1
            flj(cnt) = ymerge(i);       % non-inferior solution fitness
            fljx(cnt) = bestxmerge(i);  % non-inferior solution position
            cnt = cnt + 1;
        end
    end

2.7 Remove repeated non-inferior solutions

This step is also very important, and there are many ways to implement it; just pick one to remove the repeated non-inferior solutions.

    %% remove repeated non-inferior solutions
    issame = zeros(cnt - 1, 1);
    for i = 1 : cnt - 1
        for j = i + 1 : cnt - 1
            if ~issame(j)
                issame(j) = (abs(fljx(j) - fljx(i)) < 0.0001);
            end
        end
    end
    flj(find(issame == 1)) = [];
    fljx(find(issame == 1)) = [];

    plot(bestxmerge, ymerge, 'bo');  % plot the swarm's best positions
    plot(fljx, flj, 'ro');           % plot the non-inferior positions
    pause(0.5);
    hold off;
end
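As an alternative for this step, newer MATLAB releases (R2015a and later) provide uniquetol, which can replace the manual tolerance loop; a sketch assuming the flj and fljx vectors from the code above (note that uniquetol returns its output sorted, unlike the loop, which preserves insertion order):

% 'DataScale', 1 makes the 0.0001 tolerance absolute, matching the loop above
[fljx, ia] = uniquetol(fljx, 0.0001, 'DataScale', 1);
flj = flj(ia);  % keep the fitness values of the retained positions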